CN116764236A - Game prop recommending method, game prop recommending device, computer equipment and storage medium


Info

Publication number: CN116764236A
Application number: CN202210230129.6A
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: prop, feature, game, test sample, target
Inventor: 林文清
Applicant and current assignee: Shenzhen Tencent Computer Systems Co Ltd
Legal status: Pending
Classification: Information Retrieval, Db Structures And Fs Structures Therefor

Abstract

The application relates to a game prop recommendation method and apparatus, a computer device, and a storage medium, and involves artificial intelligence. The method comprises: when it is detected that the real-time game progress of the current game instance meets a prop recommendation condition, obtaining a prop recommendation model, where the model is trained on target features meeting a feature importance requirement and each target feature is determined according to prediction labels output by a label prediction model; extracting target player features from the player attribute data, target interaction features from the game progress data, and target prop features from the game prop attribute data of the current game instance; inputting the target player features, target interaction features and target prop features into the prop recommendation model to output game recommendation props; and displaying the game recommendation props on the game interface of the current game instance. By adopting the method, the feature dimensions required for model training can be reduced and feature noise removed, which improves the prediction results of the prop recommendation model and, in turn, the recommendation success rate of game props.

Description

Game prop recommending method, game prop recommending device, computer equipment and storage medium
Technical Field
The application relates to the technical field of artificial intelligence, and in particular to a game prop recommendation method and device, a computer device, and a storage medium.
Background
With the development of artificial intelligence technology and the gradual popularization of online game services, game application programs and application platforms need to recommend and sell game props to game users in order to generate revenue. To ensure that game props are successfully recommended to game users and to improve the purchase rate of game props, it is necessary to determine the props a game user is most willing to buy and recommend those props to the user.
In the conventional technology, the historical purchase records of game users are obtained, the users are classified by consumption capability based on those records, and the prop purchase preference of each class of user is determined. For example, when game users are determined to belong to a high-consumption user class, the prop purchase data of all users in that class are obtained, the prop purchase preferences are ranked, and the one or more game props with the highest-ranked purchase preference are determined and recommended to the game users in that class.
However, this conventional analysis based on historical purchase records considers only the consumption capability of the game user and processes only existing historical data. It cannot follow the game progress in real time or account for what the user actually needs at the current point in the game; for example, it may recommend props the user has already purchased many times and will not purchase again. The conventional preference-ranking approach to prop recommendation therefore still suffers from a low recommendation success rate and low platform revenue.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a game prop recommendation method, apparatus, computer device, and storage medium that can improve the recommendation success rate of game props and platform revenue.
In a first aspect, the application provides a game prop recommendation method. The method comprises the following steps:
when it is detected that the real-time game progress of the current game instance meets a prop recommendation condition, obtaining a prop recommendation model, wherein the prop recommendation model is trained on target features meeting a feature importance requirement, and each target feature is determined according to a prediction label output by a label prediction model;
collecting game progress data, game prop attribute data and player attribute data of a current game instance;
extracting target player characteristics from the player attribute data, extracting target interaction characteristics from the game progress data, and extracting target prop characteristics from the game prop attribute data;
and determining the target player characteristics, the target prop characteristics and the target interaction characteristics as input data of the prop recommendation model, outputting game recommendation props based on the prop recommendation model, and displaying the game recommendation props on a game interface of the current game instance.
In one embodiment, after training to obtain the prop recommendation model according to each target feature and the corresponding feature value, the method further includes:
acquiring feature labels marked in advance for each target feature, and determining a preset recommended prop according to the feature labels marked in advance;
determining a training loss value according to the preset recommended props and the game recommended props;
and determining the prediction accuracy of the prop recommendation model based on the training loss value.
In a second aspect, the application further provides a game prop recommending device. The device comprises:
the prop recommendation model acquisition module is used for acquiring a prop recommendation model when it is detected that the real-time game progress of the current game instance meets a prop recommendation condition, wherein the prop recommendation model is trained on target features meeting a feature importance requirement, and each target feature is determined according to a prediction label output by a label prediction model;
the data acquisition module is used for acquiring game progress data, game prop attribute data and player attribute data of the current game instance;
the target feature extraction module is used for extracting target player features from the player attribute data, extracting target interaction features from the game progress data and extracting target prop features from the game prop attribute data;
and the game recommendation prop display module is used for determining the target player characteristics, the target prop characteristics and the target interaction characteristics as input data of the prop recommendation model, outputting game recommendation props based on the prop recommendation model, and displaying the game recommendation props on a game interface of the current game instance.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which when executing the computer program performs the steps of:
when it is detected that the real-time game progress of the current game instance meets a prop recommendation condition, obtaining a prop recommendation model, wherein the prop recommendation model is trained on target features meeting a feature importance requirement, and each target feature is determined according to a prediction label output by a label prediction model;
collecting game progress data, game prop attribute data and player attribute data of a current game instance;
extracting target player characteristics from the player attribute data, extracting target interaction characteristics from the game progress data, and extracting target prop characteristics from the game prop attribute data;
and determining the target player characteristics, the target prop characteristics and the target interaction characteristics as input data of the prop recommendation model, outputting game recommendation props based on the prop recommendation model, and displaying the game recommendation props on a game interface of the current game instance.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
when it is detected that the real-time game progress of the current game instance meets a prop recommendation condition, obtaining a prop recommendation model, wherein the prop recommendation model is trained on target features meeting a feature importance requirement, and each target feature is determined according to a prediction label output by a label prediction model;
collecting game progress data, game prop attribute data and player attribute data of a current game instance;
extracting target player characteristics from the player attribute data, extracting target interaction characteristics from the game progress data, and extracting target prop characteristics from the game prop attribute data;
and determining the target player characteristics, the target prop characteristics and the target interaction characteristics as input data of the prop recommendation model, outputting game recommendation props based on the prop recommendation model, and displaying the game recommendation props on a game interface of the current game instance.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of:
when it is detected that the real-time game progress of the current game instance meets a prop recommendation condition, obtaining a prop recommendation model, wherein the prop recommendation model is trained on target features meeting a feature importance requirement, and each target feature is determined according to a prediction label output by a label prediction model;
collecting game progress data, game prop attribute data and player attribute data of a current game instance;
extracting target player characteristics from the player attribute data, extracting target interaction characteristics from the game progress data, and extracting target prop characteristics from the game prop attribute data;
and determining the target player characteristics, the target prop characteristics and the target interaction characteristics as input data of the prop recommendation model, outputting game recommendation props based on the prop recommendation model, and displaying the game recommendation props on a game interface of the current game instance.
In the above game prop recommendation method, apparatus, computer device and storage medium, when it is detected that the real-time game progress of the current game instance meets the prop recommendation condition, a prop recommendation model is obtained; the prop recommendation model is trained on target features meeting the feature importance requirement, and the target features are determined according to the prediction labels output by the label prediction model. The game progress data, game prop attribute data and player attribute data of the current game instance are collected; target player features are extracted from the player attribute data, target interaction features from the game progress data, and target prop features from the game prop attribute data. The target player features, target prop features and target interaction features are then determined as input data of the prop recommendation model, game recommendation props are output based on the prop recommendation model, and they are displayed on the game interface of the current game instance. Because the target features are selected based on feature importance and the prop recommendation model is trained on those target features rather than on all features, the feature dimensions required for training are reduced and feature noise is removed. This improves the prediction results of the trained prop recommendation model and makes the output game recommendation props better match the actual needs of the current game progress, which further improves the recommendation success rate of game props and increases platform revenue.
Drawings
FIG. 1 is a diagram of an application environment of a game prop recommendation method in one embodiment;
FIG. 2 is a flow chart of a game prop recommendation method in one embodiment;
FIG. 3 is a schematic diagram of a prop recommendation interface of a game prop recommendation method in one embodiment;
FIG. 4 is a flow chart of training a prop recommendation model with target features meeting the feature importance requirement in one embodiment;
FIG. 5 is a schematic diagram of the feature importance calculation process in one embodiment;
FIG. 6 is a schematic diagram of the feature quantity optimization effect of a prop recommendation model in one embodiment;
FIG. 7 is a flow chart of outputting prediction labels corresponding to test sample features in one embodiment;
FIG. 8 is a flow chart of determining the value range corresponding to each test sample feature in one embodiment;
FIG. 9 is a flow chart of a game prop recommendation method in another embodiment;
FIG. 10 is a block diagram of a game prop recommendation device in one embodiment;
FIG. 11 is an internal block diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The embodiment of the application provides a game prop recommendation method that involves artificial intelligence technology. Artificial intelligence (AI) is the theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can react in a way similar to human intelligence. Artificial intelligence is thus the study of the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision-making. Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, at both the hardware level and the software level. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
Machine learning (ML) is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory and other disciplines. It specializes in studying how a computer simulates or implements human learning behaviour to acquire new knowledge or skills, and reorganizes existing knowledge structures to continuously improve its own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and it is applied throughout all areas of artificial intelligence. Machine learning and deep learning typically include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from demonstration.
With the research and advancement of artificial intelligence technology, it has been researched and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, autonomous driving, unmanned aerial vehicles, robots, smart healthcare, and smart customer service. It is believed that with the development of technology, artificial intelligence will be applied in ever more fields and realize increasingly important value.
The game prop recommendation method provided by the embodiments of the application involves machine learning and other artificial intelligence technologies, and can be applied in the application environment shown in FIG. 1, where the terminal 102 communicates with the server 104 via a network. A data storage system may store the data that the server 104 needs to process; it may be integrated on the server 104, or placed on a cloud or other network server. When the server 104 detects that the real-time game progress of the current game instance meets the prop recommendation condition, it acquires the prop recommendation model and collects the game progress data, game prop attribute data and player attribute data of the current game instance. These data may be stored in the local storage of the terminal 102, or in the data storage system corresponding to the server. The server 104 may then extract target player features from the player attribute data, target interaction features from the game progress data, and target prop features from the game prop attribute data. The prop recommendation model is trained on target features meeting the feature importance requirement, and the target features are determined according to the prediction labels output by the label prediction model. The target player features, target prop features and target interaction features are then determined as input data of the prop recommendation model, the game recommendation props are output based on the model, and they are displayed on the game interface of the terminal 102 for the player to view. The terminal 102 may be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer, an internet-of-things device or a portable wearable device; the internet-of-things device may be a smart speaker, a smart television, a smart air conditioner, a smart vehicle-mounted device, or the like, and the portable wearable device may be a smart watch, a smart bracelet, a head-mounted device, or the like. The server 104 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, big data and artificial intelligence platforms.
In one embodiment, as shown in FIG. 2, a game prop recommendation method is provided. Taking its application to the server in FIG. 1 as an example, the method includes the following steps:
Step S202, when it is detected that the real-time game progress of the current game instance meets the prop recommendation condition, a prop recommendation model is obtained; the prop recommendation model is trained on target features meeting the feature importance requirement, and each target feature is determined according to prediction labels output by the label prediction model.
Specifically, the real-time game progress of the current game instance is acquired, and whether it meets the prop recommendation condition is judged. When the real-time game progress meets the prop recommendation condition, a trained prop recommendation model is obtained, so that recommended props can be output and displayed through the prop recommendation model for the game player to view or purchase.
Further, judging whether the prop recommendation condition is met can be implemented by judging whether the real-time game progress of the current game instance has entered a prop recommendation scene, such as a lucky airdrop scene or a material return scene in the game instance. When a game instance enters one of these scenes, the real-time game progress of the current game instance meets the prop recommendation condition; in that case, a prop recommendation purchase page is popped up on the game interface, and the recommended props determined by the prop recommendation model are displayed for the game player to view and purchase.
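As an illustration of this trigger, the following is a minimal sketch; the scene identifiers and function names are assumptions for illustration, not names from this disclosure.

```python
# Minimal sketch of the prop recommendation trigger described above.
# Scene identifiers are illustrative assumptions.

RECOMMENDATION_SCENES = {"lucky_airdrop", "material_return"}

def meets_recommendation_condition(current_scene: str) -> bool:
    """Return True when the real-time game progress satisfies the
    prop recommendation condition, i.e. a recommendation scene was entered."""
    return current_scene in RECOMMENDATION_SCENES

# Usage: evaluated on each progress update of the current game instance.
if meets_recommendation_condition("lucky_airdrop"):
    print("pop up the prop recommendation purchase page")
```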
In one embodiment, the method for obtaining the prop recommendation model by training the target features meeting the feature importance requirement comprises the following steps:
collecting training sample characteristics, and training to obtain a label prediction model according to the training sample characteristics and corresponding sample labels; collecting test sample characteristics, and outputting a prediction label corresponding to each test sample characteristic based on a label prediction model; determining target features meeting the feature importance requirements based on each prediction tag; and training to obtain a prop recommendation model according to each target feature and the corresponding feature value.
Specifically, the training sample features are obtained by collecting player features, prop features and interaction features and stitching them together, and a sample label is labeled in advance for each training sample feature. The pre-labeled sample labels include a purchased-prop label and a non-purchased-prop label: for example, if game player A purchases prop a, the sample label corresponding to game player A is 1, i.e. the purchased-prop label; conversely, if game player A does not purchase prop a, the sample label corresponding to game player A is 0, i.e. the non-purchased-prop label.
The stitching of the player features, prop features and interaction features can be implemented with a vector concatenation operator. Let the player feature be X_i^(U), the prop feature X_i^(I) and the interaction feature X_i^(Q); the stitched training sample feature X_i is then represented by the following formula (1):

X_i = X_i^(U) ⊕ X_i^(I) ⊕ X_i^(Q)    (1)

where ⊕ is the operator that concatenates vectors, and s, k and t are the dimensions of the player, prop and interaction feature vectors, respectively.
Further, an initial neural network model can be trained according to each training sample feature and the corresponding sample label to obtain the trained label prediction model. After the label prediction model is obtained, it needs to be tested in order to determine the target features that have a greater influence on the model, i.e. the target features meeting the feature importance requirement. The initial neural network model is then trained again, according to the target features meeting the feature importance requirement and the feature values corresponding to those target features, to obtain the trained prop recommendation model.
Step S204, collecting game progress data, game prop attribute data and player attribute data of the current game instance.
Specifically, after the trained prop recommendation model is obtained, the game progress data, game prop attribute data and player attribute data of the current game instance are further collected. The game progress data include the current progress of the game instance, such as whether it is in an animation stage or a gameplay operation stage and whether a lucky airdrop scene or a material return scene has currently been entered, as well as interaction features between the game player and props, such as whether the player has used or viewed a prop, or the number of times the player has used a certain prop in this instance. The interaction features do not include the player's purchase behaviour toward props.
Further, the game prop attribute data include prop attributes and purchase features, wherein the prop attributes include the prop type, prop function, prop price and the like, and the purchase features include the purchase quantity of an individual prop, its purchase ranking among all props, and the like.
Similarly, the player attribute data include account features, active features, payment features, social features and the like. The account features correspond to the personal information of the game player, including game registration time, gender, login device, login channel and the like; the active features include the player's online duration, online time period, game level, game time, number of games played, game win rate, game mode preference, game map preference and the like; the payment features include the player's individual payment amounts, number of payments, first payment time, maximum single-payment amount and the like; and the social features include the player's number of game friends, number of chats, number of shares, number of gifts given, number of teams formed and the like.
The login channel in the account features may be one of the different communication software accounts or application accounts associated with the current game instance; the online time period in the active features may include morning, noon, afternoon, evening, early morning, weekday, weekend and the like; the game mode preference may be a matching mode or a ranked mode; and the map preference may include different map types such as a desert map, a city map or a rainforest map.
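The feature families described above might be organized as follows; this is an illustrative sketch only, and every field name and value is an assumption rather than the schema used by this application.

```python
# Illustrative grouping of the attribute data into the three feature
# families; all field names are assumptions, not the original schema.

player_attributes = {
    "registration_time": "2021-06-01",  # account features
    "login_channel": "app_store",
    "online_hours": 120.5,              # active features
    "win_rate": 0.54,
    "total_payment": 88.0,              # payment features
    "friend_count": 35,                 # social features
}

prop_attributes = {
    "prop_type": "skin",   # prop attributes
    "price": 6.0,
    "units_sold": 10234,   # purchase features
}

interaction_data = {
    "viewed_prop": True,          # purchase behaviour is excluded here
    "times_used_in_instance": 2,
}
```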
Step S206, extracting target player characteristics from the player attribute data, extracting target interaction characteristics from the game progress data, and extracting target prop characteristics from the game prop attribute data.
Specifically, the target features meeting the feature importance requirement include target player features, target interaction features and target prop features, and the prop recommendation model is trained according to the target player features, target interaction features and target prop features. When the prop recommendation model is used to recommend props, the target player features need to be extracted from the collected player attribute data, the target interaction features from the game progress data, and the target prop features from the game prop attribute data; the extracted target features are further used as input data of the prop recommendation model, so that game recommendation props are output based on the trained prop recommendation model.
Step S208, determining the target player characteristics, the target prop characteristics and the target interaction characteristics as input data of a prop recommendation model, outputting game recommendation props based on the prop recommendation model, and displaying the game recommendation props on a game interface of the current game instance.
Specifically, when the prop recommendation model is used for prop recommendation, the target player features are extracted from the collected player attribute data, the target interaction features from the game progress data, and the target prop features from the game prop attribute data, so that the target player features, target prop features and target interaction features are determined as input data of the prop recommendation model, and the game recommendation props corresponding to this input data can be output based on the prop recommendation model.
After the game recommendation props are determined, when the game instance enters a lucky airdrop scene, a prop recommendation purchase page is popped up on the game interface of the current game instance, and the game recommendation props are displayed on that page.
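A minimal sketch of this inference step follows; `predict_purchase_probability` merely stands in for the trained prop recommendation model, and all names and toy values are assumptions for illustration.

```python
import numpy as np

# Sketch: score each candidate prop with the (stand-in) trained prop
# recommendation model and keep the highest-scoring ones for display.

def predict_purchase_probability(x: np.ndarray) -> float:
    # Placeholder scorer; in practice this is the trained model.
    return float(1.0 / (1.0 + np.exp(-x.sum())))

def recommend_props(player_feat, interaction_feat, prop_pool, top_k=3):
    """Rank candidate props by predicted purchase probability."""
    scored = []
    for prop_id, prop_feat in prop_pool.items():
        x = np.concatenate([player_feat, interaction_feat, prop_feat])
        scored.append((predict_purchase_probability(x), prop_id))
    scored.sort(reverse=True)
    return [prop_id for _, prop_id in scored[:top_k]]

pool = {"skin_a": np.array([0.5]), "medkit": np.array([-0.2]), "tool": np.array([0.1])}
print(recommend_props(np.array([0.3]), np.array([0.7]), pool))
```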
In one embodiment, as shown in FIG. 3, a prop recommendation interface of the game prop recommendation method is provided. Referring to FIG. 3, when a lucky airdrop scene is entered in a game instance, the real-time game progress of the current game instance meets the prop recommendation condition, and the prop recommendation purchase page shown in FIG. 3 is then popped up on the game interface of the current game instance.
The game recommendation props output by the trained prop recommendation model are displayed on the prop recommendation purchase page and include different props such as clothing, tools, medicine or game skins; for example, 3 recommendation props are determined from the game prop pool and displayed. Meanwhile, a corresponding specified purchase duration is set for the lucky airdrop scene of the game instance, i.e. within that specified purchase duration the user can view and purchase the displayed game recommendation props.
In one embodiment, for a game instance in a lucky airdrop scene, the prop recommendation model used for prop recommendation is obtained by ranking features by importance to obtain a feature importance sequence, selecting the first 200 features in the sequence, and performing model training on the selected 200 features. This effectively reduces the feature data used in the model and improves the prop recommendation success rate, which is concretely embodied in an improved purchase rate of game players; for example, the purchase rate is improved by 8% compared with the previous recommendation model.
In the above game prop recommendation method, when it is detected that the real-time game progress of the current game instance meets the prop recommendation condition, a prop recommendation model is obtained; the prop recommendation model is trained on target features meeting the feature importance requirement, and the target features are determined according to the prediction labels output by the label prediction model. The game progress data, game prop attribute data and player attribute data of the current game instance are collected; target player features are extracted from the player attribute data, target interaction features from the game progress data, and target prop features from the game prop attribute data. The target player features, target prop features and target interaction features are then determined as input data of the prop recommendation model, game recommendation props are output based on the prop recommendation model, and they are displayed on the game interface of the current game instance. Because the target features are selected based on feature importance and the prop recommendation model is trained on those target features rather than on all features, the feature dimensions required for training are reduced and feature noise is removed. This improves the prediction results of the trained prop recommendation model and makes the output game recommendation props better match the actual needs of the current game progress, which further improves the recommendation success rate of game props and increases platform revenue.
In one embodiment, as shown in fig. 4, the step of training to obtain a prop recommendation model by using the target features meeting the feature importance requirement specifically includes:
step S402, acquiring training sample characteristics, and training to obtain a label prediction model according to each training sample characteristic and a corresponding sample label.
Specifically, training data samples are collected, and the training sample feature and the corresponding first feature value of each training data sample are obtained, yielding a training sample feature set; a corresponding sample label is then labeled for each training sample feature. A label prediction model can thus be trained according to the training sample feature set and the sample labels corresponding to the training sample features.
The training sample features are obtained by feature stitching of the player features, prop features and interaction features. For example, if n training data samples are collected, then for each training data sample O_i, where 1 ≤ i ≤ n, the corresponding training sample feature is X_i = (x_{i,1}, ..., x_{i,m}), and the sample label corresponding to the training sample feature X_i is y_i.
Further, the label prediction model f(X_i) is trained according to the training sample features X_i corresponding to the training data samples O_i, the first feature value corresponding to each training sample feature X_i, and the sample labels y_i. Training the label prediction model f(X_i) means fitting the sample labels from the sample features and outputting prediction labels f(X_i); the optimization objective of the label prediction model is to minimize the difference between f(X_i) and y_i, i.e. min ||f(X_i) − y_i||.
In one embodiment, the trained label prediction model is specifically represented by the following formula (2):

f(X_i) = σ(W^T · X_i + b)    (2)

where σ represents an activation function, such as the sigmoid function σ(z) = 1 / (1 + e^(−z)); W^T represents the transpose of the model parameter vector W; and b represents the model bias parameter. W and b are common parameters of machine learning models.
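A minimal numpy sketch of formula (2) follows, training the logistic model f(X_i) = σ(W^T·X_i + b) by gradient descent on synthetic data; the data, learning rate and iteration count are assumptions for illustration, not the training setup of this application.

```python
import numpy as np

# Sketch of formula (2): a logistic label prediction model fitted so
# that f(X_i) approximates the sample labels y_i.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                      # training sample features X_i
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)    # sample labels y_i (1 = purchased)

W = np.zeros(8)
b = 0.0
lr = 0.1
for _ in range(500):                  # minimise the gap between f(X_i) and y_i
    p = sigmoid(X @ W + b)            # prediction labels f(X_i)
    grad_w = X.T @ (p - y) / len(y)   # cross-entropy gradient w.r.t. W
    grad_b = float(np.mean(p - y))    # gradient w.r.t. b
    W -= lr * grad_w
    b -= lr * grad_b

print("training accuracy:", np.mean((sigmoid(X @ W + b) > 0.5) == y))
```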
Step S404, collecting test sample characteristics and outputting predicted labels corresponding to the test sample characteristics based on the label prediction model.
Specifically, after the label prediction model is trained, it needs to be tested to judge whether it can be used for label prediction on data features; test data samples are therefore collected, and the test sample feature corresponding to each test data sample is obtained. The test sample features are likewise obtained by feature stitching of player features, prop features and interaction features.
While the test sample features are collected, the second feature value corresponding to each test sample feature is also obtained; each test sample feature and the corresponding second feature value are determined as input data of the label prediction model, and the corresponding first prediction label f(X_i) is output based on the label prediction model.
Further, since the target features need to be determined, that is, it must be determined which features have a greater influence on the prediction effect of the label prediction model, a preset quantity of test sample features is randomly selected from the test sample features, and their feature values are replaced to obtain updated test sample features; the updated test sample features and the corresponding second feature values are further determined as input data of the label prediction model, so that the second prediction labels corresponding to the updated test sample features are output based on the label prediction model.
Specifically, a preset quantity of test data samples can be randomly drawn, and the feature value of each selected test sample feature x_{i,j} is replaced to form a new test sample feature X̃_i. The new test sample features and their corresponding feature values are input into the label prediction model, and the second prediction labels f(X̃_i) corresponding to the updated test sample features are output.
Step S406, determining the target features meeting the feature importance requirements based on the prediction labels.
Specifically, the prediction difference value corresponding to each test sample feature is determined based on the first prediction label and the second prediction label, feature importance calculation is performed on the prediction difference values, and the feature importance corresponding to each test sample feature is determined. The sample features are then screened according to the preset feature importance requirement, and the target features whose feature importance meets the feature importance requirement are determined.
The first prediction label and the second prediction label are used to determine the prediction difference value corresponding to each sample feature. Specifically, the absolute value of the difference between the first prediction label and the second prediction label, i.e. the prediction difference value |f(X_i) − f(X̃_i)|, needs to be calculated; feature importance calculation is then performed on the prediction difference values, and the feature importance corresponding to each test sample feature is determined.
Specifically, the average value or the variance of the prediction difference values can be calculated as a measure of the degree to which a test sample feature influences the prediction results of the label prediction model, which is understood as the feature importance of that test sample feature.
For example, the average of the prediction difference values can be used as the feature importance of a test sample feature, i.e. the feature importance p is calculated with the following formula (3):

p = (1/n) · Σ_{i=1}^{n} |f(X_i) − f(X̃_i)|    (3)

where n is the number of test data samples.
In one embodiment, as shown in FIG. 5, a feature importance calculation process is provided. Referring to FIG. 5, the first prediction labels f(X_i) are output based on the collected test sample features, including X_1, X_2, X_3, ..., X_n, and the label prediction model. The collected test sample features are randomly drawn and their feature values replaced to obtain updated test sample features; for example, replacing the feature values of X_1 yields X_1′, so that the updated test sample features are X_1′, X_2, X_3, ..., X_n. The second prediction labels can then be output according to the updated test sample features and the label prediction model.
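The calculation process of FIG. 5 can be sketched as follows. Here the replacement values for feature j are drawn by permuting that feature's own observed values, which is one way of drawing replacements from its value range; `model_predict` stands in for the trained label prediction model, and all names are assumptions for illustration.

```python
import numpy as np

# Sketch of the feature importance computation of formula (3): for each
# feature j, replace its values, re-predict, and average the absolute
# change between the first and second prediction labels.

def feature_importance(model_predict, X_test, rng=None):
    rng = rng or np.random.default_rng(0)
    base = model_predict(X_test)                # first prediction labels f(X_i)
    n, m = X_test.shape
    importance = np.empty(m)
    for j in range(m):
        X_mod = X_test.copy()
        # replacement values drawn from the feature's own observed values
        X_mod[:, j] = rng.permutation(X_test[:, j])
        second = model_predict(X_mod)           # second prediction labels f(X~_i)
        importance[j] = np.mean(np.abs(base - second))   # formula (3)
    return importance

def select_target_features(importance, keep_fraction=0.7):
    """Keep, e.g., the top 70% of features by importance ranking."""
    k = max(1, int(len(importance) * keep_fraction))
    return np.argsort(importance)[::-1][:k]
```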
In one embodiment, after the feature importance corresponding to each test sample feature is determined, the preset feature importance requirement is acquired, the sample features are screened according to the preset feature importance requirement, and the target features whose feature importance meets the requirement are determined.
Specifically, the preset feature importance requirement may be that, after the feature importances are ranked by magnitude, a preset number of top-ranked features are extracted as the target features. For example, the features ranked in the top 60% to 85% of the feature importance ranking from largest to smallest are selected as target features; considering that the label prediction model should achieve a better prediction effect, the features corresponding to the top 70% of the feature importance ranking may further be selected as the target features.
Step S408, training to obtain a prop recommendation model according to each target feature and the corresponding feature value.
Specifically, training the initial neural network model again according to the determined target features meeting the feature importance requirements and feature values corresponding to the target features to obtain a trained prop recommendation model.
In this embodiment, training sample features are collected and a label prediction model is trained according to each training sample feature and the corresponding sample label; test sample features are collected and the prediction label corresponding to each test sample feature is output based on the label prediction model; the target features meeting the feature importance requirement are then determined based on the prediction labels, and the prop recommendation model is trained according to the target features and the corresponding feature values. Training and prediction with the initial neural network model, followed by secondary training on the target features, means that the final prop recommendation model is trained on the selected target features rather than on all features, which reduces the feature dimensions required for training, removes feature noise, and improves the prediction results of the trained prop recommendation model.
In one embodiment, after training to obtain the prop recommendation model according to each target feature and the corresponding feature value, the method further comprises:
acquiring feature labels marked in advance for each target feature, and determining a preset recommended prop according to the feature labels marked in advance; determining a training loss value according to a preset recommended prop and a game recommended prop; and determining the prediction accuracy of the prop recommendation model based on the training loss value.
Specifically, the feature labels labeled in advance for each target feature are acquired, and a preset recommended prop is determined according to the pre-labeled feature labels; the preset recommended prop can be understood as the true value, while the game recommendation prop produced by the prop recommendation model can be understood as the predicted value. A training loss value is then calculated from the true value and the predicted value, the prediction effect of the prop recommendation model is measured according to the training loss value, and the prediction accuracy of the prop recommendation model is determined.
Further, the training loss value loss may be calculated by formula (4), where y_i is the true value, i.e. the preset recommended prop, and y'_i is the predicted value, i.e. the game recommendation prop derived based on the prop recommendation model.
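The body of formula (4) is not reproduced in this text. For the binary purchased/not-purchased labels used here, one standard instantiation consistent with the surrounding definitions is the binary cross-entropy loss shown below; this is an assumed form, not necessarily the verbatim formula (4).

```latex
% Assumed instantiation of the training loss for binary labels;
% y_i is the true value and y'_i is the predicted value.
\mathrm{loss} = -\frac{1}{n}\sum_{i=1}^{n}\left[\, y_i \log y'_i + (1 - y_i)\log\left(1 - y'_i\right) \right]
```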
In one embodiment, as shown in FIG. 6, a schematic view of the feature quantity optimization effect of the prop recommendation model is provided. As FIG. 6 shows, an original ensemble model or random forest model is trained, tested and trained a second time to obtain the trained ensemble or random forest model; the number of screened target features used for the secondary training is smaller than the number of features that ordinary training has to consider, which improves the training efficiency of the model and shortens the training time. Meanwhile, the prop recommendation success rate of the ensemble model and random forest model trained on the screened target features is improved.
In this embodiment, the feature labels labeled in advance for each target feature are acquired, the preset recommended prop is determined according to the pre-labeled feature labels, the training loss value is then determined according to the preset recommended prop and the game recommendation prop, and the prediction accuracy of the prop recommendation model is determined based on the training loss value. Because the prediction effect of the prop recommendation model is measured by the training loss value, the model can be adjusted or retrained in time when the prediction accuracy does not meet the requirement, so as to improve the recommendation success rate of the deployed prop recommendation model.
In one embodiment, as shown in fig. 7, the step of outputting the prediction label corresponding to each test sample feature, that is, collecting the test sample feature, and outputting the prediction label corresponding to each test sample feature based on the label prediction model specifically includes:
step S702, collecting test sample characteristics, obtaining second characteristic values corresponding to the test sample characteristics, and performing characteristic splicing on the test sample characteristics according to the collected player characteristics, prop characteristics and interaction characteristics.
Specifically, after the label prediction model is trained, it needs to be tested to determine whether it can be used for label prediction on data features, and the test sample features therefore need to be obtained.
The second feature value corresponding to each test sample feature is acquired at the same time as the test sample features are collected.
Step S704, determining each test sample feature and the corresponding second feature value as input data of the label prediction model, and outputting the corresponding first prediction label based on the label prediction model.
Specifically, each test sample feature and the corresponding second feature value are determined as input data of a label prediction model, each test sample feature and the corresponding second feature value are input into the trained label prediction model, and the corresponding first prediction label is output.
Step S706, randomly selecting a preset amount of test sample characteristics from the test sample characteristics, and replacing the characteristic values of the test sample characteristics to obtain updated test sample characteristics.
Specifically, since the target features need to be determined, that is, it must be determined which features have a greater influence on the prediction effect of the label prediction model, a preset quantity of test sample features needs to be randomly selected from the test sample features, and their feature values are replaced, so that updated test sample features are obtained.
Further, it is specifically necessary to randomly select a preset quantity of test sample features from the test sample features and to determine the value range corresponding to each test sample feature according to the second feature value of each test sample feature. The replacement feature value corresponding to each selected test sample feature is then determined from the value range, and the feature value of the test sample feature is replaced with the replacement feature value to obtain the updated test sample feature. The replacement feature value corresponding to a test sample feature may be determined at random from the value range.
Step S708, outputting a second prediction label corresponding to each updated test sample feature based on the label prediction model.
Specifically, each updated test sample feature and the corresponding replacement feature value are determined as input data of the label prediction model, and the second prediction label corresponding to each updated test sample feature can then be output through the label prediction model.
In this embodiment, the test sample features are collected, the second feature value corresponding to each test sample feature is obtained, each test sample feature and the corresponding second feature value are determined as input data of the label prediction model, and the corresponding first prediction labels are output based on the label prediction model. A preset quantity of test sample features is randomly selected from the test sample features and their feature values are replaced to obtain updated test sample features, so that the second prediction labels corresponding to the updated test sample features are output based on the label prediction model. This realizes both the test of the label prediction model and its secondary test on the feature-value-replaced test sample features, so that the target features can subsequently be determined and the prop recommendation model can finally be trained on those target features rather than on all features, reducing the feature dimensions required for training, removing feature noise, and improving the prediction results of the trained prop recommendation model.
In one embodiment, as shown in fig. 8, the step of determining the value range corresponding to each test sample feature, that is, determining the value range corresponding to each test sample feature according to the second feature value of each test sample feature, specifically includes:
step S802, obtaining the feature type of each test sample feature; feature types include discrete features and continuous features.
Specifically, each test sample feature needs to be obtained, including prop features, player features and interaction features. The prop features include prop attributes and purchase features: the prop attributes include the prop type, prop function, prop price and other information, and the purchase features include the purchase quantity of an individual prop, its purchase ranking among all props, and the like. The player features include account features, active features, payment features, social features and the like: the account features correspond to the game player's personal information, including game registration time, gender, login device, login channel and the like; the active features include the player's online duration, online time period, game level, number of games played, game win rate, game mode preference, game map preference and other data; the payment features include the player's payment amounts, number of payments, first payment time, maximum payment amount and other data; and the social features include the player's number of game friends, number of chats, number of shares, number of gifts given, number of teams formed and other data. The interaction features include whether the game player has used or viewed the prop, the number of times the player has used a particular prop in the instance, and so on; the interaction features do not include the player's purchase behaviour toward props.
Further, the prop features, player features and interaction features are divided by feature type, and it is determined whether each feature is a discrete feature or a continuous feature; the discrete features include gender, city and the like, and the continuous features include price, age and the like. A discrete feature has a finite value space, whereas the value space of a continuous feature is infinite.
Step S804, respectively setting a corresponding value space based on each test sample feature and its feature type, and adding a corresponding storage upper limit to the value space.
Specifically, based on each test sample feature and its feature type, a corresponding value space D_j is initialized for the feature, and a corresponding storage upper limit, such as 1000, 2000 or another value, is set for the value space; the limit can be adjusted according to actual requirements.
Step S806, traversing each test sample feature to obtain different feature values corresponding to each test sample feature.
Specifically, by traversing each test sample feature, different feature values corresponding to each test sample feature, namely different feature values of each test sample feature in historical game progress data or current game progress data, are obtained.
Step S808, judging whether the value space reaches the upper storage limit.
Step S810, when the value space has not reached the storage upper limit, storing each feature value into the corresponding value space D_j, so as to determine the value range corresponding to each test sample feature.
Specifically, the preset storage upper limit is acquired, and it is judged whether the current value space D_j has reached the preset limit, e.g. whether D_j has reached the storage upper limit of 1000. When the value space D_j has not reached the storage upper limit, each feature value is stored into the correspondingly provided value space D_j, and the value range corresponding to each test sample feature is determined according to the value space D_j.
Step S812, determining the value to be replaced from the value space according to the preset screening requirement when the value space reaches the upper storage limit.
Specifically, when the value space D_j has reached the storage upper limit, a value can be selected from the value space D_j according to the preset screening requirement, for example by random selection, and determined as the value to be replaced.
Step S814, obtaining the target feature value of the test sample feature of the feature value to be stored currently, and updating the value to be replaced to the target feature value to determine the value range corresponding to each test sample feature.
Specifically, the target feature value of the test sample feature whose feature value is currently to be stored, such as the feature value of the test sample feature x_j currently to be stored, is acquired, and the value to be replaced is updated to that target feature value, so that the value range corresponding to each test sample feature is determined according to the value space D_j containing the updated target feature value.
In one embodiment, the effect of feature value replacement can also be achieved by selecting any two samples and exchanging the values of the corresponding feature between them. However, such value exchange requires sample selection, that is, effective sampling of the samples must first be achieved. By contrast, determining a value space extracts only the feature dimension to be calculated, so the corresponding calculation efficiency is higher.
In one embodiment, after the value range corresponding to each test sample feature has been determined, a feature value x' may be selected from the value space D_j of feature x_j, and the feature value of test sample feature x_j in the test sample is replaced with x', so as to obtain an updated test sample.
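Continuing the ValueSpace sketch above, this feature value replacement could be expressed as follows; the function name and the uniform draw of x' are assumptions for illustration:

```python
import random

def perturb_feature(test_sample, feature_name, value_space):
    """Replace the value of test sample feature x_j with a value x' drawn
    from its value space D_j, returning an updated test sample."""
    updated = dict(test_sample)  # leave the original test sample intact
    updated[feature_name] = random.choice(value_space.values)  # x'
    return updated

# Hypothetical usage: updated = perturb_feature(sample, "prop_price", price_space)
```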
In this embodiment, the feature type of each test sample feature is acquired, a corresponding value space is set based on each test sample feature and its feature type, and a corresponding upper storage limit is added for the value space. Each test sample feature is traversed to obtain the different feature values corresponding to it, and when the value space has not reached the upper storage limit, each feature value is stored into the corresponding value space so as to determine the value range corresponding to each test sample feature. When the value space reaches the upper storage limit, a value to be replaced is determined from the value space according to the preset screening requirement, the target feature value of the test sample feature whose feature value is currently to be stored is obtained, and the value to be replaced is updated to the target feature value so as to determine the value range corresponding to each test sample feature. Storing feature values in configured value spaces in this way allows replacement feature values to be obtained from the value spaces for feature value replacement processing, yielding updated test sample features, so that the label prediction model can be tested a second time on the updated test sample features, further improving the prediction effect of the label prediction model.
In one embodiment, as shown in FIG. 9, a game prop recommendation method is provided, which specifically includes the following steps:
Step S901, collecting training sample features and obtaining the first feature value corresponding to each training sample feature to obtain a training sample feature set; the training sample features are obtained by feature splicing of the collected player features, prop features and interaction features.
Step S902, respectively labeling each training sample feature with a corresponding sample label; the sample labels include purchased-prop labels and non-purchased-prop labels.
Step S903, training to obtain the label prediction model according to the training sample feature set and the sample labels corresponding to the training sample features.
Step S904, collecting test sample features and obtaining the second feature value corresponding to each test sample feature; the test sample features are obtained by feature splicing of the collected player features, prop features and interaction features.
Step S905, determining each test sample feature and the corresponding second feature value as input data of the label prediction model, and outputting the corresponding first prediction label based on the label prediction model.
Step S906, randomly selecting a preset amount of test sample characteristics from the test sample characteristics, and replacing the characteristic values of the test sample characteristics to obtain updated test sample characteristics.
Step S907, based on the label prediction model, outputting a second prediction label corresponding to each updated test sample feature.
Step S908, determining the prediction difference value corresponding to each test sample feature based on the first prediction label and the second prediction label.
Step S909, performing feature importance calculation processing according to the prediction difference values, and determining the feature importance corresponding to each test sample feature (an illustrative sketch of steps S905 to S910 follows this step list).
Step S910, screening the features of each training sample according to the preset feature importance requirements, and determining the target features with feature importance meeting the feature importance requirements.
Step S911, training to obtain a prop recommendation model according to each target feature and the corresponding feature value.
Step S912, when it is detected that the real-time game progress of the current game instance meets the prop recommendation condition, obtaining a prop recommendation model, wherein the prop recommendation model is obtained by training target features meeting the feature importance requirement, and each target feature is determined according to a prediction label output by the label prediction model.
Step S913, collecting game progress data, game property attribute data and player attribute data of the current game instance.
Step S914, extracting target player characteristics from the player attribute data, extracting target interaction characteristics from the game progress data, and extracting target prop characteristics from the game prop attribute data.
Step S915, determining the target player features, the target prop features and the target interaction features as input data of the prop recommendation model, outputting game recommendation props based on the prop recommendation model, and displaying the game recommendation props on the game interface of the current game instance.
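The feature selection core of steps S905 to S910 could be sketched as below; the binary classifier interface, the use of the mean absolute prediction difference as the feature importance calculation, and the fixed importance threshold are all assumptions, since the embodiment does not prescribe a concrete formula:

```python
import numpy as np

def select_target_features(model, X_test, value_spaces, feature_names,
                           importance_threshold=0.01, seed=0):
    """Sketch of steps S905-S910: score each test sample feature by how much
    replacing its values changes the label prediction model's output."""
    rng = np.random.default_rng(seed)
    first = model.predict_proba(X_test)[:, 1]            # S905: first prediction labels
    importances = {}
    for j, name in enumerate(feature_names):
        X_perturbed = X_test.copy()
        # S906: replace the j-th feature's values with draws from its value space D_j.
        X_perturbed[:, j] = rng.choice(value_spaces[name], size=len(X_test))
        second = model.predict_proba(X_perturbed)[:, 1]  # S907: second prediction labels
        # S908/S909: prediction difference -> feature importance.
        importances[name] = float(np.mean(np.abs(first - second)))
    # S910: keep the target features whose importance meets the requirement.
    return [n for n, imp in importances.items() if imp >= importance_threshold]
```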
In the game prop recommendation method, when it is detected that the real-time game progress of the current game instance meets the prop recommendation condition, a prop recommendation model is obtained, wherein the prop recommendation model is trained on target features meeting the feature importance requirement, and each target feature is determined according to the prediction labels output by the label prediction model. Game progress data, game prop attribute data and player attribute data of the current game instance are collected; target player features are extracted from the player attribute data, target interaction features from the game progress data, and target prop features from the game prop attribute data. The target player features, target prop features and target interaction features are then determined as input data of the prop recommendation model, game recommendation props are output based on the prop recommendation model, and the game recommendation props are displayed on the game interface of the current game instance. Because the target features meeting the requirement are determined based on feature importance and the prop recommendation model is then trained on these target features rather than on all features, the feature dimension required for training the model is reduced and feature noise is removed, which improves the prediction result of the trained prop recommendation model, makes the output game recommendation props better fit the actual requirements of the current game progress, further improves the recommendation success rate of game props, and increases platform revenue.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in these flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with at least some of the other steps or stages.
Based on the same inventive concept, the embodiment of the application also provides a game prop recommending device for realizing the game prop recommending method. The implementation of the solution provided by the device is similar to the implementation described in the above method, so the specific limitations in the embodiments of one or more game prop recommending devices provided below may be referred to the limitations of the game prop recommending method hereinabove, and will not be repeated herein.
In one embodiment, as shown in FIG. 10, a game prop recommendation device is provided, comprising: a prop recommendation model acquisition module 1002, a data collection module 1004, a target feature extraction module 1006, and a game recommendation prop display module 1008, wherein:
the prop recommendation model acquisition module 1002 is configured to obtain a prop recommendation model when it is detected that the real-time game progress of the current game instance meets the prop recommendation condition, where the prop recommendation model is obtained by training target features meeting the feature importance requirement, and each target feature is determined according to a prediction label output by the label prediction model.
The data collection module 1004 is configured to collect game progress data, game prop attribute data, and player attribute data of a current game instance.
The target feature extraction module 1006 is configured to extract target player features from the player attribute data, target interaction features from the game progress data, and target prop features from the game prop attribute data.
The game recommendation prop display module 1008 is configured to determine the target player features, the target prop features and the target interaction features as input data of the prop recommendation model, output game recommendation props based on the prop recommendation model, and display the game recommendation props on the game interface of the current game instance.
In the game prop recommendation device, when it is detected that the real-time game progress of the current game instance meets the prop recommendation condition, a prop recommendation model is obtained, wherein the prop recommendation model is trained on target features meeting the feature importance requirement, and each target feature is determined according to the prediction labels output by the label prediction model. Game progress data, game prop attribute data and player attribute data of the current game instance are collected; target player features are extracted from the player attribute data, target interaction features from the game progress data, and target prop features from the game prop attribute data. The target player features, target prop features and target interaction features are then determined as input data of the prop recommendation model, game recommendation props are output based on the prop recommendation model, and the game recommendation props are displayed on the game interface of the current game instance. Because the target features meeting the requirement are determined based on feature importance and the prop recommendation model is trained on these target features rather than on all features, the feature dimension required for training the model is reduced and feature noise is removed, which improves the prediction result of the trained prop recommendation model, makes the output game recommendation props better fit the actual requirements of the current game progress, further improves the recommendation success rate of game props, and increases platform revenue.
In one embodiment, a game prop recommendation device is provided, further comprising:
the label prediction model training module is used for collecting training sample characteristics and training to obtain a label prediction model according to the training sample characteristics and corresponding sample labels;
the prediction label output module is used for collecting the characteristics of the test samples and outputting prediction labels corresponding to the characteristics of the test samples based on a label prediction model;
the target feature determination module is used for determining target features meeting the feature importance requirement based on the prediction labels;
and the prop recommendation model training module is used for training to obtain a prop recommendation model according to each target characteristic and the corresponding characteristic value.
In one embodiment, the label prediction model training module is further configured to:
acquiring training sample features, and obtaining the first feature value corresponding to each training sample feature to obtain a training sample feature set, the training sample features being obtained by feature splicing of the collected player features, prop features and interaction features; labeling each training sample feature with a corresponding sample label, the sample labels including purchased-prop labels and non-purchased-prop labels; and training to obtain the label prediction model according to the training sample feature set and the sample labels corresponding to the training sample features.
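As an illustrative sketch only (the gradient-boosting model family is an assumption; the embodiment does not fix a model architecture), training the label prediction model could look like:

```python
from sklearn.ensemble import GradientBoostingClassifier

def train_label_prediction_model(X_train, y_train):
    """X_train: spliced player/prop/interaction feature matrix (first feature
    values); y_train: sample labels, 1 = purchased-prop, 0 = non-purchased-prop."""
    model = GradientBoostingClassifier()  # assumed model family
    model.fit(X_train, y_train)
    return model
```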
In one embodiment, the prediction label output module is further configured to:
collecting test sample characteristics, and obtaining second characteristic values corresponding to the test sample characteristics; the test sample features are obtained by feature splicing according to the collected player features, prop features and interaction features; determining the characteristics of each test sample and the corresponding second characteristic value as input data of a label prediction model, and outputting a corresponding first prediction label based on the label prediction model; randomly selecting a preset amount of test sample characteristics from the test sample characteristics, and replacing characteristic values of the test sample characteristics to obtain updated test sample characteristics; and outputting second prediction labels corresponding to the updated test sample characteristics based on the label prediction model.
In one embodiment, the target feature determination module is further configured to:
determining the prediction difference value corresponding to each test sample feature based on the first prediction label and the second prediction label; performing feature importance calculation according to the prediction difference values, and determining the feature importance corresponding to each test sample feature; and screening each training sample feature according to the preset feature importance requirement, and determining the target features whose feature importance meets the feature importance requirement.
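One concrete form this calculation could take (the mean absolute difference is an assumption; the embodiment leaves the exact importance formula open) is

$$\mathrm{importance}_j = \frac{1}{N}\sum_{i=1}^{N}\bigl|\hat{y}_i - \hat{y}'_{i,j}\bigr|,$$

where $\hat{y}_i$ is the first prediction label for test sample $i$, $\hat{y}'_{i,j}$ is the second prediction label after the value of feature $x_j$ has been replaced, and the target features are those whose $\mathrm{importance}_j$ meets the preset feature importance requirement.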
In one embodiment, the prediction label output module is further configured to:
randomly selecting a preset amount of test sample characteristics from the test sample characteristics; determining a value range corresponding to each test sample feature according to the second feature value of each test sample feature; determining replacement feature values corresponding to the test sample features from the value range; and replacing the characteristic values of the test sample characteristics according to the replacement characteristic values to obtain updated test sample characteristics.
In one embodiment, the prediction label output module is further configured to:
acquiring the feature type of each test sample feature, the feature types including discrete features and continuous features; based on each test sample feature and its feature type, respectively setting a corresponding value space, and adding a corresponding upper storage limit for the value space; traversing each test sample feature to obtain the different feature values corresponding to each test sample feature; and when the value space does not reach the upper storage limit, storing each feature value into the corresponding value space so as to determine the value range corresponding to each test sample feature.
In one embodiment, the prediction label output module is further configured to:
when the value space reaches the upper storage limit, determining a value to be replaced from the value space according to a preset screening requirement;
and obtaining the target feature value of the test sample feature whose feature value is currently to be stored, and updating the value to be replaced to the target feature value so as to determine the value range corresponding to each test sample feature.
In one embodiment, a game prop recommendation device is provided, further comprising a prediction accuracy determination module configured to:
acquiring the feature labels marked in advance for each target feature, and determining the preset recommended props according to the pre-marked feature labels; determining a training loss value according to the preset recommended props and the game recommendation props; and determining the prediction accuracy of the prop recommendation model based on the training loss value.
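A minimal sketch of this accuracy check, assuming binary cross-entropy as the training loss (the loss choice and function names are assumptions, since the embodiment does not specify a loss function):

```python
import numpy as np

def recommendation_loss(preset_props, recommendation_scores):
    """Binary cross-entropy between preset recommended props (derived from the
    pre-marked feature labels, 1 = should be recommended) and the prop
    recommendation model's output scores; a lower loss indicates higher accuracy."""
    y = np.asarray(preset_props, dtype=float)
    p = np.clip(np.asarray(recommendation_scores, dtype=float), 1e-7, 1 - 1e-7)
    return float(-np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)))
```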
The above modules in the game prop recommendation device may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware form in, or be independent of, a processor in the computer device, or may be stored in software form in a memory in the computer device, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and whose internal structure may be as shown in FIG. 11. The computer device includes a processor, a memory, an input/output (I/O) interface and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store data such as target features, game progress data, game prop attribute data, player attribute data, target player features, target interaction features, target prop features, the prop recommendation model and game recommendation props. The input/output interface of the computer device is used to exchange information between the processor and an external device. The communication interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements a game prop recommendation method.
It will be appreciated by those skilled in the art that the structure shown in FIG. 11 is merely a block diagram of part of the structure related to the present solution and does not limit the computer device to which the present solution is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that, the user information, the player characteristics (including but not limited to user equipment information, user personal information, etc.), and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are all information and data authorized by the user or sufficiently authorized by each party, and the collection, use, and processing of the related data are required to comply with the related laws and regulations and standards of the related countries and regions.
Those skilled in the art will appreciate that implementing all or part of the methods of the above embodiments may be accomplished by instructing the relevant hardware through a computer program, which may be stored in a non-volatile computer-readable storage medium and which, when executed, may perform the steps of the above method embodiments. Any reference to memory, database or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory and the like. Volatile memory may include random access memory (RAM), external cache memory and the like. By way of illustration and not limitation, RAM can take a variety of forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database; non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, but are not limited to, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, or data processing logic units based on quantum computing.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered within the scope of this description.
The above embodiments express only several implementations of the application and are described in relative detail, but they are not to be construed as limiting the scope of the application. It should be noted that those of ordinary skill in the art can make several variations and improvements without departing from the concept of the application, all of which fall within the protection scope of the application. Accordingly, the protection scope of the application shall be subject to the appended claims.

Claims (11)

1. A game prop recommendation method, the method comprising:
when it is detected that the real-time game progress of the current game instance meets a prop recommendation condition, obtaining a prop recommendation model, wherein the prop recommendation model is obtained by training target features meeting a feature importance requirement, and each target feature is determined according to a prediction label output by a label prediction model;
collecting game progress data, game prop attribute data and player attribute data of the current game instance;
extracting target player characteristics from the player attribute data, extracting target interaction characteristics from the game progress data, and extracting target prop characteristics from the game prop attribute data;
and determining the target player characteristics, the target prop characteristics and the target interaction characteristics as input data of the prop recommendation model, outputting game recommendation props based on the prop recommendation model, and displaying the game recommendation props on a game interface of the current game instance.
2. The method of claim 1, wherein training to obtain the prop recommendation model using the target features meeting the feature importance requirements comprises:
collecting training sample characteristics, and training to obtain a label prediction model according to the training sample characteristics and corresponding sample labels;
collecting test sample characteristics, and outputting a prediction label corresponding to each test sample characteristic based on the label prediction model;
determining target features meeting the feature importance requirements based on the predictive labels;
and training to obtain a prop recommendation model according to each target feature and the corresponding feature value.
3. The method according to claim 2, wherein the acquiring training sample features and training to obtain a label prediction model according to each training sample feature and a corresponding sample label includes:
acquiring training sample characteristics, and acquiring first characteristic values corresponding to the training sample characteristics to obtain a training sample characteristic set; the training sample features are obtained by feature splicing according to the collected player features, prop features and interaction features;
labeling corresponding sample labels for the training sample features respectively; the sample labels comprise purchased prop labels and non-purchased prop labels;
and training to obtain the label prediction model according to the training sample feature set and the sample labels corresponding to the training sample features.
4. The method of claim 2, wherein the prediction labels include a first prediction label and a second prediction label; and the collecting test sample features and outputting the prediction label corresponding to each test sample feature based on the label prediction model comprises:
collecting test sample characteristics, and obtaining second characteristic values corresponding to the test sample characteristics; the test sample features are obtained by feature splicing according to the collected player features, prop features and interaction features;
determining each test sample feature and the corresponding second feature value as input data of the label prediction model, and outputting the corresponding first prediction label based on the label prediction model;
randomly selecting a preset amount of test sample characteristics from the test sample characteristics, and replacing characteristic values of the test sample characteristics to obtain updated test sample characteristics;
and outputting a second prediction label corresponding to each updated test sample characteristic based on the label prediction model.
5. The method of claim 4, wherein the determining, based on each of the prediction labels, a target feature that meets the feature importance requirement comprises:
determining a prediction difference value corresponding to each test sample feature based on the first prediction tag and the second prediction tag;
performing feature importance calculation according to the predicted difference value, and determining feature importance corresponding to each test sample feature;
and screening the characteristics of each training sample according to the preset characteristic importance requirement, and determining target characteristics of which the characteristic importance accords with the characteristic importance requirement.
6. The method of claim 4, wherein randomly selecting a predetermined number of test sample features from the test sample features and performing feature value substitution on the test sample features to obtain updated test sample features, comprises:
randomly selecting a preset amount of test sample characteristics from the test sample characteristics;
determining a value range corresponding to each test sample feature according to a second feature value of each test sample feature;
determining replacement feature values corresponding to the test sample features from the value range;
and replacing the characteristic values of the test sample characteristics according to the replacement characteristic values to obtain updated test sample characteristics.
7. The method of claim 6, wherein determining a value range corresponding to each of the test sample features based on the second feature value for each of the test sample features comprises:
acquiring the feature type of each test sample feature; the feature types include discrete features and continuous features;
based on each test sample feature and its feature type, respectively setting a corresponding value space, and adding a corresponding upper storage limit for the value space;
traversing each test sample feature to obtain different feature values corresponding to each test sample feature;
and when the value space does not reach the upper storage limit, storing each feature value into the corresponding value space so as to determine a value range corresponding to each test sample feature.
8. The method of claim 7, wherein the method further comprises:
when the value space reaches the upper storage limit, determining a value to be replaced from the value space according to a preset screening requirement;
and obtaining the target feature value of the test sample feature whose feature value is currently to be stored, and updating the value to be replaced to the target feature value to determine the value range corresponding to each test sample feature.
9. A game prop recommendation device, the device comprising:
the prop recommendation model acquisition module is used for obtaining a prop recommendation model when it is detected that the real-time game progress of the current game instance meets a prop recommendation condition, wherein the prop recommendation model is obtained by training target features meeting a feature importance requirement, and each target feature is determined according to a prediction label output by the label prediction model;
the data acquisition module is used for collecting game progress data, game prop attribute data and player attribute data of the current game instance;
the target feature extraction module is used for extracting target player features from the player attribute data, extracting target interaction features from the game progress data and extracting target prop features from the game prop attribute data;
and the game recommendation prop display module is used for determining the target player characteristics, the target prop characteristics and the target interaction characteristics as input data of the prop recommendation model, outputting game recommendation props based on the prop recommendation model, and displaying the game recommendation props on a game interface of the current game instance.
10. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 8 when the computer program is executed.
11. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 8.
CN202210230129.6A 2022-03-09 2022-03-09 Game prop recommending method, game prop recommending device, computer equipment and storage medium Pending CN116764236A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210230129.6A CN116764236A (en) 2022-03-09 2022-03-09 Game prop recommending method, game prop recommending device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210230129.6A CN116764236A (en) 2022-03-09 2022-03-09 Game prop recommending method, game prop recommending device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116764236A true CN116764236A (en) 2023-09-19

Family

ID=87986535

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210230129.6A Pending CN116764236A (en) 2022-03-09 2022-03-09 Game prop recommending method, game prop recommending device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116764236A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117112911A (en) * 2023-10-22 2023-11-24 成都亚度克升科技有限公司 Software recommendation method and system based on big data analysis and artificial intelligence
CN117112911B (en) * 2023-10-22 2024-03-22 成都亚度克升科技有限公司 Software recommendation method and system based on big data analysis and artificial intelligence
CN117942577A (en) * 2024-02-20 2024-04-30 北京优路互娱科技有限公司 Method for predicting and optimizing player behavior by using artificial intelligence

Similar Documents

Publication Publication Date Title
CN113508378B (en) Training method, recommendation method, device and computer readable medium for recommendation model
CN111784455B (en) Article recommendation method and recommendation equipment
EP4181026A1 (en) Recommendation model training method and apparatus, recommendation method and apparatus, and computer-readable medium
CN111898031B (en) Method and device for obtaining user portrait
CN110046304A (en) A kind of user's recommended method and device
CN112487278A (en) Training method of recommendation model, and method and device for predicting selection probability
CN111046294A (en) Click rate prediction method, recommendation method, model, device and equipment
CN110008397B (en) Recommendation model training method and device
CN111339415A (en) Click rate prediction method and device based on multi-interactive attention network
CN109582876A (en) Tourism industry user portrait building method, device and computer equipment
CN113191838B (en) Shopping recommendation method and system based on heterogeneous graph neural network
CN108305181B (en) Social influence determination method and device, information delivery method and device, equipment and storage medium
CN116764236A (en) Game prop recommending method, game prop recommending device, computer equipment and storage medium
CN111680213B (en) Information recommendation method, data processing method and device
Ren et al. A co-attention based multi-modal fusion network for review helpfulness prediction
CN114529399A (en) User data processing method, device, computer equipment and storage medium
CN115471260A (en) Neural network-based sales prediction method, apparatus, device and medium
McIlwraith Algorithms of the intelligent web
CN112488355A (en) Method and device for predicting user rating based on graph neural network
CN114254070A (en) Question recommendation method and device
CN118332194B (en) Cross-domain cold start recommendation method, device, equipment and storage medium
CN116628236B (en) Method and device for delivering multimedia information, electronic equipment and storage medium
CN117786234B (en) Multimode resource recommendation method based on two-stage comparison learning
CN115689648B (en) User information processing method and system applied to directional delivery
CN118822678A (en) Shop evaluation method, recommendation method and related devices based on shop prediction model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40094465

Country of ref document: HK