CN109966743A - Game winning rate prediction method, model generation method and device - Google Patents

Game winning rate prediction method, model generation method and device

Info

Publication number
CN109966743A
CN109966743A (application CN201910168760.6A)
Authority
CN
China
Prior art keywords
winning rate
prediction model
training
game
rate prediction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910168760.6A
Other languages
Chinese (zh)
Inventor
蔡康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN201910168760.6A
Publication of CN109966743A
Legal status: Pending

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/70: Game security or game management aspects
    • A63F13/79: Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
    • A63F13/798: Game security or game management aspects involving player-related data for assessing skills or for ranking players, e.g. for generating a hall of fame
    • A63F13/80: Special adaptations for executing a specific game genre or game mode
    • A63F13/822: Strategy games; Role-playing games
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/50: Features characterized by details of game servers
    • A63F2300/53: Details of basic data processing
    • A63F2300/55: Details of game data or player data management
    • A63F2300/60: Methods for processing data by generating or executing the game program

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Computer Security & Cryptography (AREA)
  • General Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiment of the present application provides a game winning rate prediction method, a model generation method and a device. Real-time game camp data are obtained, where the game camp data include the lineup combination of each game camp before the game starts. The game camp data are then input into a preset winning rate prediction model that has multiple output nodes, and the multiple prediction results output by those nodes are obtained and displayed. By acquiring the game camp data of the game participants before the game starts and predicting the winning rate, the user's game experience can be improved: when a game is watched or broadcast live, the winning rate of each game participant is predicted before the game starts, which helps spectators better understand the battlefield situation and makes watching more interesting.

Description

Game winning rate prediction method, model generation method and device
Technical field
This application relates to the technical field of games, and more particularly to a game winning rate prediction method and a game winning rate prediction device.
Background art
Driven by the wave of artificial intelligence (Artificial Intelligence, AI) development in recent years, game AI has entered a fast lane of development. Whether it is AlphaGo in the game of Go or OpenAI in DOTA 2, these are major breakthroughs of AI in the field of games, and they have overturned people's previous understanding of the limits of AI.
A game itself can have a fairly complete mapping relationship with the real world, so solving problems in games demonstrates, to a certain extent, an ability to solve real-world problems; the development of game AI therefore also plays an important and positive role in the development of human society. For game AI itself, the concepts it covers are very broad, ranging from auxiliary-tool AI, such as equipment recommendation, skill-point recommendation and in-store virtual item recommendation, to intelligent AI that fights against players; AI enhances the game experience in all these directions, and game outcome prediction is one of its important functions. Specifically, game winning rate prediction can be described as: given the current game world state, estimating the possibility that each camp in the game will obtain the final victory, i.e. the probability of winning. On the one hand, the prediction of the game winning rate can serve as an auxiliary tool for watching games and esports events; on the other hand, it can serve as the situation-judgment component of an AI that fights against players. However, at present there is no method that predicts the winning rate of the game participants only from the lineups the players select in the game.
Summary of the invention
In view of the above problems, embodiments of the present application are proposed to provide a game winning rate prediction method and a corresponding game winning rate prediction device that overcome the above problems or at least partly solve them.
To solve the above problems, the embodiment of the present application discloses a game winning rate prediction method, comprising:
Obtaining real-time game camp data, where the game camp data include the lineup combination of each game camp before the game starts;
Inputting the game camp data into a preset winning rate prediction model, where the winning rate prediction model has multiple output nodes;
Obtaining multiple prediction results, matching the game camp data, output by the multiple output nodes of the winning rate prediction model;
Displaying the multiple prediction results output by the multiple output nodes.
Optionally, inputting the game camp data into the preset winning rate prediction model comprises:
Vectorizing the game camp data to obtain a first camp feature vector;
Performing feature-cross processing on the first camp feature vector to generate a prediction feature vector;
Inputting the prediction feature vector into the winning rate prediction model.
Optionally, the winning rate prediction model is generated in the following way:
Obtaining training sample data and an initial winning rate prediction model, where the training sample data include game camp data for training the model;
Generating a training feature vector using the training sample data;
Training with the training feature vector and the initial winning rate prediction model.
Optionally, generating the training feature vector using the training sample data comprises:
Vectorizing the game camp data for training the model to generate a second camp feature vector;
Performing feature-cross processing on the second camp feature vector to generate the training feature vector.
Optionally, training with the training feature vector and the initial winning rate prediction model comprises:
Inputting the training feature vector into the initial winning rate prediction model to obtain an output result;
Calculating a loss function of the initial winning rate prediction model according to the output result, and generating a gradient value corresponding to the loss function;
Training the initial winning rate prediction model according to the gradient value;
Stopping the training of the initial winning rate prediction model when the gradient value is minimized.
Optionally, the winning rate prediction model has an input layer and a Softmax layer connected to the input layer, and the Softmax layer is connected to each of the multiple output nodes; the input layer has multiple input nodes; the Softmax layer is used to convert the output result of the input layer and input the converted output result to each of the multiple output nodes.
Optionally, inputting the cross feature vector into the initial winning rate prediction model to obtain the output result comprises:
Mapping the training feature vector with the activation function of each neuron of the input layer;
Transmitting the output result of the input layer to the Softmax layer.
Optionally, the training sample data further include a victory-or-defeat label matching the game camp data used to train the winning rate prediction model, and calculating the loss function of the initial winning rate prediction model according to the output result and generating the gradient value corresponding to the loss function comprises:
Calculating the loss function through the Softmax layer using the output result of the input layer and the victory-or-defeat label, and generating multiple gradient values.
Optionally, training the initial winning rate prediction model according to the gradient value comprises:
Judging, by the output nodes, whether the multiple gradient values meet a preset threshold condition;
If not, updating the parameters of the activation function of each neuron of the input layer according to the multiple gradient values, and continuing to train the winning rate prediction model.
The embodiment of the present application also discloses a generation method of a winning rate prediction model, comprising:
Obtaining training sample data and an initial winning rate prediction model, where the training sample data include game camp data for training the model;
Generating a training feature vector using the training sample data;
Training with the training feature vector and the initial winning rate prediction model.
Optionally, generating the training feature vector using the training sample data comprises:
Vectorizing the game camp data to generate a camp feature vector;
Performing feature-cross processing on the camp feature vector to generate the training feature vector.
Optionally, training with the training feature vector and the initial winning rate prediction model comprises:
Inputting the training feature vector into the initial winning rate prediction model to obtain an output result;
Calculating a loss function of the initial winning rate prediction model according to the output result, and generating a gradient value corresponding to the loss function;
Training the initial winning rate prediction model according to the gradient value;
Stopping the training of the initial winning rate prediction model when the gradient value is minimized.
Optionally, the winning rate prediction model has an input layer and a Softmax layer connected to the input layer, and the Softmax layer is connected to each of the multiple output nodes; the input layer has multiple input nodes; the Softmax layer is used to convert the output result of the input layer and input the converted output result to each of the multiple output nodes.
Optionally, inputting the cross feature vector into the initial winning rate prediction model to obtain the output result comprises:
Mapping the training feature vector with the activation function of each neuron of the input layer;
Transmitting the output result of the input layer to the Softmax layer.
Optionally, the training sample data further include a victory-or-defeat label matching the game camp data used for training the model, and calculating the loss function of the initial winning rate prediction model according to the output result and generating the gradient value corresponding to the loss function comprises:
Calculating the loss function through the Softmax layer using the output result of the input layer and the victory-or-defeat label, and generating multiple gradient values.
Optionally, training the initial winning rate prediction model according to the gradient value comprises:
Judging, by the output nodes, whether the multiple gradient values meet a preset threshold condition;
If not, updating the parameters of the activation function of each neuron of the input layer according to the multiple gradient values, and continuing to train the winning rate prediction model.
Optionally, the method also includes:
Obtaining verification sample data and the winning rate prediction models after multiple rounds of training, where the verification sample data include game camp data for verifying the model;
Generating a verification feature vector using the verification sample data;
Inputting the verification feature vector into the winning rate prediction models after multiple rounds of training for cross validation, and calculating multiple validation error values of the verified winning rate prediction models;
Determining a target winning rate prediction model according to the multiple validation error values.
Optionally, determining the target winning rate prediction model according to the multiple validation error values comprises:
Judging whether the multiple validation error values meet a preset error threshold;
If so, taking the winning rate prediction model that meets the preset error threshold as the target winning rate prediction model.
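The model-selection step above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the model names, error values and threshold are hypothetical:

```python
# Hypothetical validation error values of winning rate prediction models
# obtained after multiple rounds of training (names and values are invented).
validation_errors = {"model_a": 0.31, "model_b": 0.24, "model_c": 0.27}
ERROR_THRESHOLD = 0.25  # assumed preset error threshold

# Keep the models whose validation error meets the preset threshold, and
# take the best of them as the target winning rate prediction model.
candidates = {m: e for m, e in validation_errors.items() if e <= ERROR_THRESHOLD}
target_model = min(candidates, key=candidates.get)
print(target_model)  # → model_b
```

Only models under the threshold are considered, and the lowest-error survivor becomes the target model.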
The embodiment of the present application also discloses a game winning rate prediction device, comprising:
A feature data acquisition module, for obtaining real-time game camp data, where the game camp data include the lineup combination of each game camp before the game starts;
A feature prediction module, for inputting the game camp data into a preset winning rate prediction model, where the winning rate prediction model has multiple output nodes;
A prediction result acquisition module, for obtaining the multiple prediction results, matching the game camp data, output by the multiple output nodes of the winning rate prediction model;
A prediction result display module, for displaying the multiple prediction results output by the multiple output nodes.
The embodiment of the present application also discloses a training device for a game winning rate model, comprising:
A training sample data acquisition module, for obtaining training sample data and an initial winning rate prediction model, where the training sample data include game camp data for training the model;
A training feature vector generation module, for generating a training feature vector using the training sample data;
A model training module, for training with the training feature vector and the initial winning rate prediction model.
The embodiment of the present application also discloses a device, comprising:
One or more processors; and
One or more machine-readable media storing instructions that, when executed by the one or more processors, cause the device to perform one or more of the game winning rate prediction methods described above, or one or more of the winning rate prediction model generation methods described above.
The embodiment of the present application also discloses one or more machine-readable media storing instructions that, when executed by one or more processors, cause a device to perform one or more of the game winning rate prediction methods described above, or one or more of the winning rate prediction model generation methods described above.
The embodiment of the present application has the following advantages:
In the embodiment of the present application, real-time game camp data are obtained, where the game camp data include the lineup combination of each game camp before the game starts; the game camp data are then input into a preset winning rate prediction model having multiple output nodes; the multiple prediction results output by the multiple output nodes of the winning rate prediction model are then obtained and displayed. By acquiring the game camp data of the game participants before the game starts and predicting the winning rate, the user's game experience can be improved: when a game is watched or broadcast live, the winning rate of each game participant is predicted before the game starts, which helps spectators better understand the battlefield situation and makes watching more interesting.
At the same time, during model training, the one-hot encoded feature vector is converted into a cross feature vector, which improves the relevance of the game camp data and thus improves the accuracy of the game winning rate prediction.
Brief description of the drawings
Fig. 1 is a step flowchart of an embodiment of a game winning rate prediction method of the present application;
Fig. 2 is a step flowchart of embodiment one of a generation method of a winning rate prediction model of the present application;
Fig. 3 is a step flowchart of embodiment two of a generation method of a winning rate prediction model of the present application;
Fig. 4 is a structural block diagram of an embodiment of a game winning rate prediction device of the present application;
Fig. 5 is a structural block diagram of an embodiment of a generation device of a winning rate prediction model of the present application.
Detailed description
In order to make the above objects, features and advantages of the present application more apparent and easier to understand, the present application will be further described in detail below with reference to the accompanying drawings and specific embodiments.
Referring to Fig. 1, a step flowchart of an embodiment of a game winning rate prediction method of the present application is shown, which may specifically include the following steps:
Step 101: obtain real-time game camp data, where the game camp data include the lineup combination of each game camp before the game starts.
For some esports games, players are typically divided into several teams that compete with one another. The configuration of each side's lineup plays a crucial role in the final winning rate.
For the convenience of description and understanding, the embodiment of the present invention is illustrated with a multiplayer online battle arena (Multiplayer Online Battle Arena, MOBA) game. In a MOBA online game, players are typically divided into two teams that compete with each other on a divided map. On the map, in addition to the virtual hero characters selected by the players of both sides, there are also non-player-controlled game units (Non-Player Characters, NPC) such as minions, defense towers, small jungle monsters and special jungle monsters. Each player controls the selected virtual hero character to kill enemy heroes or neutral units on the map and obtain resources, with the final goal of destroying the enemy base and obtaining the final victory.
Before the game starts, each player needs to select a virtual hero object from the hero pool to participate in the game. Within the same camp, players can only select different virtual hero objects, while players in different camps may choose the same virtual hero object. After the players of each participating camp have selected their virtual hero objects, the corresponding game camp data can be formed, where the game camp data include the lineup combination selected by the players of each participating camp, i.e. the composition of the virtual hero objects in each camp.
By predicting the winning rate from the lineup combinations currently selected by each game side, and recommending to the player the lineup combination with the highest winning rate for the player's own side, a selection reference can be provided for the player and the player's game experience can be improved.
For example, in a certain MOBA game the participating camps include a white side and a black side, each side selects 5 virtual hero objects, and there are 55 virtual hero objects in the hero pool. If the lineup combination selected by the players of the white camp is hero 1, hero 11, hero 21, hero 31 and hero 41, and the lineup combination selected by the players of the black camp is hero 1, hero 11, hero 25, hero 41 and hero 45, then the game camp data may include the lineup combinations selected by the two camps.
In a concrete implementation, before the game starts, the players of both game camps need to select virtual hero objects from the virtual hero pool; after the players have made their selections, the game camp data of each camp can be formed.
In an example of the embodiment of the present application, suppose there are 55 different virtual hero objects in the virtual hero pool, where the types of virtual hero objects may include magic heroes, physical heroes, support heroes, shooting heroes, charging heroes and so on. The players of each game camp can freely choose virtual hero objects, but players in the same camp can only select different virtual hero objects. After the players have selected their virtual hero objects, the lineup combination of each game camp is available; the winning rate can then be predicted from the virtual hero objects selected by the players, and lineup combinations can be recommended to the players, thereby improving the players' game experience.
It should be noted that the real-time winning rate prediction of the embodiment of the present invention can also be applied to other game winning rate prediction scenarios, such as action games, policy simulation games and real-time strategy games; the invention is not limited in this regard.
Step 102: input the game camp data into a preset winning rate prediction model, where the winning rate prediction model has multiple output nodes.
In the embodiment of the present application, the winning rate prediction model is a wide neural network model. The game camp data are vectorized to obtain a game camp feature vector, and the game camp feature vector is then input into the winning rate prediction model.
In a preferred embodiment of the embodiment of the present application, step 102 may include the following sub-steps:
Sub-step S11: vectorize the game camp data to obtain a prediction feature vector.
In a concrete implementation, before the game starts and after all players have selected their virtual hero objects, the complete lineup information of the game participants can be obtained. The game camp data are processed by means of one-hot encoding: the flag bit corresponding to a virtual hero that appears in a camp's lineup is set to 1, and the flag bits of heroes that do not appear are set to 0. If there are N virtual hero objects in the game's virtual hero pool and one camp has M players, then one camp's feature can be represented with dimension N, where M dimension values are 1 and the remaining N-M dimension values are 0. The N-dimensional features of all game participants can then be concatenated to generate a camp feature vector.
Sub-step S12: perform feature-cross processing on the camp feature vector to generate the prediction feature vector.
Sub-step S13: input the prediction feature vector into the winning rate prediction model.
In a concrete implementation, after the N-dimensional features of all game participants are concatenated to form the camp feature vector, feature-cross processing can be performed on the camp feature vector to generate the prediction feature vector, and the prediction feature vector can then be input into the winning rate prediction model for winning rate prediction.
Specifically, feature-cross processing can be performed on the N-dimensional features of all game participants using the following crossing formula:
Y = N × N
where Y denotes the dimension of the prediction feature vector after feature-cross processing, and N denotes the dimension of the concatenated feature vector of all game participants.
In an example of the embodiment of the present application, suppose there are 55 virtual hero objects in the game's virtual hero pool and the game participants include a white camp and a black camp. When a virtual hero appears in a lineup, its dimension value is 1; when it does not, its dimension value is 0. With two participating sides and 5 players in each camp's lineup, 5 dimension values are 1 and the remaining 50 dimension values are 0 for each side, giving a one-hot feature vector for each of the two sides. The feature vectors of the two camps are concatenated into a 110-dimensional one-hot feature vector, and feature-cross processing can then be performed on this 110-dimensional vector to generate a cross feature vector of 12100 dimensions in total.
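The one-hot encoding and crossing of the 55-hero example can be sketched as follows. The hero indices are hypothetical; the patent does not specify a concrete implementation, so this is only an illustration of the dimensions involved (110 concatenated features, 110 × 110 = 12100 crossed features):

```python
import numpy as np

NUM_HEROES = 55   # heroes in the pool (from the example in the text)
TEAM_SIZE = 5

def one_hot_lineup(hero_ids, num_heroes=NUM_HEROES):
    """One-hot encode one camp's lineup: 1 where a hero appears, else 0."""
    v = np.zeros(num_heroes)
    v[list(hero_ids)] = 1.0
    return v

# Hypothetical hero indices for the white and black camps.
white = one_hot_lineup([0, 10, 20, 30, 40])
black = one_hot_lineup([0, 10, 24, 40, 44])

# Concatenate both camps into the 110-dimensional one-hot feature vector.
lineup = np.concatenate([white, black])
assert lineup.shape == (110,)

# Pairwise feature cross: flattened outer product, N x N = 12100 dimensions.
crossed = np.outer(lineup, lineup).ravel()
assert crossed.shape == (12100,)
```

Each side contributes 5 ones among its 55 dimensions, and the cross contains a 1 wherever two selected heroes co-occur.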
In the embodiment of the present application, the winning rate prediction model can be generated in the following way:
Step S1: obtain training sample data and an initial winning rate prediction model, where the training sample data include game camp data for training the model.
In the embodiment of the present application, the training sample data may include game camp data and a victory-or-defeat label corresponding to the game camp data. Specifically, the game camp data include the complete lineup combination of each game participant camp; the victory-or-defeat label takes the victory of one side as the label and can be 1 or 0. In sample data 1, the lineup combination is white side (1003, 1010, 1012, 1007, 1016), black side (1017, 1021, 1011, 1039, 1015), and the victory-or-defeat label is 1; in sample data 2, the lineup combination is white side (1016, 1052, 1031, 1001, 1012), black side (1023, 1028, 1031, 1013, 1046), and the victory-or-defeat label is 0; in sample data 3, the lineup combination is white side (1046, 1045, 1019, 1037, 1018), black side (1025, 1006, 1027, 1029, 1031), and the victory-or-defeat label is 0; and so on. The numbers in the lineup information correspond to the unique IDs of the virtual hero objects; the victory-or-defeat label is 1 when the white side wins and the black side loses, and 0 when the white side loses and the black side wins.
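The three samples above can be represented as simple records; a minimal sketch, with the field names ("white", "black", "label") being assumptions of this illustration rather than anything specified by the patent:

```python
# The three training samples from the text: hero IDs per camp plus the
# victory-or-defeat label (1 = white side wins, 0 = black side wins).
samples = [
    {"white": [1003, 1010, 1012, 1007, 1016],
     "black": [1017, 1021, 1011, 1039, 1015], "label": 1},
    {"white": [1016, 1052, 1031, 1001, 1012],
     "black": [1023, 1028, 1031, 1013, 1046], "label": 0},
    {"white": [1046, 1045, 1019, 1037, 1018],
     "black": [1025, 1006, 1027, 1029, 1031], "label": 0},
]

# Each camp selects exactly 5 virtual hero objects per sample.
assert all(len(s["white"]) == len(s["black"]) == 5 for s in samples)
```

Each record pairs one complete lineup combination with its outcome label, which is the form the later training steps consume.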
It should be noted that the embodiment of the present application is illustrated with two game participants; it can be understood that the number of game participants can be greater than or equal to two, and the application imposes no restriction on this.
Step S2: generate a training feature vector using the training sample data.
In the embodiment of the present application, after the training sample data are obtained, the game camp data used for training the model in the training sample data can be spliced together according to the unique IDs to generate training feature data, and the training feature data can then be vectorized to generate the training feature vector of the training sample data.
In a preferred embodiment of the embodiment of the present application, step S2 may include the following sub-steps:
Sub-step S21: vectorize the game camp data for training the model to generate a second camp feature vector.
Sub-step S22: perform feature-cross processing on the second camp feature vector to generate the training feature vector.
In a concrete implementation, the game camp data are processed by means of one-hot encoding: the flag bit corresponding to a virtual hero that appears in a camp's lineup is set to 1, and the flag bits of heroes that do not appear are set to 0. If there are N virtual hero objects in the game's virtual hero pool and one camp has M players, then one camp's feature can be represented with dimension N, where M dimension values are 1 and the remaining N-M dimension values are 0. The N-dimensional features of all game participants can be concatenated to generate a camp feature vector, and feature-cross processing can then be performed to convert the one-hot camp feature vector into a training feature vector in cross-feature form. By converting the original one-hot features into cross features, the relevance of the original features in the wide neural network model can be improved, and thus the accuracy of the game winning rate prediction can be improved.
For example, for a three-dimensional feature f = [1, 0, 1], the three dimensions correspond to a, b and c respectively, where 1 indicates presence and 0 indicates absence, so this feature indicates that a and c appear while b does not. The cross feature can be expressed as f × f, i.e. [1, 0, 1, 0, 0, 0, 1, 0, 1], 9 dimensions in total. A 1 among the 9 dimensions indicates that a and a appear together (i.e. a appears), that a and c appear together, or that c and c appear together (i.e. c appears). Feature crossing thus adds the extra information that a and c appear simultaneously, improving the relevance of the original features.
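The f × f crossing in this example can be written directly as a flattened outer product; this is only a sketch of the crossing operation described here.

```python
def cross_feature(f):
    """Flatten the outer product f x f into a len(f)**2-dimensional vector."""
    return [a * b for a in f for b in f]

f = [1, 0, 1]          # a present, b absent, c present
fx = cross_feature(f)  # 9 dimensions: [1, 0, 1, 0, 0, 0, 1, 0, 1]
```

The position (i, j) of the flattened vector is 1 exactly when features i and j both appear, reproducing the co-occurrence information described in the example.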
Step S3, performing training using the cross feature vector and the initial winning rate prediction model.
In the embodiment of the present application, after the game camp data and the win/loss labels corresponding to the game camp data are obtained from the training sample data, training may be performed using the game camp data, the corresponding win/loss labels and the initial winning rate prediction model, and the loss function of the initial winning rate prediction model may be calculated, so that the winning rate prediction model is supervised and guided through the loss function.
In a specific implementation, the training stop condition may be set as follows: the loss function of the winning rate prediction model is minimized. When the loss function of the winning rate prediction model is minimized, training of the winning rate prediction model is stopped.
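A minimal sketch of supervised training with this stop condition is given below; the simple logistic model, learning rate and convergence tolerance are illustrative stand-ins for the winning rate prediction model, not the embodiment's actual network.

```python
import math

def train_until_converged(samples, labels, lr=0.5, tol=1e-6, max_epochs=5000):
    """Gradient-descent training that stops when the loss function no
    longer decreases, i.e. when it has (approximately) been minimized."""
    w = [0.0] * len(samples[0])
    b = 0.0
    prev_loss = float("inf")
    loss = prev_loss
    for _ in range(max_epochs):
        loss = 0.0
        grad_w = [0.0] * len(w)
        grad_b = 0.0
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))          # predicted win probability
            loss -= y * math.log(p + 1e-12) + (1 - y) * math.log(1 - p + 1e-12)
            for i, xi in enumerate(x):
                grad_w[i] += (p - y) * xi
            grad_b += p - y
        if prev_loss - loss < tol:                  # stop: loss minimized
            break
        prev_loss = loss
        n = len(samples)
        w = [wi - lr * g / n for wi, g in zip(w, grad_w)]
        b -= lr * grad_b / n
    return w, b, loss
```

The loop is the supervision-and-guidance-through-the-loss-function idea in miniature: each round computes the loss on the labelled samples, and training stops once further rounds no longer reduce it.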
In the embodiment of the present application, the winning rate prediction model has an input layer and a Softmax layer connected to the input layer, and the Softmax layer is connected to the multiple output nodes respectively; the input layer has multiple input nodes; the Softmax layer is used to convert the output result of the input layer and to input the converted output results to the multiple output nodes respectively.
The output nodes of the Softmax layer are configured according to the number of game participants: if there are two game participants, the number of output nodes is 2; if there are three game participants, the number of output nodes is 3. The embodiment of the present invention is not limited in this respect.
In a preferred embodiment of the present application, step S3 may include the following sub-steps:
Sub-step S31, inputting the training feature vector into the initial winning rate prediction model to obtain an output result;
Sub-step S32, calculating the loss function of the initial winning rate prediction model according to the output result, and generating gradient values corresponding to the loss function;
In a neural network model, the neurons of the input layer, the connection layer and the output layer are all functional neurons possessing an activation function, and a functional neuron can process the signals it receives.
In a specific implementation, the feature-crossed training feature vector is mapped through the activation function of each neuron of the input layer to obtain the output result of the input layer, and the output result of the input layer is then transmitted to the Softmax layer.
In the embodiment of the present application, the training sample data may also include win/loss labels matching the game camp data. During model training, errors can be back-propagated using the win/loss labels, so that the model parameters of the winning rate prediction model are adjusted.
In a specific implementation, the Softmax layer may use the output result of the input layer and the corresponding win/loss labels to calculate the loss function and generate multiple gradient values matching the output result.
For example, lineup data 1 is input into the winning rate prediction model to obtain an output result corresponding to lineup data 1; the loss function can then be calculated using this output result and the win/loss label of lineup data 1, and a gradient value matching the output result is generated. By performing this calculation on several pieces of lineup data, multiple gradient values can be obtained.
Sub-step S33, training the initial winning rate prediction model according to the gradient values;
Sub-step S34, stopping training the initial winning rate prediction model when the gradient values are minimized.
In a specific implementation, the output nodes judge whether the multiple gradient values meet a preset threshold condition. If not, the parameters of the activation function of each neuron of the input layer are updated according to the multiple gradient values, and training of the winning rate prediction model continues; if so, the prediction effect of the winning rate prediction model has reached the desired level, and training of the winning rate prediction model can be stopped.
During model training, the above operating process is executed in a loop until a preset stop condition is reached. The parameters of the activation function may be updated based on a gradient descent strategy, i.e. the parameters are updated in the direction of the negative gradient of the objective. In a specific implementation, a learning rate can be preset to control the update step size of the parameters in each round of training.
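The negative-gradient update with a preset learning rate described above can be sketched as follows; the parameter and gradient values are arbitrary examples.

```python
def gradient_step(params, grads, learning_rate=0.01):
    """Move each parameter along the negative gradient direction; the
    learning rate controls the update step size in each training round."""
    return [p - learning_rate * g for p, g in zip(params, grads)]

# One round of updates with a preset learning rate of 0.1.
updated = gradient_step([0.5, -0.2], grads=[1.0, -2.0], learning_rate=0.1)
```

A smaller learning rate gives smaller, safer steps per round; a larger one converges faster but risks overshooting the minimum of the loss.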
Step 103, obtaining the multiple prediction results output by the multiple output nodes of the winning rate prediction model;
Step 104, displaying the multiple prediction results output by the multiple output nodes.
In the embodiment of the present invention, the Softmax layer can normalize the result vector output by the input layer to obtain corresponding probability values, and the probability values are then output through the multiple output nodes.
The number of output nodes of the winning rate prediction model corresponds to the number of game participants: when the game camp data input into the winning rate prediction model includes the game camp data of two camps, there are 2 output nodes; when it includes the game camp data of three camps, there are 3 output nodes; when it includes the game camp data of four camps, there are 4 output nodes, and so on. In a specific implementation, this can be configured according to actual needs, and the present application is not limited in this respect.
In a specific implementation, the game camp data is input into the winning rate prediction model and processed by the input layer; the Softmax layer then converts the result vector output by the input layer into the corresponding probability values, i.e. the probability of each game participant winning, and the multiple prediction results are then displayed.
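The Softmax normalization that turns the input layer's result vector into per-camp win probabilities can be sketched as follows; the logit values are hypothetical.

```python
import math

def softmax(logits):
    """Normalise a result vector into one probability per output node
    (one node per camp, e.g. 2 nodes for a two-camp game)."""
    m = max(logits)                         # subtract the max for stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

win_probs = softmax([2.0, 0.5])  # two camps -> two output nodes
```

The outputs sum to 1 by construction, so each output node can be read directly as that camp's predicted probability of winning.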
In the embodiment of the present application, real-time game camp data is obtained, where the game camp data includes the lineup combinations of each participant's camp before the game begins; the game camp data is then input into a preset winning rate prediction model having multiple output nodes; the multiple prediction results output by the multiple output nodes of the winning rate prediction model are obtained and displayed. By obtaining the game camp data of the game participants before the game begins and predicting the winning rate, the game experience of the user can be improved: when watching a game or a live game broadcast, the winning rate of each game participant is predicted before the game begins, which helps spectators better understand the battlefield situation and increases the interest of watching.
Referring to Fig. 2, a flow chart of the steps of embodiment one of the method for generating a winning rate prediction model of the present application is shown, which may specifically include the following steps:
Step 201, obtaining training sample data and an initial winning rate prediction model, the training sample data including game camp data for training the model;
In the embodiment of the present application, the training sample data may include game camp data and win/loss labels corresponding to the game camp data. Specifically, the game camp data includes the complete lineup combination of each game participant's camp, and the win/loss label can take the victory of one side as the label, i.e. 1 or 0. In sample data 1, the lineup combination is white side (1003, 1010, 1012, 1007, 1016) and black side (1017, 1021, 1011, 1039, 1015), with win/loss label 1; in sample data 2, the lineup combination is white side (1016, 1052, 1031, 1001, 1012) and black side (1023, 1028, 1031, 1013, 1046), with win/loss label 0; in sample data 3, the lineup combination is white side (1046, 1045, 1019, 1037, 1018) and black side (1025, 1006, 1027, 1029, 1031), with win/loss label 0; and so on. Each number in a lineup combination corresponds to the unique ID of a virtual hero object; the win/loss label is 1 when the white side wins and the black side loses, and 0 when the white side loses and the black side wins.
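The three samples above might be represented as follows; the dictionary layout is an illustrative choice, not the embodiment's storage format.

```python
# Each sample: the two camps' lineups (unique virtual-hero IDs) plus a
# win/loss label (1 = white side won, 0 = black side won).
training_samples = [
    {"white": [1003, 1010, 1012, 1007, 1016],
     "black": [1017, 1021, 1011, 1039, 1015], "label": 1},
    {"white": [1016, 1052, 1031, 1001, 1012],
     "black": [1023, 1028, 1031, 1013, 1046], "label": 0},
    {"white": [1046, 1045, 1019, 1037, 1018],
     "black": [1025, 1006, 1027, 1029, 1031], "label": 0},
]
```

Splicing the two lineups of one sample by their unique IDs, then one-hot encoding and crossing them, yields the training feature vector for that sample.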
Step 202, training feature vector is generated using training sample data;
In the embodiment of the present application, after the training sample data is obtained, the game camp data of the game participants in the training sample data may be spliced together according to unique IDs to generate training characteristic data, and the training characteristic data may be vectorized to generate the training feature vector of the training sample data.
In a preferred embodiment of the present application, step 202 may include the following sub-steps:
Sub-step S41, vectorizing the game camp data to generate a camp feature vector;
Sub-step S42, performing feature crossing on the camp feature vector to generate the training feature vector.
In a specific implementation, the game camp data is processed using one-hot encoding: the flag bit corresponding to each virtual hero appearing in a camp's lineup is set to 1, and the flag bits of heroes that do not appear are set to 0. If the hero pool of the game contains N virtual hero objects and one camp has M players, then that camp's feature can be represented as an N-dimensional vector in which M dimensions have the value 1 and the remaining N-M dimensions have the value 0. The N-dimensional features of all game participants can be spliced together to generate a camp feature vector, and feature crossing can then be performed on the camp feature vector to convert the one-hot camp feature vector into a training feature vector in cross-feature form. Converting the original one-hot features into cross features can improve the relevance of the original features in a wide neural network model, thereby improving the accuracy of the game winning rate prediction.
Step 203, performing training using the training feature vector and the initial winning rate prediction model.
In the embodiment of the present application, after the game camp data and the win/loss labels corresponding to the game camp data are obtained from the training sample data, training may be performed using the game camp data, the corresponding win/loss labels and the initial winning rate prediction model, and the loss function of the initial winning rate prediction model may be calculated, so that the winning rate prediction model is supervised and guided through the loss function.
In a specific implementation, the training stop condition may be set as follows: the loss function of the winning rate prediction model is minimized. When the loss function of the winning rate prediction model is minimized, training of the winning rate prediction model is stopped.
In the embodiment of the present application, the winning rate prediction model has an input layer and a Softmax layer connected to the input layer, and the Softmax layer is connected to the multiple output nodes respectively; the input layer has multiple input nodes; the Softmax layer is used to convert the output result of the input layer and to input the converted output results to the multiple output nodes respectively.
The output nodes of the Softmax layer are configured according to the number of game participants: if there are two game participants, the number of output nodes is 2; if there are three game participants, the number of output nodes is 3. The embodiment of the present invention is not limited in this respect.
In a preferred embodiment of the present application, step 203 may include the following sub-steps:
Sub-step S51, inputting the training feature vector into the initial winning rate prediction model to obtain an output result;
Sub-step S52, calculating the loss function of the initial winning rate prediction model according to the output result, and generating gradient values corresponding to the loss function;
In a neural network model, the neurons of the input layer, the connection layer and the output layer are all functional neurons possessing an activation function, and a functional neuron can process the signals it receives.
In a specific implementation, the feature-crossed training feature vector is mapped through the activation function of each neuron of the input layer to obtain the output result of the input layer, and the output result of the input layer is then transmitted to the Softmax layer.
In the embodiment of the present application, the training sample data may also include win/loss labels matching the game camp data. During model training, errors can be back-propagated using the win/loss labels, so that the model parameters of the winning rate prediction model are adjusted.
In a specific implementation, the Softmax layer may use the output result of the input layer and the corresponding win/loss labels to calculate the loss function and generate multiple gradient values matching the output result.
Sub-step S53, training the initial winning rate prediction model according to the gradient values;
Sub-step S54, stopping training the initial winning rate prediction model when the gradient values are minimized.
In a specific implementation, the output nodes judge whether the multiple gradient values meet a preset threshold condition. If not, the parameters of the activation function of each neuron of the input layer are updated according to the multiple gradient values, and training of the winning rate prediction model continues; if so, the prediction effect of the winning rate prediction model has reached the desired level, and training of the winning rate prediction model can be stopped.
During model training, the above operating process is executed in a loop until a preset stop condition is reached. The parameters of the activation function may be updated based on a gradient descent strategy, i.e. the parameters are updated in the direction of the negative gradient of the objective. In a specific implementation, a learning rate can be preset to control the update step size of the parameters in each round of training.
In the embodiment of the present invention, training sample data and an initial winning rate prediction model are obtained, the training sample data including game camp data for training the model; a training feature vector is generated using the training sample data, and training is performed using the training feature vector and the initial winning rate prediction model. During training, feature crossing is applied to the one-hot feature vector to obtain a cross feature vector, and the cross feature vector is then input into the initial winning rate prediction model for model training, which improves the relevance of the original features and thereby the accuracy of the model's prediction of the game winning rate.
Referring to Fig. 3, a flow chart of the steps of embodiment two of the method for generating a winning rate prediction model of the present application is shown, which may specifically include the following steps:
Step 301, obtaining training sample data and an initial winning rate prediction model, the training sample data including game camp data for training the model;
In a specific implementation, in the preparation stage of model training, an initial winning rate prediction model can be obtained, so that the sample data can be used to train the initial winning rate prediction model and generate a suitable winning rate prediction model.
Step 302, training feature vector is generated using training sample data;
In a specific implementation, the game camp data of the game participants in the training sample data can be spliced together according to unique IDs to generate training characteristic data, and the training characteristic data can be vectorized to generate a camp feature vector; feature crossing is then performed on the camp feature vector to convert the original one-hot features into cross features, which can improve the relevance of the original features in a wide neural network model and thereby improve the accuracy of the game winning rate prediction.
Step 303, performing training using the training feature vector and the initial winning rate prediction model;
In a specific implementation, after the game camp data and the win/loss labels corresponding to the game camp data are obtained from the training sample data, training may be performed using the game camp data, the corresponding win/loss labels and the initial winning rate prediction model, and the loss function of the initial winning rate prediction model may be calculated, so that the winning rate prediction model is supervised and guided through the loss function.
In a specific implementation, the training stop condition may be set as follows: the loss function of the winning rate prediction model is minimized. When the loss function of the winning rate prediction model is minimized, training of the winning rate prediction model is stopped.
Step 304, obtaining verification sample data and obtaining the multiple trained winning rate prediction models;
In the embodiment of the present invention, the verification sample data includes game camp data for verifying the model.
In a specific implementation, in the verification stage of model training, the multiple trained winning rate prediction models can be obtained, so that the verification sample data can be used to verify the multiple trained winning rate prediction models and the winning rate prediction model with the best prediction effect can be selected.
Step 305, generating a verification feature vector using the verification sample data;
In a specific implementation, after the verification sample data is obtained, the characteristic data of the game participants used for winning rate prediction in the verification sample data can be spliced together according to unique IDs to generate verification characteristic data, and the verification characteristic data can be vectorized to generate the verification feature vector of the verification sample data.
Step 306, inputting the verification feature vector into the multiple trained winning rate prediction models for cross-validation, and calculating multiple validation error values of the verified winning rate prediction models;
In the embodiment of the present invention, after the verification feature vector of the verification sample data is generated, the verification feature vector can be input into the multiple trained winning rate prediction models for K-fold cross-validation, and the multiple validation error values of the verified winning rate prediction models are calculated; the hyperparameters of the model are then adjusted according to the validation error values until the desired results are achieved.
In a specific implementation, the verification sample data is divided into K parts, K-fold cross-validation is performed on the multiple trained winning rate prediction models, and the multiple validation error values of the verified winning rate prediction models are calculated. The hyperparameters of the winning rate prediction model are optimized and adjusted through cross-validation, where the hyperparameter adjustment may include normalization, weight initialization, Dropout, batch size, batch normalization, regularization and so on, so that the winning rate prediction model is optimized and its winning rate prediction accuracy is improved.
In an example of the embodiment of the present invention, 10-fold cross-validation can be used: the verification sample data is divided into 10 parts, and 10-fold cross-validation is performed on each trained per-minute model and on the last-minute model respectively, so as to obtain the multiple validation error values of each per-minute model and of the last-minute model after verification.
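The K-fold split itself can be sketched as follows; the `evaluate` callback and the toy labels are hypothetical stand-ins for training the winning rate model on K-1 folds and measuring its error on the held-out fold.

```python
def k_fold_errors(samples, k, evaluate):
    """Split the verification samples into k folds; for each fold, the
    remaining k-1 folds are used for fitting and the held-out fold
    yields one validation error value."""
    fold_size = len(samples) // k
    errors = []
    for i in range(k):
        held_out = samples[i * fold_size:(i + 1) * fold_size]
        rest = samples[:i * fold_size] + samples[(i + 1) * fold_size:]
        errors.append(evaluate(rest, held_out))
    return errors

# Toy evaluation: mean absolute deviation of the held-out win/loss
# labels from the training folds' mean label.
def toy_eval(train, test):
    mean = sum(train) / len(train)
    return sum(abs(y - mean) for y in test) / len(test)

errors = k_fold_errors([1, 0, 1, 1, 0, 1, 0, 0, 1, 1], k=5, evaluate=toy_eval)
```

With k=10 this reproduces the 10-fold split of the example: ten error values per model, one per held-out fold.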
Step 307, determining a target winning rate prediction model according to the multiple validation error values.
In the embodiment of the present invention, in K-fold cross-validation, K-1 folds of the verification sample data are used for model training and 1 fold is used for model verification, and the validation error value on that fold is calculated, so that the target winning rate prediction model with the better winning rate prediction effect can be determined according to the multiple validation error values.
In a specific implementation, the trained winning rate prediction models are verified through K-fold cross-validation to obtain the corresponding multiple validation error values, and the winning rate prediction model with the better winning rate prediction effect can be determined according to the multiple validation error values.
In a preferred embodiment of the present invention, step 307 may include the following sub-steps:
judging whether the multiple validation error values meet a preset error threshold;
if so, taking the winning rate prediction model that meets the preset error threshold as the target winning rate prediction model.
In the embodiment of the present invention, after the multiple validation error values corresponding to the winning rate prediction models are obtained, it can be judged whether the multiple validation error values meet a preset error threshold; if so, the winning rate prediction model that meets the preset error threshold is taken as the target winning rate prediction model; if not, the hyperparameters of the winning rate prediction model are adjusted and optimized using the multiple validation error values.
In a specific implementation, after the multiple validation error values of the trained winning rate prediction models are obtained, the multiple validation error values can be compared with the preset error threshold to judge whether any validation error value meets the preset error threshold. If so, the winning rate prediction model that meets the preset error threshold is taken as the target winning rate prediction model; when multiple validation error values meet the preset error threshold, the winning rate prediction model corresponding to the smallest validation error value is taken as the target winning rate prediction model. If no validation error value meets the preset error threshold, the hyperparameters of the winning rate prediction model are adjusted using the multiple validation error values until the preset validation error value is reached.
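Model selection against a preset error threshold, as described above, might look like this; the model names, error values and threshold are hypothetical.

```python
def select_model(models, errors, threshold):
    """Pick the model whose validation error meets the preset threshold,
    preferring the smallest error; return None if no model qualifies
    (in which case hyperparameters would be tuned and training repeated)."""
    qualifying = [(e, m) for m, e in zip(models, errors) if e <= threshold]
    if not qualifying:
        return None
    return min(qualifying, key=lambda pair: pair[0])[1]

best = select_model(["model_a", "model_b", "model_c"],
                    errors=[0.35, 0.12, 0.20], threshold=0.25)
```

Here two models meet the threshold, so the one with the smaller validation error is chosen as the target winning rate prediction model.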
In the embodiment of the present invention, training sample data and an initial winning rate prediction model are obtained, the training sample data including game camp data for training the model; a training feature vector is generated using the training sample data, and training is performed using the training feature vector and the initial winning rate prediction model. During training, feature crossing is applied to the one-hot feature vector to obtain a cross feature vector, and the cross feature vector is then input into the initial winning rate prediction model for model training, which improves the relevance of the original features and thereby the accuracy of the model's prediction of the game winning rate.
It should be noted that, for the sake of simple description, the method embodiments are described as a series of action combinations, but those skilled in the art should understand that the embodiments of the present application are not limited by the described action sequence, because according to the embodiments of the present application, some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present application.
Referring to Fig. 4, a structural block diagram of an embodiment of the apparatus for predicting a game winning rate of the present application is shown, which may specifically include the following modules:
a characteristic data obtaining module 401, for obtaining real-time game camp data, the game camp data including the lineup combinations of each game participant's camp before the game begins;
a characteristic data prediction module 402, for inputting the game camp data into a preset winning rate prediction model, the winning rate prediction model having multiple output nodes;
a prediction result obtaining module 403, for obtaining the multiple prediction results, matching the game camp data, output by the multiple output nodes of the winning rate prediction model;
a prediction result display module 404, for displaying the multiple prediction results output by the multiple output nodes.
In a preferred embodiment of the present application, the characteristic data prediction module may include the following sub-modules:
a first camp vector generating sub-module, for vectorizing the game camp data to obtain a first camp feature vector;
a prediction vector generating sub-module, for performing feature crossing on the first camp feature vector to generate a prediction feature vector;
a feature vector input sub-module, for inputting the prediction feature vector into the winning rate prediction model.
In a preferred embodiment of the present application, the winning rate prediction model may be generated by the following modules:
a training sample data obtaining module, for obtaining training sample data and an initial winning rate prediction model, the training sample data including game camp data for training the model;
a training feature vector generating module, for generating a training feature vector using the training sample data;
a model training module, for performing training using the training feature vector and the initial winning rate prediction model.
In a preferred embodiment of the present application, the training feature vector generating module may include:
a second camp vector generating sub-module, for vectorizing the game camp data used for training the model to obtain a second camp feature vector;
a training vector generating sub-module, for performing feature crossing on the second camp feature vector to generate the training feature vector.
In a preferred embodiment of the present application, the model training module may include:
an output result generating sub-module, for inputting the training feature vector into the initial winning rate prediction model to obtain an output result;
a gradient value generating sub-module, for calculating the loss function of the initial winning rate prediction model according to the output result, and generating gradient values corresponding to the loss function;
a model parameter updating sub-module, for training the initial winning rate prediction model according to the gradient values;
a model training stopping module, for stopping training the initial winning rate prediction model when the gradient values are minimized.
In a preferred embodiment of the present application, the winning rate prediction model has an input layer and a Softmax layer connected to the input layer, and the Softmax layer is connected to the multiple output nodes respectively; the input layer has multiple input nodes; the Softmax layer is used to convert the output result of the input layer and to input the converted output results to the multiple output nodes respectively.
In a preferred embodiment of the present application, the output result generating sub-module may specifically be used for:
mapping the training feature vector through the activation function of each neuron of the input layer;
transmitting the output result of the input layer to the Softmax layer.
In a preferred embodiment of the present application, the training sample data may also include win/loss labels matching the game camp data used for training the model, and the gradient value generating sub-module may specifically be used for:
calculating the loss function through the Softmax layer using the output result of the input layer and the win/loss labels, and generating multiple gradient values.
In a preferred embodiment of the present application, the model parameter updating sub-module may specifically be used for:
judging, through the output nodes, whether the multiple gradient values meet a preset threshold condition;
if not, updating the parameters of the activation function of each neuron of the input layer according to the multiple gradient values, and continuing to train the winning rate prediction model.
Referring to Fig. 5, a structural block diagram of an embodiment of the apparatus for generating a winning rate prediction model of the present application is shown, which may specifically include the following modules:
a training sample data obtaining module 501, for obtaining training sample data and an initial winning rate prediction model, the training sample data including game camp data for training the model;
a training feature vector generating module 502, for generating a training feature vector using the training sample data;
a model training module 503, for performing training using the training feature vector and the initial winning rate prediction model.
In a preferred embodiment of the present application, the training feature vector generating module may include:
a camp vector generating sub-module, for vectorizing the game camp data used for training the model to obtain a camp feature vector;
a training vector generating sub-module, for performing feature crossing on the camp feature vector to generate a training feature vector.
In a preferred embodiment of the present application, the model training module may include:
an output result generating sub-module, for inputting the training feature vector into the initial winning rate prediction model to obtain an output result;
a gradient value generating sub-module, for calculating the loss function of the initial winning rate prediction model according to the output result, and generating gradient values corresponding to the loss function;
a model parameter updating sub-module, for training the initial winning rate prediction model according to the gradient values;
a model training stopping module, for stopping training the initial winning rate prediction model when the gradient values are minimized.
In a preferred embodiment of the present application, the winning rate prediction model has an input layer and a Softmax layer connected to the input layer, and the Softmax layer is connected to the multiple output nodes respectively; the input layer has multiple input nodes; the Softmax layer is used to convert the output result of the input layer and to input the converted output results to the multiple output nodes respectively.
In a preferred embodiment of the present application, the output result generation submodule may specifically be configured to:
map the training feature vector through the activation function of each neuron of the input layer;
transmit the output result of the input layer to the Softmax layer.
In a preferred embodiment of the present application, the training sample data may further include a win/loss label matching the game camp data used for training the model, and the gradient value generation submodule may specifically be configured to:
calculate the loss function using the output result of the input layer converted by the Softmax layer and the win/loss label, to generate multiple gradient values.
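Combining the Softmax-converted output with a win/loss label is the standard softmax cross-entropy setup; under that assumption (the application does not name its loss), the loss and the per-node gradient values could look like:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss_and_gradients(input_layer_output, label_index):
    """Cross-entropy loss on the Softmax-converted output, plus its
    gradients w.r.t. the pre-Softmax values (one gradient per node)."""
    probs = softmax(input_layer_output)
    loss = -np.log(probs[label_index])  # the win/loss label picks the target node
    grads = probs.copy()
    grads[label_index] -= 1.0           # d(loss)/d(logits) = p - y
    return loss, grads

out = np.array([2.0, 0.5])  # toy input-layer output for two output nodes
loss, grads = loss_and_gradients(out, label_index=0)  # label: side 0 won
print(grads.sum())  # the gradients over all nodes sum to (approximately) 0
```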
In a preferred embodiment of the present application, the model parameter update submodule may specifically be configured to:
judge, by the output nodes, whether the multiple gradient values meet a preset threshold condition;
if not, update the parameters of the activation function of each neuron of the input layer according to the multiple gradient values, and continue to train the initial winning rate prediction model.
In a preferred embodiment of the present application, the device may further include:
A verification sample data obtaining module, configured to obtain verification sample data and to obtain the winning rate prediction models after multiple rounds of training, wherein the verification sample data include game camp data for verifying the model;
A verification feature vector generation module, configured to generate a verification feature vector using the verification sample data;
A model verification module, configured to input the verification feature vector into the winning rate prediction models after multiple rounds of training for cross validation, and to calculate multiple validation error values of the verified winning rate prediction models;
A target model determination module, configured to determine a target winning rate prediction model according to the multiple validation error values.
In a preferred embodiment of the present application, the target model determination module may include:
An error value comparison submodule, configured to judge whether the multiple validation error values meet a preset error threshold;
A target model determination submodule, configured to, if so, take the winning rate prediction model meeting the preset error threshold as the target winning rate prediction model.
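The error-threshold selection just described reduces to a simple comparison over the validation error values. A sketch, where the model names, error values, and the 0.1 threshold are invented for illustration:

```python
def select_target_model(models, validation_errors, error_threshold=0.1):
    """Take as the target model a trained model whose validation error
    meets the preset error threshold; return None if none qualifies."""
    for model, err in zip(models, validation_errors):
        if err <= error_threshold:
            return model
    return None

models = ["model_a", "model_b", "model_c"]  # stand-ins for trained models
errors = [0.25, 0.08, 0.12]                 # cross-validation error values
target = select_target_model(models, errors)
print(target)  # model_b is the first to meet the 0.1 threshold
```

Selecting the model with the minimum validation error instead of the first one under the threshold would be an equally plausible reading of the embodiment.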
As the device embodiments are substantially similar to the method embodiments, their description is relatively brief; for relevant details, refer to the corresponding parts of the description of the method embodiments.
The embodiments of the present application also provide a device, comprising:
one or more processors; and
one or more machine-readable media having instructions stored thereon which, when executed by the one or more processors, cause the device to execute the methods described in the embodiments of the present application.
The embodiments of the present application also provide one or more machine-readable media having instructions stored thereon which, when executed by one or more processors, cause the processors to execute the methods described in the embodiments of the present application.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts the embodiments may be referred to one another.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, an apparatus, or a computer program product. Therefore, the embodiments of the present application may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical memory) containing computer-usable program code.
The embodiments of the present application are described with reference to flowcharts and/or block diagrams of the method, terminal device (system), and computer program product according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing terminal device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing terminal device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing terminal device to operate in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing terminal device, so that a series of operation steps are executed on the computer or other programmable terminal device to produce computer-implemented processing, such that the instructions executed on the computer or other programmable terminal device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present application have been described, once those skilled in the art learn of the basic inventive concept, additional changes and modifications can be made to these embodiments. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications falling within the scope of the embodiments of the present application.
Finally, it should be noted that, herein, relational terms such as first and second are used merely to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or terminal device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or terminal device. In the absence of further limitations, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or terminal device that includes the element.
The method and apparatus for predicting a game winning rate and the method and apparatus for generating a winning rate prediction model provided in the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application; the description of the above embodiments is only intended to help understand the method of the present application and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementations and the application scope according to the idea of the present application. In summary, the contents of this specification should not be construed as limiting the present application.

Claims (22)

1. A method for predicting a game winning rate, characterized by comprising:
obtaining real-time game camp data, wherein the game camp data comprise a lineup combination of the camp of each game side before the game begins;
inputting the game camp data into a preset winning rate prediction model, wherein the winning rate prediction model has multiple output nodes;
obtaining multiple prediction results matching the game camp data, output by the multiple output nodes of the winning rate prediction model;
displaying the multiple prediction results output by the multiple output nodes.
2. The method according to claim 1, characterized in that inputting the game camp data into the preset winning rate prediction model comprises:
vectorizing the game camp data to obtain a first camp feature vector;
performing feature crossing on the first camp feature vector to generate a prediction feature vector;
inputting the prediction feature vector into the winning rate prediction model.
3. The method according to claim 1, characterized in that the winning rate prediction model is generated in the following way:
obtaining training sample data and an initial winning rate prediction model, wherein the training sample data comprise game camp data used for training the model;
generating a training feature vector using the training sample data;
training the initial winning rate prediction model using the training feature vector.
4. The method according to claim 3, characterized in that generating the training feature vector using the training sample data comprises:
vectorizing the game camp data used for training the model to generate a second camp feature vector;
performing feature crossing on the second camp feature vector to generate the training feature vector.
5. The method according to claim 3, characterized in that training the initial winning rate prediction model using the training feature vector comprises:
inputting the training feature vector into the initial winning rate prediction model to obtain an output result;
calculating a loss function of the initial winning rate prediction model according to the output result, and generating gradient values corresponding to the loss function;
training the initial winning rate prediction model according to the gradient values;
when the gradient values are minimized, stopping training the initial winning rate prediction model.
6. The method according to any one of claims 1-5, characterized in that the winning rate prediction model has an input layer and a Softmax layer connected to the input layer, the Softmax layer being connected to each of the multiple output nodes; the input layer has multiple input nodes; the Softmax layer is used to convert the output result of the input layer and to input the converted output results to the multiple output nodes respectively.
7. The method according to claim 6, characterized in that inputting the training feature vector into the initial winning rate prediction model to obtain an output result comprises:
mapping the training feature vector through the activation function of each neuron of the input layer;
transmitting the output result of the input layer to the Softmax layer.
8. The method according to claim 7, characterized in that the training sample data further comprise a win/loss label matching the game camp data used for training the winning rate prediction model, and calculating the loss function of the initial winning rate prediction model according to the output result and generating the gradient values corresponding to the loss function comprises:
calculating the loss function using the output result of the input layer converted by the Softmax layer and the win/loss label, to generate multiple gradient values.
9. The method according to claim 8, characterized in that training the initial winning rate prediction model according to the gradient values comprises:
judging, by the output nodes, whether the multiple gradient values meet a preset threshold condition;
if not, updating the parameters of the activation function of each neuron of the input layer according to the multiple gradient values, and continuing to train the initial winning rate prediction model.
10. A method for generating a winning rate prediction model, characterized by comprising:
obtaining training sample data and an initial winning rate prediction model, wherein the training sample data comprise game camp data used for training the model;
generating a training feature vector using the training sample data;
training the initial winning rate prediction model using the training feature vector.
11. The method according to claim 10, characterized in that generating the training feature vector using the training sample data comprises:
vectorizing the game camp data to generate a camp feature vector;
performing feature crossing on the camp feature vector to generate the training feature vector.
12. The method according to claim 10, characterized in that training the initial winning rate prediction model using the training feature vector comprises:
inputting the training feature vector into the initial winning rate prediction model to obtain an output result;
calculating a loss function of the initial winning rate prediction model according to the output result, and generating gradient values corresponding to the loss function;
training the initial winning rate prediction model according to the gradient values;
when the gradient values are minimized, stopping training the initial winning rate prediction model.
13. The method according to any one of claims 10-12, characterized in that the winning rate prediction model has an input layer and a Softmax layer connected to the input layer, the Softmax layer being connected to each of the multiple output nodes; the input layer has multiple input nodes; the Softmax layer is used to convert the output result of the input layer and to input the converted output results to the multiple output nodes respectively.
14. The method according to claim 13, characterized in that inputting the training feature vector into the initial winning rate prediction model to obtain an output result comprises:
mapping the training feature vector through the activation function of each neuron of the input layer;
transmitting the output result of the input layer to the Softmax layer.
15. The method according to claim 14, characterized in that the training sample data further comprise a win/loss label matching the game camp data used for training the model, and calculating the loss function of the initial winning rate prediction model according to the output result and generating the gradient values corresponding to the loss function comprises:
calculating the loss function using the output result of the input layer converted by the Softmax layer and the win/loss label, to generate multiple gradient values.
16. The method according to claim 15, characterized in that training the initial winning rate prediction model according to the gradient values comprises:
judging, by the output nodes, whether the multiple gradient values meet a preset threshold condition;
if not, updating the parameters of the activation function of each neuron of the input layer according to the multiple gradient values, and continuing to train the initial winning rate prediction model.
17. The method according to claim 10, characterized by further comprising:
obtaining verification sample data and obtaining the winning rate prediction models after multiple rounds of training, wherein the verification sample data comprise game camp data for verifying the model;
generating a verification feature vector using the verification sample data;
inputting the verification feature vector into the winning rate prediction models after multiple rounds of training for cross validation, and calculating multiple validation error values of the verified winning rate prediction models;
determining a target winning rate prediction model according to the multiple validation error values.
18. The method according to claim 17, characterized in that determining the target winning rate prediction model according to the multiple validation error values comprises:
judging whether the multiple validation error values meet a preset error threshold;
if so, taking the winning rate prediction model meeting the preset error threshold as the target winning rate prediction model.
19. A device for predicting a game winning rate, characterized by comprising:
a feature data obtaining module, configured to obtain real-time game camp data, wherein the game camp data comprise the lineup combination of the camp of each game side before the game begins;
a feature prediction module, configured to input the game camp data into a preset winning rate prediction model, wherein the winning rate prediction model has multiple output nodes;
a prediction result obtaining module, configured to obtain multiple prediction results matching the game camp data, output by the multiple output nodes of the winning rate prediction model;
a prediction result display module, configured to display the multiple prediction results output by the multiple output nodes.
20. A training device for a game winning rate model, characterized by comprising:
a training sample data obtaining module, configured to obtain training sample data and an initial winning rate prediction model, wherein the training sample data comprise game camp data used for training the model;
a training feature vector generation module, configured to generate a training feature vector using the training sample data;
a model training module, configured to train the initial winning rate prediction model using the training feature vector.
21. A device, characterized by comprising:
one or more processors; and
one or more machine-readable media having instructions stored thereon which, when executed by the one or more processors, cause the device to execute the method for predicting a game winning rate according to any one of claims 1-9, or the method for generating a winning rate prediction model according to any one of claims 10-18.
22. One or more machine-readable media having instructions stored thereon which, when executed by one or more processors, cause the device to execute the method for predicting a game winning rate according to any one of claims 1-9, or the method for generating a winning rate prediction model according to any one of claims 10-18.
CN201910168760.6A 2019-03-06 2019-03-06 A kind of prediction technique, model generating method and the device of game winning rate Pending CN109966743A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910168760.6A CN109966743A (en) 2019-03-06 2019-03-06 A kind of prediction technique, model generating method and the device of game winning rate

Publications (1)

Publication Number Publication Date
CN109966743A true CN109966743A (en) 2019-07-05

Family

ID=67077986

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910168760.6A Pending CN109966743A (en) 2019-03-06 2019-03-06 A kind of prediction technique, model generating method and the device of game winning rate

Country Status (1)

Country Link
CN (1) CN109966743A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003111925A (en) * 2001-10-05 2003-04-15 Aruze Corp Game machine, method for predicting game state in the same, storage medium, and server
JP2008029472A (en) * 2006-07-27 2008-02-14 Samii Kk Mahjong pinball machine
CN106878409A (en) * 2017-02-09 2017-06-20 深圳市莫二科技有限公司 A kind of game data processing system and processing method
CN106919790A (en) * 2017-02-16 2017-07-04 网易(杭州)网络有限公司 The role of game recommends, battle array construction method and device, method for gaming and device
CN107679491A (en) * 2017-09-29 2018-02-09 华中师范大学 A kind of 3D convolutional neural networks sign Language Recognition Methods for merging multi-modal data
CN107998661A (en) * 2017-12-26 2018-05-08 苏州大学 A kind of aid decision-making method, device and the storage medium of online battle game
CN108392828A (en) * 2018-03-16 2018-08-14 深圳冰川网络股份有限公司 A kind of player's On-line matching method and system for the game of MOBA classes
CN108888947A (en) * 2018-05-25 2018-11-27 南京邮电大学 The interpretation system played chess for Chinese chess
CN109065072A (en) * 2018-09-30 2018-12-21 中国科学院声学研究所 A kind of speech quality objective assessment method based on deep neural network

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111240544B (en) * 2020-01-06 2020-11-24 腾讯科技(深圳)有限公司 Data processing method, device and equipment for virtual scene and storage medium
CN111240544A (en) * 2020-01-06 2020-06-05 腾讯科技(深圳)有限公司 Data processing method, device and equipment for virtual scene and storage medium
CN112402982A (en) * 2020-02-13 2021-02-26 上海哔哩哔哩科技有限公司 User cheating behavior detection method and system based on machine learning
CN111359227A (en) * 2020-03-08 2020-07-03 北京智明星通科技股份有限公司 Method, device and equipment for predicting fighting win and lose rate in fighting game
CN111617478B (en) * 2020-05-29 2023-03-03 腾讯科技(深圳)有限公司 Game formation intensity prediction method and device, electronic equipment and storage medium
CN111617478A (en) * 2020-05-29 2020-09-04 腾讯科技(深圳)有限公司 Game formation intensity prediction method and device, electronic equipment and storage medium
CN111701234A (en) * 2020-06-01 2020-09-25 广州多益网络股份有限公司 Game winning rate prediction method
CN111905377A (en) * 2020-08-20 2020-11-10 腾讯科技(深圳)有限公司 Data processing method, device, equipment and storage medium
CN112138409A (en) * 2020-09-07 2020-12-29 腾讯科技(深圳)有限公司 Game result prediction method, device and storage medium
CN112138409B (en) * 2020-09-07 2023-11-24 腾讯科技(深圳)有限公司 Game result prediction method, device and storage medium
CN112915538A (en) * 2021-03-11 2021-06-08 腾竞体育文化发展(上海)有限公司 Method and device for displaying game information, terminal and storage medium
CN114028816A (en) * 2021-11-08 2022-02-11 网易(杭州)网络有限公司 Information processing method and device in game and electronic terminal
CN114588634A (en) * 2022-03-08 2022-06-07 网易(杭州)网络有限公司 Method, apparatus, medium, and device for predicting game winning rate
CN116726500A (en) * 2023-08-09 2023-09-12 腾讯科技(深圳)有限公司 Virtual character control method and device, electronic equipment and storage medium
CN116726500B (en) * 2023-08-09 2023-11-03 腾讯科技(深圳)有限公司 Virtual character control method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109966743A (en) A kind of prediction technique, model generating method and the device of game winning rate
Price Using co-evolutionary programming to simulate strategic behaviour in markets
CN111282267B (en) Information processing method, information processing apparatus, information processing medium, and electronic device
CN109847367A (en) A kind of prediction technique, model generating method and the device of game winning rate
Chen et al. Which heroes to pick? learning to draft in moba games with neural networks and tree search
CN109621431A (en) A kind for the treatment of method and apparatus of game action
CN112016704B (en) AI model training method, model using method, computer device and storage medium
CN109908591A (en) A kind of decision-making technique of virtual objects, model building method and device
CN111701240B (en) Virtual article prompting method and device, storage medium and electronic device
WO2023138156A1 (en) Decision model training method and apparatus, device, storage medium and program product
Xu et al. Composite motion learning with task control
Susanto et al. Maze generation based on difficulty using genetic algorithm with gene pool
CN111882072B (en) Intelligent model automatic course training method for playing chess with rules
Zamorano et al. The Quest for Content: A Survey of Search-Based Procedural Content Generation for Video Games
CN115496191A (en) Model training method and related device
Rupp et al. GEEvo: Game Economy Generation and Balancing with Evolutionary Algorithms
Sun et al. Research on action strategies and simulations of DRL and MCTS-based intelligent round game
Patrascu et al. Artefacts: Minecraft meets collaborative interactive evolution
Baldominos et al. Learning levels of mario ai using genetic algorithms
Ling et al. Master multiple real-time strategy games with a unified learning model using multi-agent reinforcement learning
Flimmel et al. Coevolution of AI and Level Generators for Super Mario Game
Chen et al. Research on turn-based war chess game based on reinforcement learning
Reis et al. Automatic generation of a sub-optimal agent population with learning
CN117883788B (en) Intelligent body training method, game fight method and device and electronic equipment
Тилитченко Application of Q-learning with approximation for realization of game projects

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20190705