CN117753002A - Game picture determining method and device, electronic equipment and medium - Google Patents


Info

Publication number
CN117753002A
Authority
CN
China
Prior art keywords
game
target
control behavior
behavior
scene
Prior art date
Legal status
Pending
Application number
CN202311780912.0A
Other languages
Chinese (zh)
Inventor
司美玲
张连生
Current Assignee
China Mobile Communications Group Co Ltd
MIGU Interactive Entertainment Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
MIGU Interactive Entertainment Co Ltd
MIGU Culture Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, MIGU Interactive Entertainment Co Ltd, MIGU Culture Technology Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN202311780912.0A
Publication of CN117753002A
Legal status: Pending


Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The present disclosure relates to a method, an apparatus, an electronic device, and a medium for determining a game screen, where the method includes: acquiring operation behavior data of a game player in a game, the operation behavior data being used for indicating behavior characteristics of the game player in a target game scene; predicting a target control behavior of the game player at a future time according to the operation behavior data; and, if the probability of the game player triggering the target control behavior at the future time meets a preset probability requirement, generating a target game picture according to the target control behavior and the target game scene, the target game picture being a picture of the game player executing the target control behavior in the target game scene. The method and the device can realize efficient prediction of the game picture and improve the fluency of the game.

Description

Game picture determining method and device, electronic equipment and medium
Technical Field
The disclosure relates to the technical field of games, and in particular relates to a method and a device for determining a game picture, electronic equipment and a medium.
Background
Cloud gaming is an online gaming technology based on cloud computing. With the continuous development of cloud gaming and 5G technology, the number of cloud game players keeps increasing, and the cloud-side transmission demand for cloud game data keeps growing. To ensure the fluency of cloud games, the cloud needs to predict and generate corresponding game pictures according to the operation instructions of the game players in the cloud game.
In the related art, the next instruction is predicted according to the current instruction of the game player, and a predicted picture is generated according to that next instruction. Because the related art predicts a game picture for every instruction, its prediction is too frequent, which reduces the processing efficiency of the game and occupies excessive cloud storage space, resulting in wasted resources.
Disclosure of Invention
The disclosure provides a method and device for determining game pictures, electronic equipment and a storage medium.
According to a first aspect of the present disclosure, there is provided a method of determining a game screen, the method including:
acquiring operation behavior data of a game player in a game; the operation behavior data are used for indicating behavior characteristics of the game player in the target game scene;
predicting a target control behavior of the game player at a future time according to the operation behavior data;
if the probability of triggering the target control behavior by the game player at the future moment meets the preset probability requirement, generating a target game picture according to the target control behavior and the target game scene; the target game picture is a picture of the game player executing the target control action under the target game scene.
Further, after generating a target game screen according to the target control behavior and the target game scene, the method further comprises:
determining a control behavior matching the game operation instruction in response to the game operation instruction of the game player;
and sending a rendering result of the target game picture to a game terminal under the condition that the control behavior matched with the game operation instruction is the target control behavior.
Further, after sending the rendering result of the target game screen to the game terminal, the method further includes:
detecting whether the current game scene is the same as the target game scene or not at regular time;
judging whether the current game scene is a historical game scene or not under the condition that the current game scene is different from the target game scene;
and under the condition that the current game scene is not the historical game scene, predicting the target control behavior of the game player again.
Further, the predicting the target control behavior of the game player at the future time according to the operation behavior data includes:
inputting the operation behavior data into a random forest model for processing to obtain a target classification result; wherein the target classification result is used for indicating the possibility of each preset control behavior being performed by the game player at the future moment;
And determining the target control behavior in the preset control behaviors based on the target classification result.
Further, the step of inputting the operation behavior data into a random forest model for processing to obtain a classification result includes:
inputting the operation behavior data into each decision tree model of the random forest model for processing to obtain a plurality of sub-classification results; wherein, each decision tree model correspondingly outputs a sub-classification result;
and determining the sub-classification results as the target classification result.
Further, the determining the target control behavior in the preset control behaviors based on the target classification result includes:
based on the sub-classification results, determining a prediction result of each decision tree model in the random forest model on each preset control behavior; the prediction result is used for indicating the possibility that the preset control behavior is predicted to be the target control behavior by the decision tree model;
determining weight information of each decision tree model;
carrying out weighted summation calculation on the prediction result and the weight information to obtain voting information of each preset control behavior; the voting information is used for indicating the probability that the preset control behavior is the target control behavior;
And determining the preset control behavior corresponding to the voting information meeting the preset threshold requirement as the target control behavior.
Further, the method further comprises:
obtaining a mapping relation between a preset control behavior and the target game scene; wherein the mapping relation characterizes the target game picture of the preset control action in the target game scene;
and determining the target game picture according to the mapping relation.
According to a second aspect of the present disclosure, there is provided a game screen determination apparatus, the apparatus including:
the first acquisition module is used for acquiring operation behavior data of a game player in a game; the operation behavior data are used for indicating behavior characteristics of the game player in the target game scene;
the first prediction module is used for predicting target control behaviors of the game player at future time according to the operation behavior data;
the generation module is used for generating a target game picture according to the target control behavior and the target game scene if the probability of triggering the target control behavior by the game player at the future moment meets the preset probability requirement; the target game picture is a picture of the game player executing the target control action under the target game scene.
According to a third aspect of the present disclosure, an electronic device is provided. The electronic device includes: a memory and a processor, the memory having stored thereon a computer program, the processor implementing the method as described above when executing the program.
According to a fourth aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which when executed by a processor implements the above-described method of the present disclosure.
The embodiment of the disclosure provides a method, a device, electronic equipment and a storage medium for determining a game picture. In the embodiment of the application, the operation behavior data of the game player in the game can be acquired; the operation behavior data are used for indicating behavior characteristics of a game player in a target game scene; predicting target control behaviors of the game player at future time according to the operation behavior data; if the probability of triggering the target control behavior by the game player at the future moment meets the preset probability requirement, generating a target game picture according to the target control behavior and the target game scene; the target game picture is a picture of a game player executing target control behaviors in a target game scene.
In the above embodiment, the future control behavior can be predicted by predicting the target control behavior that the probability of being triggered at the future time meets the preset probability requirement, and the game picture of the high-frequency control behavior can be generated by generating the picture of the target control behavior executed by the game player in the target game scene.
Drawings
Further details, features and advantages of the present disclosure are disclosed in the following description of exemplary embodiments, with reference to the following drawings, wherein:
FIG. 1 is a flow chart of a determination of a game screen provided by an exemplary embodiment of the present disclosure;
FIG. 2 is a flow chart of a determination of a game screen provided by another exemplary embodiment of the present disclosure;
FIG. 3 is a flow chart of a determination of a game screen provided by another exemplary embodiment of the present disclosure;
FIG. 4 is a flow chart of a determination of a game screen provided by another exemplary embodiment of the present disclosure;
FIG. 5 is a schematic workflow diagram of a random forest model provided in an exemplary embodiment of the present disclosure;
FIG. 6 is a functional block diagram of a determination device of a game screen provided in an exemplary embodiment of the present disclosure;
FIG. 7 is a block diagram of an electronic device provided in an exemplary embodiment of the present disclosure;
fig. 8 is a block diagram of a computer system according to an exemplary embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments. Related definitions of other terms will be given in the description below. It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
It will be appreciated that before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner in accordance with relevant laws and regulations, of the type, scope of use, usage scenarios, etc. of the personal information involved, and the user's authorization should be obtained.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly indicate that the operation the user requests to perform will require obtaining and using the user's personal information. The user can thus autonomously choose, according to the prompt information, whether to provide personal information to the software or hardware, such as an electronic device, application program, server or storage medium, that executes the operations of the technical solution of the present disclosure.
As an alternative but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user by way of, for example, a popup window in which the prompt information is presented as text. In addition, the popup window may carry a selection control allowing the user to choose "agree" or "disagree" to providing personal information to the electronic device. It will be appreciated that the above notification and authorization process is merely illustrative and does not limit implementations of the present disclosure; other ways of satisfying relevant laws and regulations may also be applied to implementations of the present disclosure.
In one embodiment, as shown in fig. 1, there is provided a method for determining a game screen, including the steps of:
step 101, obtaining operation behavior data of a game player in a game.
In the embodiment of the disclosure, operation behavior data of a game player in a game is firstly obtained, wherein the operation behavior data can comprise operation data, behavior data and scene information of the game at the current moment.
In one possible embodiment, operation behavior data of a game player in a game is obtained, the operation behavior data including operation data, behavior data, and target scene information of the current game. The operation data may be understood as the game player's operation data on the game terminal, for example, mouse clicks, keyboard presses, and other key inputs; the behavior data may be understood as data about the behavior of the game player in the game, for example, the player's movement, attacks, and skill usage; the target scene information may include the scene background, layout, objects, and the like, which is not limited here.
Here, the behavior characteristics of the game player in the target game scene can be determined from the operation behavior data, from which the target control behavior of the game player at a future time can be predicted.
Step 102, predicting target control behaviors of the game player at future time according to the operation behavior data.
Here, a prediction model may be previously established, and the operation behavior data may be processed according to the prediction model, so that the target control behavior that the game player may trigger at a future time may be predicted according to the processing result.
In one possible embodiment, the prediction model may be a model constructed based on a random forest algorithm (i.e., a random forest model), at which time, operational behavior data of the game player may be input into the random forest model for processing, so as to obtain a target classification result. Wherein the objective classification result is used to indicate the likelihood (i.e., probability) that each preset control action is performed by the game player at a future time. Then, a target control behavior is determined among the preset control behaviors based on the target classification result.
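As an illustration only, the prediction step above might be sketched with scikit-learn's `RandomForestClassifier`. The feature layout, training data, behavior labels, and threshold below are invented for the example and are not part of the disclosure:

```python
# Hypothetical sketch: the feature layout, training data, behavior labels,
# and probability threshold are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: features derived from operation behavior data, e.g.
# [mouse clicks/s, key presses/s, move distance, attacks, skills used]
X_train = np.array([
    [5.0, 2.0, 120.0, 3, 1],
    [1.0, 6.0,  10.0, 0, 4],
    [4.5, 2.5, 100.0, 2, 1],
    [0.8, 5.5,  15.0, 1, 5],
])
y_train = np.array([0, 1, 0, 1])  # index of the preset control behavior observed next

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# The per-class probabilities play the role of the "target classification result".
current = np.array([[4.8, 2.2, 110.0, 3, 1]])
probs = model.predict_proba(current)[0]
target_behavior = int(model.classes_[np.argmax(probs)])

# Probability gate of step 103: only a "high-frequency" behavior whose
# predicted probability exceeds a preset threshold gets a pre-generated frame.
PROB_THRESHOLD = 0.5  # illustrative preset probability requirement
is_high_frequency = probs[np.argmax(probs)] > PROB_THRESHOLD
```

The toy query resembles the class-0 training rows, so the forest assigns it a high probability of behavior 0, which would then pass the probability gate.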
Step 103, if the probability of triggering the target control behavior by the game player at the future moment meets the preset probability requirement, generating a target game picture according to the target control behavior and the target game scene.
Here, after the target control behavior is determined, the probability that the game player triggers the target control behavior at the future time may satisfy the preset probability requirement; for example, this probability is greater than a preset probability threshold, in which case the target control behavior may be referred to as a high-frequency control behavior. The target game screen may then be generated according to this high-frequency control behavior and the target game scene, where the target game screen is the screen in which the game player executes the target control behavior in the target game scene.
In one possible embodiment, according to the predicted target control behavior and target scene information of the game, building a model through a machine learning method to learn association information between the target control behavior and the target scene information, matching the target control behavior with the target game scene according to the association information, and generating a target game picture for a game player to execute the target control behavior under the target game scene.
The embodiment of the disclosure provides a method, a device, electronic equipment and a storage medium for determining a game picture. In the embodiment of the application, the operation behavior data of the game player in the game can be acquired; the operation behavior data are used for indicating behavior characteristics of a game player in a target game scene; predicting target control behaviors of the game player at future time according to the operation behavior data; if the probability of triggering the target control behavior by the game player at the future moment meets the preset probability requirement, generating a target game picture according to the target control behavior and the target game scene; the target game picture is a picture of a game player executing target control behaviors in a target game scene.
In the above embodiment, the future control behavior can be predicted by predicting the target control behavior that the probability of being triggered at the future time meets the preset probability requirement, and the game picture of the high-frequency control behavior can be generated by generating the picture of the target control behavior executed by the game player in the target game scene.
In one embodiment, as shown in fig. 2, there is also provided a method for determining a game screen, including the steps of:
in step 201, in response to a game operation instruction of a game player, a control behavior matching the game operation instruction is determined.
After generating the target game screen according to the target control behavior and the target game scene, a game operation instruction of a game player may be detected, and a control behavior matching the game operation instruction may be determined in response to the game operation instruction of the game player.
In one possible embodiment, after generating the target game screen according to the target control behavior and the target game scene, the generated target game screen may be stored in the server. The server may detect a game operation instruction of the game player, and determine a control behavior matching the game operation instruction in response to the game operation instruction of the game player, and determine whether the control behavior is a target control behavior predicted before.
And 202, transmitting a rendering result of the target game picture to the game terminal under the condition that the control behavior matched with the game operation instruction is determined to be the target control behavior.
After the server determines the control behavior matched with the game operation instruction, if it is determined that the control behavior matched with the game operation instruction is a target control behavior, at this time, the target game picture may be rendered in the server, and a rendering result of the target game picture may be sent to the game terminal.
In one possible embodiment, in the case that it is determined that the control behavior matched with the game operation instruction is the target control behavior, the control behavior generated by the game operation instruction is matched with the target game screen stored in the server, and if the matching is successful, the target game screen stored in the server is rendered and the rendering result of the target game screen is sent to the game terminal.
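The matching-then-rendering flow above can be sketched as follows; all names (`prerendered`, `handle_instruction`, the frame strings) are hypothetical, and real rendering is stubbed out as a string:

```python
# Hypothetical sketch of the server-side flow: frames are pre-generated for
# the predicted target control behavior, and the stored rendering result is
# sent to the terminal only when the incoming instruction matches it.
prerendered = {}                             # (behavior, scene) -> frame

def store_predicted_frame(behavior, scene, frame):
    prerendered[(behavior, scene)] = frame

def handle_instruction(instruction_to_behavior, instruction, scene):
    behavior = instruction_to_behavior.get(instruction)
    frame = prerendered.get((behavior, scene))
    if frame is not None:
        return f"rendered:{frame}"           # rendering result for the terminal
    return None                              # fall back to the normal path

store_predicted_frame("jump", "scene_a", "frame_a")
result = handle_instruction({"space": "jump"}, "space", "scene_a")
miss = handle_instruction({"space": "jump"}, "space", "scene_b")
```

Only the matched (behavior, scene) pair returns a rendering result; a mismatch falls back to the normal rendering path, which is what lets the server store frames for predicted behaviors only.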
In this embodiment, when the control behavior matched with the game operation instruction is determined to be the target control behavior, the rendering result of the target game picture is sent to the game terminal, realizing a quick response of the game picture; the server stores the target game picture only for the target control behavior, which saves the server's storage resources; and rendering is performed only after the target control behavior is matched, which improves processing efficiency.
In one embodiment, as shown in fig. 3, there is also provided a method for determining a game screen, including the steps of:
in step 301, it is checked whether the current game scene is identical to the target game scene.
In one possible embodiment, after the rendering result of the target game screen is transmitted to the game terminal, it is also possible to detect whether the current game scene is identical to the target game scene at a timing.
Step 302, judging whether the current game scene is a historical game scene or not under the condition that the current game scene is different from the target game scene.
In one possible embodiment, in a case where it is determined that the current game scene is not identical to the target game scene, it is determined whether the current game scene is a historical game scene. For example, assuming that the current game scene is scene A and the target game scene is scene B, it can be determined that the current game scene differs from the target game scene; it is then further determined whether the current game scene is a historical game scene, that is, whether a corresponding game screen has already been generated for the current game scene during the player's previous gameplay.
And 303, predicting the target control behavior of the game player again under the condition that the current game scene is not the historical game scene.
In one possible embodiment, in the case that it is determined that the current game scene is not the history game scene, it is necessary to re-predict the target control behavior in the current game scene and generate a target game screen of the re-predicted target control behavior.
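A minimal sketch of the decision logic in steps 301-303; the function and variable names are illustrative assumptions, not from the disclosure:

```python
# Illustrative sketch of steps 301-303: re-predict the target control
# behavior only when the current scene differs from the target scene AND is
# not a historical scene (i.e., no frame was generated for it before).
def needs_reprediction(current_scene, target_scene, historical_scenes):
    if current_scene == target_scene:
        return False     # prediction for the target scene is still valid
    if current_scene in historical_scenes:
        return False     # a frame was already generated in earlier gameplay
    return True          # a new scene: predict the target behavior again

# Scene differs from target and was never seen before -> re-predict.
case_new = needs_reprediction("A", "B", historical_scenes={"C"})
# Scene matches the target -> keep the current prediction.
case_same = needs_reprediction("B", "B", historical_scenes=set())
# Scene differs but is historical -> reuse previously generated frames.
case_hist = needs_reprediction("A", "B", historical_scenes={"A"})
```

A periodic timer on the server would call this check and trigger a new prediction round only in the first case.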
In the embodiment, whether the current game scene is a historical game scene or not is judged under the condition that the current game scene is determined to be different from the target game scene; under the condition that the current game scene is not the historical game scene, predicting the target control behavior of the game player again, and finally generating a rendering result of the target game picture, thereby improving the consistency and adaptability of the game picture.
Based on the above embodiment, in still another embodiment provided in the present disclosure, as shown in fig. 4, the step 102 may specifically further include the following steps:
and step 401, inputting the operation behavior data into a random forest model for processing to obtain a target classification result.
Here, after the operation behavior data of the game player in the game is acquired, the operation behavior data is input into the random forest model for processing, and the target classification result is obtained. Wherein the target classification result is used for indicating the possibility that the game player performs each preset control action at the future time.
In one possible embodiment, the operational behavior data may be input into each decision tree model of the random forest model for processing to obtain a plurality of sub-classification results. Wherein, each decision tree model correspondingly outputs a sub-classification result; the plurality of sub-classification results are determined as target classification results.
In embodiments of the present disclosure, a random forest model may be trained by:
fig. 5 schematically shows a workflow diagram of a random forest model. Firstly, an original training set is obtained, and then, the operation behavior data of a game player in the original training set D is repeatedly and randomly sampled for N times by utilizing a Bagging algorithm, so that a training subset with the scale of N is obtained. Repeating the above process K times to obtain training subset { D ] 1 ,D 2 ,D 3 ,...D k }。
For each training subset D i (i is more than or equal to 1 and less than or equal to K), and generating a binary recursive decision tree model without pruning by adopting a CART algorithm. At each intermediate node of the decision tree model, it is required to follow the following rules: instead of selecting the optimal segmentation feature from all the features, a feature subset is constructed by randomly selecting O features (O is less than or equal to M), and then the feature corresponding to the optimal segmentation form is selected from the feature subset. The CART decision tree needs to continue the above process for node splitting until a specific termination condition is reached. Finally for each training subset D i Generating a corresponding decision tree model h i (D i ) Decision tree model h i (D i ) Combining to form random forest model { h 1 (D 1 ),h 2 (D 2 ),...h i (D i )}。
After constructing the random forest model in the above-described manner, the operation behavior data of the game player can be tested as the training set X, and each decision tree model correspondingly outputs a sub-classification result to obtain a corresponding classification result { C } 1 (X),C 2 (X),...C k (X) and sorting the multiple sub-sorting results { C } 1 (X),C 2 (X),...C k (X) } acknowledgementAnd (5) determining a target classification result.
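The Bagging resampling step described above can be sketched as follows; this is a minimal stdlib-only illustration, and the training records and parameter values are placeholders:

```python
# Sketch of the Bagging resampling step: draw K bootstrap training subsets
# of size N (sampling with replacement) from the original training set D.
# D's records are placeholders, not real operation behavior data.
import random

def bagging_subsets(D, K, N, seed=0):
    rng = random.Random(seed)
    return [[D[rng.randrange(len(D))] for _ in range(N)] for _ in range(K)]

D = list(range(10))                      # placeholder training records
subsets = bagging_subsets(D, K=5, N=len(D))
```

Each subset would then be used to grow one unpruned CART tree, choosing at each node the best split among O randomly selected features; this bagging-plus-random-feature-subset combination is what standard random forest implementations (e.g. scikit-learn's `max_features` parameter) provide.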
Step 402, determining a target control behavior in the preset control behaviors based on the target classification result.
Here, after the operation behavior data is input into the random forest model to be processed, the target classification result is obtained, and the target control behavior may be determined among the preset control behaviors based on the target classification result.
In a possible embodiment, determining the target control behavior in the preset control behaviors based on the target classification result includes the following steps:
based on the sub-classification results, determining a prediction result of each decision tree model in the random forest model on each preset control behavior; the prediction result is used for indicating the possibility that the preset control behavior is predicted to be the target control behavior by the decision tree model;
determining weight information of each decision tree model;
carrying out weighted summation calculation on the prediction result and the weight information to obtain voting information of each preset control behavior; the voting information is used for indicating the probability that the preset control behavior is a target control behavior;
and determining the preset control behavior corresponding to the voting information meeting the preset threshold requirement as a target control behavior.
Specifically, the operation behavior data is classified by the decision tree models in the random forest model, and each decision tree model outputs one sub-classification result. For example, suppose the preset control behaviors are behavior 1, behavior 2 and behavior 3, and the random forest contains 5 decision trees that output sub-classification results 1 to 5: decision trees 1, 2 and 3 each predict that preset control behavior 1 is the target control behavior, decision tree 4 predicts that preset control behavior 2 is the target control behavior, and decision tree 5 predicts that preset control behavior 3 is the target control behavior.
Then, determining the weight information of each decision tree model, and comparing the prediction result with the weight informationAnd carrying out weighted summation calculation to obtain voting information of each preset control action. Wherein the voting information is used for indicating the probability that the preset control behavior is the target control behavior, the weighted voting method mainly carries out weighted statistics on the results of each decision tree, and the voting information is recorded as S c The calculation formula is specifically as follows:
wherein T is c,x And (X) takes a value of 1 or 0, if the preset control behavior is determined to be the target control behavior after being classified by the decision tree, taking a value of 1, otherwise taking a value of 0.W (W) t For the weight information, the weight information of the 5 decision trees is illustratively 0.1, 0.3, 0.4 and 0.1, the decision tree 1 predicts that the preset control behavior 1 is the target control behavior, the decision tree 2 predicts that the preset control behavior 1 is the target control behavior, the decision tree 3 predicts that the preset control behavior 1 is the target control behavior, the decision tree 4 predicts that the preset control behavior 2 is the target control behavior, and the decision tree 5 predicts that the preset control behavior 3 is the target control behavior. Voting information S of preset control behavior 1 c =1×0.1+1×0.1+1×0.3+0×0.4+0×0.1=0.5; voting information S of preset control behavior 2 c =0.1+0.1+0.3+1.0.4+0.1=0.4; voting information S of preset control behavior 3 c =0*0.1+0*0.1+0*0.3+0*0.4+1*0.1=0.1。
Finally, the preset control behavior corresponding to voting information meeting the preset threshold requirement is determined as the target control behavior. Illustratively, the voting information S_c of preset control behavior 1 is 0.5, the voting information S_c of preset control behavior 2 is 0.4, and the voting information S_c of preset control behavior 3 is 0.1; the preset control behavior corresponding to the maximum voting information, namely preset control behavior 1, is selected from the three preset control behaviors and determined as the target control behavior.
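The weighted voting described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the tree predictions and weights mirror the worked example, and all names are invented for the sketch.

```python
# Weighted voting over decision-tree predictions (illustrative sketch).
# Each tree votes for one preset control behavior; its vote carries weight W_t.
tree_predictions = [1, 1, 1, 2, 3]            # behavior predicted by each of the 5 trees
tree_weights     = [0.1, 0.1, 0.3, 0.4, 0.1]  # weight information W_t of each tree

def voting_scores(predictions, weights):
    """Compute S_c = sum_t W_t * T_{c,t}(X) for every candidate behavior c."""
    scores = {}
    for pred, w in zip(predictions, weights):
        # T_{c,t}(X) = 1 only when tree t voted for behavior c, so each tree
        # simply adds its weight to the behavior it predicted.
        scores[pred] = scores.get(pred, 0.0) + w
    return scores

scores = voting_scores(tree_predictions, tree_weights)
target_behavior = max(scores, key=scores.get)  # behavior 1, with S_c = 0.5
```

With the example values this reproduces the scores 0.5, 0.4 and 0.1 for behaviors 1, 2 and 3, and selects behavior 1 as the target control behavior.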
In this embodiment, the weight information of each decision tree model is determined, the prediction results are weighted and summed with the weight information to obtain the voting information of each preset control behavior, and the preset control behavior corresponding to voting information meeting the preset threshold requirement is determined as the target control behavior. The target control behavior so determined has high reliability, which improves the accuracy of the game picture determining method.
In one embodiment, there is also provided a method of determining a game screen, including the steps of:
and obtaining a mapping relation between the preset control behavior and the target game scene, and determining a target game picture according to the mapping relation, wherein the mapping relation characterizes the target game picture of the preset control behavior in the target game scene.
In one possible embodiment, a mapping relationship between a set of control behaviors and target game scenes may be preset according to the game design and game rules. The target game picture predicted and generated from the target control behavior can then be fine-tuned or corrected through this mapping relationship, according to the game player's current control behavior and current game scene information. For example, if, according to the game design and rules, the game player needs to take off at high frequency in game scene A, the preset control behavior is taking off, the take-off control behavior corresponds to scene A, and the server generates in advance a target game picture a corresponding to the game player taking off in game scene A. The target game picture a is used, as a supplement to the target control behavior, to generate the corresponding target game picture in step 103.
In this embodiment, the mapping relationship between the preset control behavior and the target game scene, preset according to the game design and game rules, is acquired, and the target game picture is determined according to the mapping relationship; this picture serves as a supplement to the target control behavior for generating the corresponding target game picture, which improves the flexibility of the game picture determining method.
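The preset mapping relationship can be sketched as a simple lookup from (scene, behavior) pairs to pre-generated pictures. All scene, behavior and frame names below are invented for illustration; the patent does not prescribe a data structure.

```python
# Preset mapping relationship between control behaviors and game scenes
# (illustrative sketch; keys and values are hypothetical names).
PRESET_FRAME_MAP = {
    ("scene_a", "take_off"): "frame_a",  # scene A + take-off -> pre-generated picture a
    ("scene_a", "run"):      "frame_b",
}

def lookup_pregenerated_frame(scene, behavior):
    """Return the pre-generated target game picture for (scene, behavior), if any."""
    return PRESET_FRAME_MAP.get((scene, behavior))

frame = lookup_pregenerated_frame("scene_a", "take_off")  # "frame_a"
```

A miss (returning `None`) would mean no picture was pre-generated for that combination, so the server falls back to generating the picture from the predicted target control behavior alone.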
In the case where each functional module is divided according to its corresponding function, the embodiments of the present disclosure provide a determination device of a game screen, which may be a server or a chip applied to a server. Fig. 6 is a functional block diagram of a determination apparatus of a game screen provided in an exemplary embodiment of the present disclosure. As shown in fig. 6, the apparatus for determining a game screen includes:
A first obtaining module 601, configured to obtain operation behavior data of a game player in a game; the operation behavior data are used for indicating behavior characteristics of the game player in the target game scene;
a first prediction module 602, configured to predict a target control behavior of the game player at a future time according to the operation behavior data;
a generating module 603, configured to generate a target game picture according to the target control behavior and the target game scene if the probability that the game player triggers the target control behavior at a future time meets a preset probability requirement; the target game picture is a picture of the game player executing the target control action under the target game scene.
In one embodiment, the apparatus further comprises:
a response module for determining a control behavior matching the game operation instruction in response to the game operation instruction of the game player;
and the sending module is used for sending the rendering result of the target game picture to the game terminal under the condition that the control behavior matched with the game operation instruction is the target control behavior.
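The response and sending modules together implement a speculative-rendering check: the pre-rendered picture is shipped only when the player's actual instruction matches the predicted behavior. The sketch below assumes a simple instruction-to-behavior table; all names are hypothetical.

```python
# Illustrative sketch of the response/sending modules (names are assumptions).
INSTRUCTION_TO_BEHAVIOR = {"press_jump": "take_off", "press_left": "move_left"}

def respond(instruction, predicted_behavior, prerendered_frame):
    """Return the cached render if the matched behavior equals the prediction."""
    behavior = INSTRUCTION_TO_BEHAVIOR.get(instruction)  # response module
    if behavior == predicted_behavior:
        return prerendered_frame   # sending module: ship rendering result to terminal
    return None                    # mismatch: fall back to on-demand rendering

hit  = respond("press_jump", "take_off", "frame_a")  # "frame_a"
miss = respond("press_left", "take_off", "frame_a")  # None
```

The latency benefit comes entirely from the hit path: the rendering result already exists when the instruction arrives, so only transmission remains.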
In one embodiment, the apparatus further comprises:
The detection module is used for detecting whether the current game scene is the same as the target game scene or not;
the judging module is used for judging whether the current game scene is a historical game scene or not under the condition that the current game scene is determined to be different from the target game scene;
and the second prediction module is used for predicting the target control behavior of the game player again under the condition that the current game scene is not the historical game scene.
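The detection, judging and second-prediction modules form a three-way branch that can be sketched as one function. This is a hedged reading of the module descriptions, not the patented logic; return labels and the `repredict` callback are invented for the example.

```python
# Sketch of the scene-change handling flow (labels and callback are assumptions).
def handle_scene_change(current_scene, target_scene, history, repredict):
    """Re-run prediction only when the scene changed AND is not a historical scene."""
    if current_scene == target_scene:
        return "keep_prediction"   # detection module: scenes match, prediction stands
    if current_scene in history:
        return "reuse_history"     # judging module: known historical scene
    repredict(current_scene)       # second prediction module: predict again
    return "repredicted"

calls = []
result = handle_scene_change("scene_b", "scene_a", {"scene_c"}, calls.append)
# result == "repredicted"; repredict was invoked once for "scene_b"
```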
In one embodiment, the first prediction module 602 includes:
the first processing unit is used for inputting the operation behavior data into a random forest model for processing to obtain a target classification result; wherein the classification result is used for indicating the possibility of each preset control action performed by the game player at the future moment;
and the first determining unit is used for determining the target control behavior in the preset control behaviors based on the target classification result.
In one embodiment, the first prediction module 602 includes:
the second processing unit is used for inputting the operation behavior data into each decision tree model of the random forest model for processing to obtain a plurality of sub-classification results; wherein, each decision tree model correspondingly outputs a sub-classification result;
And a second determining unit configured to determine the plurality of sub-classification results as the target classification result.
In one embodiment, the first prediction module 602 includes:
a third determining unit, configured to determine, based on the multiple sub-classification results, a prediction result of each decision tree model in the random forest model for each preset control behavior; the prediction result is used for indicating the possibility that the preset control behavior is predicted to be the target control behavior by the decision tree model;
a fourth determining unit, configured to determine weight information of each decision tree model;
the calculation unit is used for carrying out weighted summation calculation on the prediction result and the weight information to obtain voting information of each preset control behavior; the voting information is used for indicating the probability that the preset control behavior is the target control behavior;
and a fifth determining unit, configured to determine, as the target control behavior, a preset control behavior corresponding to voting information that meets a preset threshold requirement.
In one embodiment, the apparatus further comprises:
the second acquisition module is used for acquiring a mapping relation between a preset control behavior and the target game scene; wherein the mapping relation characterizes the target game picture of the preset control action in the target game scene;
And the determining module is used for determining the target game picture according to the mapping relation.
The embodiment of the disclosure also provides an electronic device, including: at least one processor; and a memory for storing instructions executable by the at least one processor; wherein the at least one processor is configured to execute the instructions to implement the above-described methods disclosed by embodiments of the present disclosure.
Fig. 7 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present disclosure. As shown in fig. 7, the electronic device 700 includes at least one processor 701 and a memory 702 coupled to the processor 701, the processor 701 may perform the respective steps of the above-described methods disclosed in the embodiments of the present disclosure.
The processor 701 may also be referred to as a central processing unit (central processing unit, CPU), which may be an integrated circuit chip with signal processing capabilities. The steps of the above-described methods disclosed in the embodiments of the present disclosure may be accomplished by integrated logic circuits in hardware or by instructions in the form of software in the processor 701. The processor 701 may be a general purpose processor, a digital signal processor (digital signal processing, DSP), an ASIC, a field-programmable gate array (field-programmable gate array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present disclosure may be embodied directly in hardware in a decoding processor, or in a combination of hardware and software modules in a decoding processor. The software modules may reside in a memory 702 well known in the art, such as random access memory, flash memory, read only memory, programmable read only memory, electrically erasable programmable memory, or registers. The processor 701 reads the information in the memory 702 and, in combination with its hardware, performs the steps of the method described above.
In addition, various operations/processes according to the present disclosure, when implemented by software and/or firmware, may be installed from a storage medium or network to a computer system having a dedicated hardware structure, such as the computer system 800 shown in fig. 8, which is capable of performing various functions, including those described previously, when the various programs are installed. Fig. 8 is a block diagram of a computer system according to an exemplary embodiment of the present disclosure.
Computer system 800 is intended to represent various forms of digital electronic computing devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the computer system 800 includes a computing unit 801, and the computing unit 801 can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the computer system 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Various components in computer system 800 are connected to I/O interface 805, including: an input unit 806, an output unit 807, a storage unit 808, and a communication unit 809. The input unit 806 may be any type of device capable of inputting information to the computer system 800, and the input unit 806 may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device. The output unit 807 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, video/audio output terminals, vibrators, and/or printers. The storage unit 808 may include, but is not limited to, magnetic disks, optical disks. Communication unit 809 allows computer system 800 to exchange information/data with other devices over a network, such as the internet, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as bluetooth (TM) devices, wiFi devices, wiMax devices, cellular communication devices, and/or the like.
The computing unit 801 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 801 performs the various methods and processes described above. For example, in some embodiments, the above-described methods disclosed by embodiments of the present disclosure may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 700 via the ROM 802 and/or the communication unit 809. In some embodiments, the computing unit 801 may be configured by any other suitable means (e.g., by means of firmware) to perform the above-described methods disclosed by embodiments of the present disclosure.
The disclosed embodiments also provide a computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the above-described method disclosed by the disclosed embodiments.
A computer readable storage medium in embodiments of the present disclosure may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium described above can include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specifically, the computer-readable storage medium described above may include one or more wire-based electrical connections, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The disclosed embodiments also provide a computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the above-described methods of the disclosed embodiments.
In an embodiment of the present disclosure, computer program code for performing the operations of the present disclosure may be written in one or more programming languages, including but not limited to object oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules, components or units referred to in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a module, component or unit does not in some cases constitute a limitation of the module, component or unit itself.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
The above description is merely illustrative of some embodiments of the present disclosure and of the principles of the technology applied. It will be appreciated by persons skilled in the art that the scope of the disclosure referred to in this disclosure is not limited to the specific combinations of features described above, but also covers other embodiments which may be formed by any combination of the features described above or their equivalents without departing from the spirit of the disclosure, for example embodiments formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the present disclosure.
Although some specific embodiments of the present disclosure have been described in detail by way of example, it should be understood by those skilled in the art that the above examples are for illustration only and are not intended to limit the scope of the present disclosure. It will be appreciated by those skilled in the art that modifications may be made to the above embodiments without departing from the scope and spirit of the disclosure. The scope of the present disclosure is defined by the appended claims.

Claims (10)

1. A method for determining a game screen, the method comprising:
acquiring operation behavior data of a game player in a game; the operation behavior data are used for indicating behavior characteristics of the game player in the target game scene;
predicting a target control behavior of the game player at a future time according to the operation behavior data;
if the probability of triggering the target control behavior by the game player at the future moment meets the preset probability requirement, generating a target game picture according to the target control behavior and the target game scene; the target game picture is a picture of the game player executing the target control action under the target game scene.
2. The method of claim 1, wherein after generating a target game screen from the target control behavior and the target game scene, the method further comprises:
determining a control behavior matching the game operation instruction in response to the game operation instruction of the game player;
and sending a rendering result of the target game picture to a game terminal under the condition that the control behavior matched with the game operation instruction is the target control behavior.
3. The method according to claim 2, wherein after transmitting the rendering result of the target game screen to a game terminal, the method further comprises:
periodically detecting whether the current game scene of the game is the same as the target game scene;
judging whether the current game scene is a historical game scene or not under the condition that the current game scene is different from the target game scene;
and under the condition that the current game scene is not the historical game scene, predicting the target control behavior of the game player again.
4. The method of claim 1, wherein predicting a target control behavior of the game player at a future time based on the operational behavior data comprises:
inputting the operation behavior data into a random forest model for processing to obtain a target classification result; wherein the classification result is used for indicating the possibility of each preset control action executed by the game player at the future moment;
and determining the target control behavior in the preset control behaviors based on the target classification result.
5. The method of claim 4, wherein the inputting the operational behavior data into a random forest model for processing to obtain a classification result comprises:
Inputting the operation behavior data into each decision tree model of the random forest model for processing to obtain a plurality of sub-classification results; wherein, each decision tree model correspondingly outputs a sub-classification result;
and determining the sub-classification results as the target classification result.
6. The method of claim 5, wherein determining the target control behavior in the preset control behaviors based on the target classification result comprises:
based on the sub-classification results, determining a prediction result of each decision tree model in the random forest model on each preset control behavior; the prediction result is used for indicating the possibility that the preset control behavior is predicted to be the target control behavior by the decision tree model;
determining weight information of each decision tree model;
carrying out weighted summation calculation on the prediction result and the weight information to obtain voting information of each preset control behavior; the voting information is used for indicating the probability that the preset control behavior is the target control behavior;
and determining the preset control behavior corresponding to the voting information meeting the preset threshold requirement as the target control behavior.
7. The method according to claim 1, wherein the method further comprises:
obtaining a mapping relation between a preset control behavior and the target game scene; wherein the mapping relation characterizes the target game picture of the preset control action in the target game scene;
and determining the target game picture according to the mapping relation.
8. A game screen determination apparatus, characterized in that the apparatus comprises:
the first acquisition module is used for acquiring operation behavior data of a game player in a game; the operation behavior data are used for indicating behavior characteristics of the game player in the target game scene;
the first prediction module is used for predicting target control behaviors of the game player at future time according to the operation behavior data;
the generation module is used for generating a target game picture according to the target control behavior and the target game scene if the probability of triggering the target control behavior by the game player at the future moment meets the preset probability requirement; the target game picture is a picture of the game player executing the target control action under the target game scene.
9. An electronic device, comprising:
at least one processor;
a memory for storing the at least one processor-executable instruction;
wherein the at least one processor is configured to execute the instructions to implement the method of any of claims 1-7.
10. A computer readable storage medium, characterized in that instructions in the computer readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method of any one of claims 1-7.