CN117618918A - Virtual scene processing method and device, electronic equipment and storage medium

Virtual scene processing method and device, electronic equipment and storage medium

Info

Publication number
CN117618918A
CN117618918A
Authority
CN
China
Prior art keywords
feature
sample
features
historical
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410102302.3A
Other languages
Chinese (zh)
Other versions
CN117618918B (en)
Inventor
谭莲芝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202410102302.3A priority Critical patent/CN117618918B/en
Publication of CN117618918A publication Critical patent/CN117618918A/en
Application granted granted Critical
Publication of CN117618918B publication Critical patent/CN117618918B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application provides a virtual scene processing method and apparatus, an electronic device, and a storage medium. The method includes: performing data preprocessing on log data of a target object within a preset historical time period in a current virtual scene to obtain a target feature vector; determining basic cross features and basic regression features through a feature cross module and a feature regression module of a pre-trained data processing model, and determining target mapping features through a feature mapping module of the data processing model; determining historical match features through a gated recurrent module of the data processing model; performing feature fusion processing on the basic cross features, the basic regression features, the target mapping features, and the historical match features through a feature fusion module of the data processing model to obtain a model prediction probability; and determining an anti-churn processing strategy for the target object based on the model prediction probability. According to the method and the apparatus, churn of the target object in the virtual scene can be effectively prevented based on an accurate model prediction probability.

Description

Virtual scene processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to a virtual scene processing method, device, electronic device, and storage medium.
Background
Most games released today face a practical problem: the user acquisition cost is too high. To keep users acquired at such high cost in the game for a longer period of time, it is important to build a user anti-churn system on the operations side, in addition to making the game itself sufficiently playable. A user anti-churn system mainly predicts the likelihood that a user will churn in the future from the data already available in the game, and applies certain in-game intervention measures to users with a churn tendency to reduce that tendency.
However, for users who have a high churn tendency and would not be retained by the intervention anyway, if the cost of the intervention is high or the number of users requiring intervention is large, a certain degree of unfair competition will be caused and the game experience of normal users will be affected.
Disclosure of Invention
The embodiments of the present application provide a virtual scene processing method and apparatus, an electronic device, and a storage medium, which can effectively prevent churn of a target object in a virtual scene based on an accurate model prediction probability.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a virtual scene processing method, which includes the following steps: performing data preprocessing on log data of a target object within a preset historical time period in a current virtual scene to obtain a target feature vector, the target feature vector including a basic feature vector and a historical match feature matrix; performing basic feature extraction on the basic feature vector through a feature cross module and a feature regression module of a pre-trained data processing model, respectively, to correspondingly obtain basic cross features and basic regression features, and performing feature mapping on the target feature vector through a feature mapping module of the pre-trained data processing model to obtain target mapping features; performing historical feature extraction on the historical match feature matrix through a gated recurrent module of the pre-trained data processing model to obtain historical match features used for representing the comprehensive match performance of the target object within the preset historical time period; performing feature fusion processing on the basic cross features, the basic regression features, the target mapping features, and the historical match features through a feature fusion module of the pre-trained data processing model to obtain a model prediction probability used for representing the activity degree of the target object in the current virtual scene at the next moment after the current moment; and determining an anti-churn processing strategy for the target object in the current virtual scene based on the model prediction probability.
The embodiment of the application provides a virtual scene processing apparatus, which includes: a data preprocessing module, configured to perform data preprocessing on log data of a target object within a preset historical time period in a current virtual scene to obtain a target feature vector, the target feature vector including a basic feature vector and a historical match feature matrix; a feature processing module, configured to perform basic feature extraction on the basic feature vector through a feature cross module and a feature regression module of a pre-trained data processing model, respectively, to correspondingly obtain basic cross features and basic regression features, and perform feature mapping on the target feature vector through a feature mapping module of the pre-trained data processing model to obtain target mapping features; a historical feature extraction module, configured to perform historical feature extraction on the historical match feature matrix through a gated recurrent module of the pre-trained data processing model to obtain historical match features used for representing the comprehensive match performance of the target object within the preset historical time period; a feature fusion processing module, configured to perform feature fusion processing on the basic cross features, the basic regression features, the target mapping features, and the historical match features through a feature fusion module of the pre-trained data processing model to obtain a model prediction probability used for representing the activity degree of the target object in the current virtual scene at the next moment after the current moment; and a determining module, configured to determine an anti-churn processing strategy for the target object in the current virtual scene based on the model prediction probability.
In some embodiments, the log data includes basic information of the target object and match data of a plurality of historical matches of the target object within the preset historical time period; the historical match feature matrix is used to characterize the time-sequence relationship of the log data; the data preprocessing module is further configured to: perform log screening on the log data within the preset historical time period to obtain the basic information and the match data; perform feature extraction on the basic information to obtain the basic feature vector; perform feature extraction on each piece of match data among the plurality of pieces of match data to correspondingly obtain target match features of each historical match; and splice the target match features corresponding to the plurality of historical matches in order of the match times of the plurality of historical matches to obtain the historical match feature matrix.
In some embodiments, the data preprocessing module is further configured to: extracting features of the basic information to obtain an initial feature vector; and carrying out feature division on the initial feature vector to obtain a plurality of basic feature vectors under different feature dimensions.
In some embodiments, the data preprocessing module is further configured to: when it is detected that the target object is in a login state or a pre-login state in the current virtual scene, obtain basic log data and historical match log data of the target object in the current virtual scene within the preset historical time period; extract the basic information of the target object from the basic log data; and extract a plurality of pieces of match data of the target object and the match time of each piece of match data from the historical match log data.
In some embodiments, the feature processing module is further configured to: average each column in the historical match feature matrix through the feature mapping module of the pre-trained data processing model to obtain the average value corresponding to each column; construct a mean embedded feature vector based on the average value corresponding to each column; and splice the basic feature vector and the mean embedded feature vector through at least one fully connected layer of the feature mapping module to obtain the target mapping features.
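For illustration, the following is a minimal sketch of the feature mapping module described above, written in Python with PyTorch; the layer sizes, the use of two fully connected layers, and the class and parameter names are assumptions for the example rather than details from this application.

```python
import torch
import torch.nn as nn

class FeatureMappingModule(nn.Module):
    def __init__(self, base_dim: int, match_feat_dim: int, hidden_dim: int = 64):
        super().__init__()
        # "At least one fully connected layer" -- two layers are assumed here.
        self.mlp = nn.Sequential(
            nn.Linear(base_dim + match_feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, base_vec: torch.Tensor, match_matrix: torch.Tensor) -> torch.Tensor:
        # match_matrix: (batch, num_matches, match_feat_dim)
        # Column-wise mean over the historical matches -> mean embedded feature vector.
        mean_embed = match_matrix.mean(dim=1)              # (batch, match_feat_dim)
        # Splice the basic feature vector with the mean embedding, then map it.
        spliced = torch.cat([base_vec, mean_embed], dim=-1)
        return self.mlp(spliced)                           # target mapping features
```

The column-wise mean compresses a variable-length match history into a fixed-size embedding before it is spliced with the basic feature vector.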
In some embodiments, each row of the historical match feature matrix is used to characterize the match data of a single historical match at the same match time; the historical feature extraction module is further configured to: determine the hidden state obtained after each row of the historical match feature matrix is updated; for the t-th row feature of the historical match feature matrix, obtain, through the gated recurrent module, the (t-1)-th hidden state obtained after the (t-1)-th row feature of the historical match feature matrix is updated; determine, based on the t-th row feature and the (t-1)-th hidden state, the update gating output of an update gate of the gated recurrent module for the t-th row of the historical match feature matrix; determine, based on the t-th row feature and the (t-1)-th hidden state, the reset gating output of a reset gate of the gated recurrent module for the t-th row of the historical match feature matrix; determine, based on the update gating output of the t-th row, the reset gating output of the t-th row, and the (t-1)-th hidden state, the hidden state obtained after the t-th row of the historical match feature matrix is updated, where t is an integer greater than 1 and the maximum value of t equals the total number of rows of the historical match feature matrix; and activate, based on a preset activation function, the hidden state obtained after the last row of the historical match feature matrix is updated to obtain the historical match features.
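As an illustration of the gated recurrent step described above, the sketch below unrolls a standard GRU over the rows of the historical match feature matrix; the explicit weight layout, the hidden size, and the choice of tanh as the "preset activation function" are assumptions, not details from this application.

```python
import torch
import torch.nn as nn

class MatchHistoryGRU(nn.Module):
    def __init__(self, match_feat_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.update_gate = nn.Linear(match_feat_dim + hidden_dim, hidden_dim)
        self.reset_gate = nn.Linear(match_feat_dim + hidden_dim, hidden_dim)
        self.candidate = nn.Linear(match_feat_dim + hidden_dim, hidden_dim)

    def forward(self, match_matrix: torch.Tensor) -> torch.Tensor:
        # match_matrix: (batch, num_matches, match_feat_dim), rows ordered by match time
        batch, num_matches, _ = match_matrix.shape
        h = match_matrix.new_zeros(batch, self.hidden_dim)
        for t in range(num_matches):
            x_t = match_matrix[:, t, :]
            xh = torch.cat([x_t, h], dim=-1)
            z_t = torch.sigmoid(self.update_gate(xh))   # update gating output for row t
            r_t = torch.sigmoid(self.reset_gate(xh))    # reset gating output for row t
            h_cand = torch.tanh(self.candidate(torch.cat([x_t, r_t * h], dim=-1)))
            h = (1.0 - z_t) * h + z_t * h_cand          # hidden state after updating row t
        # Activate the hidden state of the last row (tanh assumed) -> historical match features.
        return torch.tanh(h)
```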
In some embodiments, the determining module is further configured to: determine a prediction result for the target object in the current virtual scene based on the model prediction probability; when the prediction result is that the target object will be in a churned state at the next moment, enable an anti-churn mode for the target object in the current virtual scene; and when the prediction result is that the target object will be in a non-churned state at the next moment, maintain a normal match mode for the target object in the current virtual scene.
In some embodiments, the determining module is further configured to: when the model prediction probability is greater than or equal to a preset churn probability threshold, determine that the prediction result is that the target object will be in the churned state at the next moment; and when the model prediction probability is less than the preset churn probability threshold, determine that the prediction result is that the target object will be in the non-churned state at the next moment.
In some embodiments, the apparatus further includes a model training module, configured to: obtain sample data, the sample data including sample log data and a prediction label of the sample log data, where the sample log data is log data related to at least one sample target object; perform sample preprocessing on the sample log data to obtain a sample target feature vector, and input the sample target feature vector into a data processing model to be trained, the sample target feature vector including a sample basic feature vector and a sample historical match feature matrix; perform basic feature extraction on the sample basic feature vector through a feature cross module and a feature regression module of the data processing model, respectively, to correspondingly obtain sample basic cross features and sample basic regression features, and perform feature mapping on the sample target feature vector through a feature mapping module of the data processing model to obtain sample target mapping features; perform historical feature extraction on the sample historical match feature matrix through a gated recurrent module of the data processing model to obtain sample historical match features used for representing the sample comprehensive match performance of the sample target object within a preset sample historical time period; perform feature fusion processing on the sample basic cross features, the sample basic regression features, the sample target mapping features, and the sample historical match features through a feature fusion module of the data processing model to obtain a sample prediction probability; determine a loss result based on the sample prediction probability and the prediction label; and update the model parameters of the data processing model based on the loss result to obtain the pre-trained data processing model.
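The training step described above can be sketched as follows, assuming a binary churn label and binary cross-entropy as the loss; the loss function, optimizer, and the model's call signature are illustrative assumptions, since the application only states that a loss result is determined from the sample prediction probability and the prediction label.

```python
import torch
import torch.nn as nn

def train_step(model, optimizer, sample_base_vec, sample_match_matrix, labels):
    # labels: 1.0 for "churned at the next moment", 0.0 otherwise (assumed encoding)
    criterion = nn.BCELoss()
    optimizer.zero_grad()
    # Assumed interface: the model takes the sample basic feature vector and the
    # sample historical match feature matrix and returns the sample prediction probability.
    pred_prob = model(sample_base_vec, sample_match_matrix)
    loss = criterion(pred_prob, labels)   # loss result from probability and label
    loss.backward()
    optimizer.step()                      # update the model parameters
    return loss.item()
```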
In some embodiments, the sample log data includes sample basic information of the sample target object and sample match data of a plurality of sample historical matches of the sample target object within the preset sample historical time period; the sample historical match feature matrix is used to characterize the time-sequence relationship of the sample log data; the model training module is further configured to: perform log screening on the sample log data for each sample target object to obtain the sample basic information and the sample match data; perform feature extraction on the sample basic information to obtain the sample basic feature vector; perform feature extraction on each piece of sample match data among the plurality of pieces of sample match data to correspondingly obtain sample target match features of each sample historical match; and splice the sample target match features corresponding to the plurality of sample historical matches in order of the match times of the plurality of sample historical matches to obtain the sample historical match feature matrix.
An embodiment of the present application provides an electronic device, including: a memory for storing computer executable instructions; and the processor is used for realizing the virtual scene processing method provided by the embodiment of the application when executing the computer executable instructions stored in the memory.
The embodiment of the application provides a computer readable storage medium, which stores computer executable instructions for implementing the virtual scene processing method provided by the embodiment of the application when being executed by a processor.
Embodiments of the present application provide a computer program product comprising computer-executable instructions stored in a computer-readable storage medium; the processor of the electronic device reads the computer executable instructions from the computer readable storage medium and executes the computer executable instructions to implement the virtual scene processing method provided by the embodiment of the application.
The embodiment of the application has the following beneficial effects:
firstly, data preprocessing is performed on the log data of a target object within a preset historical time period in a current virtual scene, and the basic feature vector and the historical match feature matrix in the resulting target feature vector are used as the model input of a pre-trained data processing model; then, basic feature extraction is performed on the basic feature vector through a feature cross module and a feature regression module of the pre-trained data processing model, respectively, to correspondingly obtain basic cross features and basic regression features, and feature mapping is performed on the target feature vector through a feature mapping module of the pre-trained data processing model to obtain target mapping features; historical feature extraction is performed on the historical match feature matrix through a gated recurrent module of the pre-trained data processing model to obtain historical match features used for representing the comprehensive match performance of the target object within the preset historical time period; feature fusion processing is performed on the basic cross features, the basic regression features, the target mapping features, and the historical match features through a feature fusion module of the pre-trained data processing model to obtain a model prediction probability used for representing the activity degree of the target object in the current virtual scene at the next moment after the current moment; and finally, an anti-churn processing strategy for the target object in the current virtual scene is determined based on the model prediction probability. In this way, model prediction is performed on the target feature vector of the target object within the preset historical time period in the current virtual scene through the pre-trained data processing model, the historical match information in the historical match feature matrix is extracted by the gated recurrent module of the data processing model, and the output features of the feature cross module, the feature regression module, the feature mapping module, and the gated recurrent module are then fused, so that a more accurate model prediction probability is obtained, the activity degree of the target object at the next moment is judged, and an anti-churn processing strategy is accurately formulated accordingly, thereby avoiding churn of the target object caused by a poor match experience.
Drawings
Fig. 1 is a schematic structural diagram of a virtual scene processing system architecture according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a virtual scene processing apparatus according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of an alternative virtual scene processing method according to an embodiment of the present disclosure;
fig. 4 is another alternative flow chart of the virtual scene processing method provided in the embodiment of the present application;
FIG. 5 is a schematic diagram of an implementation process for obtaining historical match features according to an embodiment of the present application;
fig. 6 is a schematic diagram of an implementation process of obtaining an anti-churn processing strategy according to an embodiment of the present application;
FIG. 7 is a flow chart of a training method for a pre-trained data processing model provided in an embodiment of the present application;
FIG. 8 is a schematic illustration of a game scenario of a multiplayer online tactical competitive game provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of a match provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of an architecture involved on the overall technical side provided by embodiments of the present application;
fig. 11 is a schematic diagram of a client reporting a service request to a server according to an embodiment of the present application;
fig. 12 is a schematic process diagram of protocol forwarding at a server side according to an embodiment of the present application;
FIG. 13 is a schematic diagram of the request service part of the battle server provided by an embodiment of the present application;
FIG. 14 is a flowchart of an anti-churn model for anti-churn service request according to an embodiment of the present application;
fig. 15 is a schematic diagram of a network structure of a model based on a log sequence GRU according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the present application will be described in further detail with reference to the accompanying drawings, and the described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without making any inventive effort are within the scope of the present application.
In the following description, reference is made to "some embodiments", which describe a subset of all possible embodiments; it will be appreciated that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Where a description such as "first/second" appears in this application, the following note applies: the terms "first/second/third" are merely used to distinguish similar objects and do not represent a particular ordering of the objects. It should be understood that "first/second/third" may be interchanged in a particular order or sequence where permitted, so that the embodiments of the application described herein can be implemented in an order other than that illustrated or described herein.
In the present embodiment, the term "module" or "unit" refers to a computer program or a part of a computer program having a predetermined function, and works together with other relevant parts to achieve a predetermined object, and may be implemented in whole or in part by using software, hardware (such as a processing circuit or a memory), or a combination thereof. Also, a processor (or multiple processors or memories) may be used to implement one or more modules or units. Furthermore, each module or unit may be part of an overall module or unit that incorporates the functionality of the module or unit.
Unless defined otherwise, all technical and scientific terms used in the embodiments of the present application have the same meaning as commonly understood by one of ordinary skill in the art. The terminology used in the embodiments of the application is for the purpose of describing the embodiments of the application only and is not intended to be limiting of the application.
Before the embodiments of the present application are further described in detail, the terms involved in the embodiments of the present application are explained; the following explanations apply to these terms.
1) Gated recurrent unit (Gate Recurrent Unit, GRU): a kind of recurrent neural network (Recurrent Neural Network, RNN) that controls the flow of information through a gating mechanism, thereby modeling long-term dependencies in sequences and alleviating the problems that an RNN cannot retain long-term memory and that gradients vanish during back propagation.
In the related art, one approach proposes anti-churn measures only for new users and is only used to predict whether a user will churn, after which compensation is given. However, there are users in a game who churn naturally, and these users will not be retained merely by giving compensation; if too much compensation is given, such as the compensation competition mentioned in that approach, the compensation is wasted on users who cannot be retained. If the cost of the compensation is relatively high, or the number of compensated users is relatively large, a certain degree of unfair competition is caused and the game experience of normal players is affected. Another approach, suited to advertisement push scenarios, adopts a push mechanism: when a user is about to stop logging in to a certain platform or stop clicking on a certain platform page, interesting content is pushed or consumption coupons are directly issued to reach the user and improve user stickiness. However, that approach simply determines the user type by assigning a static label, that is, the process ends once the user is marked as about to go silent, and it cannot be applied to the dynamic prediction process in a game scene.
Based on at least one of the above problems in the related art, the embodiments of the present application provide a causal-inference-based anti-churn method for all active users, which adopts in-game intervention measures while guaranteeing fair gameplay. When it is determined from the model prediction result that a user has a churn tendency and that churn can be prevented by in-game intervention, the in-game intervention measure is applied to the user, that is, the user is given certain compensation.
The following describes exemplary applications of the virtual scene processing device (i.e., electronic device) provided in the embodiments of the present application, where the device provided in the embodiments of the present application may be implemented as a notebook computer, a tablet computer, a desktop computer, a set-top box, a mobile device (for example, a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, a portable game device), a smart phone, a smart speaker, a smart watch, a smart television, a vehicle-mounted terminal, and other various types of user terminals capable of performing data processing or capable of performing virtual scene processing, and may also be implemented as a server. In the following, an exemplary application when the virtual scene processing apparatus is implemented as a server will be described.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an architecture of a virtual scene processing system 100 according to an embodiment of the present application, in order to support a virtual scene processing application, the virtual scene processing application is running on a terminal 400, and the terminal 400 is connected to a server 200 through a network 300, where the network 300 may be a wide area network or a local area network, or a combination of the two.
The terminal 400 is configured to send a virtual scene processing request to the server 200, and the server 200 constitutes the virtual scene processing device in the embodiments of the present application. The server 200 is configured to, in response to the virtual scene processing request, perform data preprocessing on log data of a target object within a preset historical time period in a current virtual scene to obtain a target feature vector, the target feature vector including a basic feature vector and a historical match feature matrix; perform basic feature extraction on the basic feature vector through a feature cross module and a feature regression module of a pre-trained data processing model, respectively, to correspondingly obtain basic cross features and basic regression features, and perform feature mapping on the target feature vector through a feature mapping module of the pre-trained data processing model to obtain target mapping features; perform historical feature extraction on the historical match feature matrix through a gated recurrent module of the pre-trained data processing model to obtain historical match features used for representing the comprehensive match performance of the target object within the preset historical time period; perform feature fusion processing on the basic cross features, the basic regression features, the target mapping features, and the historical match features through a feature fusion module of the pre-trained data processing model to obtain a model prediction probability used for representing the activity degree of the target object in the current virtual scene at the next moment after the current moment; and determine an anti-churn processing strategy for the target object in the current virtual scene based on the model prediction probability. After obtaining the anti-churn processing strategy, the server 200 returns it to the terminal 400, so that the terminal 400 outputs the anti-churn processing strategy or continues the next service processing based on it.
In some embodiments, the server 200 may be a stand-alone physical server, a server cluster or a distributed system formed by a plurality of physical servers, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), and basic cloud computing services such as big data and artificial intelligence platforms. The terminal 400 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, a car terminal, etc. The terminal and the server may be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiments of the present application.
Referring to fig. 2, fig. 2 is a schematic structural diagram of an electronic device 40 provided in an embodiment of the present application, where the electronic device 40 shown in fig. 2 may be a virtual scene processing device, and the virtual scene processing device includes: at least one processor 410, a memory 450, at least one network interface 420, and a user interface 430. The various components in the virtual scene processing apparatus are coupled together by a bus system 440. It is understood that the bus system 440 is used to enable connected communication between these components. The bus system 440 includes a power bus, a control bus, and a status signal bus in addition to the data bus. But for clarity of illustration the various buses are labeled in fig. 2 as bus system 440.
The processor 410 may be an integrated circuit chip having signal processing capability, for example a general purpose processor (such as a microprocessor or any conventional processor), a digital signal processor (Digital Signal Processor, DSP), another programmable logic device, discrete gate or transistor logic, or discrete hardware components.
The user interface 430 includes one or more output devices 431, including one or more speakers and/or one or more visual displays, that enable presentation of the media content. The user interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
Memory 450 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like. Memory 450 optionally includes one or more storage devices physically remote from processor 410.
Memory 450 includes volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory. The nonvolatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a random access Memory (Random Access Memory, RAM). The memory 450 described in the embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 450 is capable of storing data to support various operations, examples of which include programs, modules and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 451 including system programs, e.g., framework layer, core library layer, driver layer, etc., for handling various basic system services and performing hardware-related tasks, for implementing various basic services and handling hardware-based tasks;
a network communication module 452 for accessing other electronic devices via one or more (wired or wireless) network interfaces 420, the exemplary network interface 420 comprising: bluetooth, wireless compatibility authentication (WiFi), and universal serial bus (Universal Serial Bus, USB), etc.; a presentation module 453 for enabling presentation of information (e.g., a user interface for operating peripheral devices and displaying content and information) via one or more output devices 431 (e.g., a display screen, speakers, etc.) associated with the user interface 430; an input processing module 454 for detecting one or more user inputs or interactions from one of the one or more input devices 432 and translating the detected inputs or interactions.
In some embodiments, the apparatus provided in the embodiments of the present application may be implemented in software, and fig. 2 shows a virtual scene processing apparatus 455 stored in a memory 450, which may be software in the form of a program and a plug-in, and includes the following software modules: the data preprocessing module 4551, the feature processing module 4552, the history feature extraction module 4553, the feature fusion processing module 4554 and the determination module 4555 are logical, and thus may be arbitrarily combined or further split according to the implemented functions. The functions of the respective modules will be described hereinafter.
In other embodiments, the apparatus provided by the embodiments of the present application may be implemented in hardware, and by way of example, the apparatus provided by the embodiments of the present application may be a processor in the form of a hardware decoding processor that is programmed to perform the virtual scene processing method provided by the embodiments of the present application, e.g., the processor in the form of a hardware decoding processor may employ one or more application specific integrated circuits (Application Specific Integrated Circuit, ASIC), digital signal processors (Digital Signal Processor, DSP), programmable logic devices (Programmable Logic Device, PLD), complex programmable logic devices (Complex Programmable Logic Device, CPLD), field programmable gate arrays (Field-Programmable Gate Array, FPGA), or other electronic components.
In some embodiments, the terminal or the server may implement the virtual scene processing method provided in the embodiments of the present application by running various computer executable instructions or computer programs. For example, the computer-executable instructions may be commands at the micro-program level, machine instructions, or software instructions. The computer program may be a native program or a software module in an operating system; the Application may be a local (Native) Application (APP), i.e. a program that needs to be installed in an operating system to run, or may be an applet that may be embedded in any APP, i.e. a program that only needs to be downloaded into a browser environment to run. In general, the computer-executable instructions may be any form of instructions and the computer program may be any form of application, module, or plug-in.
The virtual scene processing method provided by the embodiments of the present application may be executed by an electronic device, where the electronic device may be a server or a terminal, that is, the virtual scene processing method of the embodiments of the present application may be executed by the server or the terminal, or may be executed by interaction between the server and the terminal.
Referring to fig. 3, fig. 3 is a schematic flowchart of an alternative virtual scene processing method according to an embodiment of the present application. The method will be described with reference to the steps shown in fig. 3, taking a server as the execution subject of the virtual scene processing method as an example; the method includes the following steps S101 to S105:
step S101, carrying out data preprocessing on log data of a target object in a preset historical time period in a current virtual scene to obtain a target feature vector.
In this embodiment of the present application, the current virtual scene is a scene formed by three-dimensional modeling in which the target object currently participates for a game experience, for example, a certain game scene. The target object is a participating user in the current virtual scene. The preset historical time period is a preset period determining how much log data is acquired, set according to the actual model prediction result or the actual match situation of the target object. The log data includes basic information of the target object and match data of multiple historical matches of the target object within the preset historical time period. Different preset historical time periods may lead to different log data being acquired and therefore to different model prediction results, that is, they affect the model prediction probability output by the subsequent pre-trained data processing model. The basic information of the target object refers to information of different data types related to the target object in the current virtual scene, including basic attributes, login information, acquired virtual resource information, match statistics, and the like of the target object. The match data of the multiple historical matches of the target object within the preset historical time period is the match data of the target object's most recent historical matches in the current virtual scene, that is, those whose match times are closest to the current time point. Normally, the log data are recorded in list form and stored in a log table, and when performing virtual scene processing, the server automatically extracts the log data of the target object within the preset historical time period from the log table. The data types and the number of data types contained in the basic information may change according to different virtual scene processing requirements, which is not specifically limited herein.
The target feature vector includes a basic feature vector and a historical match feature matrix. Data preprocessing may be performed separately on the basic information and the match data in the log data to correspondingly obtain the basic feature vector and the historical match feature matrix, which together form the target feature vector. For example, corresponding to the information of different data types contained in the basic information of the target object, the basic feature vector includes basic attribute features, login features, acquired virtual resource features, match statistics features, and the like.
Here, the data preprocessing may be performing different data processing operations on the log data, for example, feature extraction, sampling, and other data processing operations, so that the target feature vector after the data preprocessing is used as a model input of the pre-trained data processing model, and the subsequent virtual scene processing process is continued.
Step S102, respectively extracting basic features of the basic feature vector through a feature cross module and a feature regression module of the pre-trained data processing model to correspondingly obtain basic cross features and basic regression features, and performing feature mapping on the target feature vector through a feature mapping module of the pre-trained data processing model to obtain target mapping features.
In this embodiment, the pre-trained data processing model is a trained data processing model stored offline after the model training process of the data processing model to be trained, and it includes a feature cross module, a feature regression module, a feature mapping module, and a gated recurrent module. The data to be predicted is input into the pre-trained data processing model and passes sequentially through the data processing of these different modules, finally yielding the model prediction probability representing the activity degree of the target object in the current virtual scene at the next moment after the current moment, where the data to be predicted may be the target feature vector obtained by preprocessing the log data.
Basic feature extraction is performed on the basic feature vector in the target feature vector through the feature cross module to obtain the basic cross features. The feature cross module may be a factorization machine, which, through the basic feature extraction process, crosses and combines the multiple different types of features contained in the basic feature vector to obtain basic cross features that integrate the multiple different types of features. The basic cross features contain association information between the different types of features rather than modeling each type of feature directly, so constructing new cross features (i.e., the basic cross features) as feature combinations can improve the model prediction effect.
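As one possible realization of the factorization-machine-based feature cross module mentioned above, the sketch below computes the standard second-order FM interaction term; the embedding size and the per-dimension output form are assumptions for the example rather than details from this application.

```python
import torch
import torch.nn as nn

class FMCrossModule(nn.Module):
    def __init__(self, base_dim: int, embed_dim: int = 16):
        super().__init__()
        # One latent embedding per input feature, as in a standard factorization machine.
        self.v = nn.Parameter(torch.randn(base_dim, embed_dim) * 0.01)

    def forward(self, base_vec: torch.Tensor) -> torch.Tensor:
        # Standard FM pairwise interaction: 0.5 * ((xV)^2 - (x^2)(V^2)),
        # kept per embedding dimension so it can be fused with the other features later.
        sum_sq = torch.matmul(base_vec, self.v) ** 2        # (batch, embed_dim)
        sq_sum = torch.matmul(base_vec ** 2, self.v ** 2)   # (batch, embed_dim)
        return 0.5 * (sum_sq - sq_sum)                      # basic cross features
```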
Basic feature extraction is performed on the basic feature vector in the target feature vector through the feature regression module to obtain the basic regression features. The feature regression module may be a logistic regression layer, which, through the basic feature extraction process, performs feature mapping on the multiple different types of features contained in the basic feature vector to obtain the basic regression features. The feature regression module is simple to implement and can quickly model each type of feature in the basic feature vector, improving the running speed of the model.
Feature mapping is performed on the target feature vector through the feature mapping module, that is, feature mapping is performed simultaneously on the basic feature vector and the historical match feature matrix in the target feature vector to obtain the target mapping features. For example, the feature mapping module may be a fully connected layer; there may be multiple fully connected layers, and the number of layers may change according to the actual model prediction result. The historical match feature matrix contains the match features of multiple historical matches, and the match features of each historical match form one row of the matrix. First, each column of the historical match feature matrix is averaged to obtain the average match features over the multiple historical matches, and then the basic feature vector and the average match features are spliced to obtain the target mapping features. The feature mapping module comprehensively models each type of feature in the basic feature vector and the historical match feature matrix, extracting and integrating the useful information in both, thereby improving the model's prediction capability.
Step S103, performing historical feature extraction on the historical match feature matrix through the gated recurrent module of the pre-trained data processing model to obtain historical match features used for representing the comprehensive match performance of the target object within the preset historical time period.
In the embodiment of the application, the historical match features are used to represent the comprehensive match performance of the target object within the preset historical time period. The comprehensive match performance is comprehensive match information extracted from the match data of the target object's multiple matches within the preset historical time period, and the match experience of the target object within the preset historical time period can be summarized from it. Historical feature extraction is performed on the preprocessed historical match feature matrix through the gated recurrent module to obtain the historical match features. For example, the gated recurrent module may consist of multiple layers of gated recurrent units (GRUs). A gated recurrent module with time-sequence characteristics can capture long-term dependencies in sequence data; since the historical match feature matrix is composed of the features corresponding to the match data of multiple historical matches whose match times have an order, it is sequence data, so the gated recurrent module can effectively learn the correlations between the rows of the historical match feature matrix and improve the learning performance of the module.
Step S104, performing feature fusion processing on the basic cross features, the basic regression features, the target mapping features and the historical match features through the feature fusion module of the pre-trained data processing model to obtain a model prediction probability used for representing the activity degree of the target object in the current virtual scene at the next moment after the current moment.
In this embodiment of the present application, the model prediction probability represents the activity degree of the target object in the current virtual scene at the next moment after the current moment, and is a value between 0 and 1. The model prediction probability is inversely related to the activity degree: the larger the model prediction probability, the lower the activity degree of the target object and the higher its churn probability; the smaller the model prediction probability, the higher the activity degree of the target object and the lower its churn probability.
The basic cross features and basic regression features obtained by basic feature extraction, the target mapping features obtained by feature mapping, and the historical match features obtained by historical feature extraction are used together as the input features of the feature fusion module. Feature fusion processing is performed on them through the feature fusion function of the feature fusion module, and the model prediction probability of the pre-trained data processing model is finally output. For example, the feature fusion module may consist of a Concat layer. The feature fusion module connects these different feature tensors, realizing joint modeling of the basic cross features, the basic regression features, the target mapping features, and the historical match features, that is, capturing the different feature information in them to help the model better understand the data, thereby improving the expressive capacity of the model and obtaining an accurate model prediction probability.
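The feature fusion step can be sketched as follows: the four feature tensors are concatenated (the Concat operation mentioned above) and mapped to a single probability. The final linear layer and sigmoid are assumptions about how the fused vector is turned into a probability; the application itself only specifies the concatenation and the probabilistic output.

```python
import torch
import torch.nn as nn

class FeatureFusionModule(nn.Module):
    def __init__(self, fused_dim: int):
        super().__init__()
        # fused_dim = sum of the dimensions of the four input feature tensors.
        self.head = nn.Linear(fused_dim, 1)

    def forward(self, cross_feat, regress_feat, mapping_feat, history_feat):
        # Concat the basic cross, basic regression, target mapping and historical match features.
        fused = torch.cat([cross_feat, regress_feat, mapping_feat, history_feat], dim=-1)
        # Model prediction probability in (0, 1).
        return torch.sigmoid(self.head(fused)).squeeze(-1)
```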
Step S105, determining an anti-churn processing strategy for the target object in the current virtual scene based on the model prediction probability.
In this embodiment of the present application, the anti-churn processing strategy is used to change the match mode of the target object in the current virtual scene. The match mode includes an anti-churn mode and a normal match mode; correspondingly, the anti-churn processing strategy includes enabling the anti-churn mode for the target object in the current virtual scene or maintaining the normal match mode. According to the magnitude of the model prediction probability, the state of the target object in the current virtual scene at the next moment can be judged, that is, whether the target object will be in a churned state or a non-churned state at the next moment. The corresponding anti-churn processing strategy is then determined according to that state: when the target object will be in the churned state at the next moment, the anti-churn mode is enabled for the target object in the current virtual scene, improving the match experience of the target object and increasing its stickiness in the current virtual scene; when the target object will be in the non-churned state at the next moment, the normal match mode is maintained for the target object in the current virtual scene, ensuring its normal match experience.
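A minimal sketch of the decision logic described above follows; the threshold value of 0.5 is an illustrative stand-in for the preset churn probability threshold, and the returned strategy names are hypothetical.

```python
def decide_anti_churn_strategy(pred_prob: float, churn_threshold: float = 0.5) -> str:
    if pred_prob >= churn_threshold:
        # Predicted to be in the churned state at the next moment: enable the anti-churn
        # mode (e.g. in-game intervention / compensation) for the target object.
        return "enable_anti_churn_mode"
    # Predicted to be in the non-churned state: keep the normal match mode.
    return "keep_normal_match_mode"
```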
According to the virtual scene processing method, data preprocessing is first performed on the log data of a target object within a preset historical time period in a current virtual scene, and the basic feature vector and the historical match feature matrix in the resulting target feature vector are used as the model input of the pre-trained data processing model. Then, basic feature extraction is performed on the basic feature vector through the feature cross module and the feature regression module of the pre-trained data processing model, respectively, to correspondingly obtain basic cross features and basic regression features, and feature mapping is performed on the target feature vector through the feature mapping module of the pre-trained data processing model to obtain target mapping features; historical feature extraction is performed on the historical match feature matrix through the gated recurrent module of the pre-trained data processing model to obtain historical match features used for representing the comprehensive match performance of the target object within the preset historical time period; and feature fusion processing is performed on the basic cross features, the basic regression features, the target mapping features, and the historical match features through the feature fusion module of the pre-trained data processing model to obtain the model prediction probability used for representing the activity degree of the target object in the current virtual scene at the next moment after the current moment. Finally, an anti-churn processing strategy for the target object in the current virtual scene is determined based on the model prediction probability. In this way, model prediction is performed on the target feature vector of the target object within the preset historical time period in the current virtual scene through the pre-trained data processing model, the historical match information in the historical match feature matrix is extracted by the gated recurrent module of the data processing model, and the output features of the feature cross module, the feature regression module, the feature mapping module, and the gated recurrent module are then fused, so that a more accurate model prediction probability is obtained, the activity degree of the target object at the next moment is judged, and an anti-churn processing strategy is accurately formulated accordingly, thereby avoiding churn of the target object caused by a poor match experience.
The virtual scene processing method in the embodiments of the present application will be described below in connection with the interaction between a terminal and a server in the virtual scene processing system. It should be noted that the virtual scene processing method implemented through interaction between the terminal and the server is substantially the same as the virtual scene processing method executed by the server in the above embodiment, except that the actions executed by the terminal during execution of the method are also described in this embodiment, and some steps may be executed by either the terminal or the server. Therefore, for steps in this embodiment that are the same as those in the above embodiment but differ in execution subject, this embodiment is only an exemplary illustration; in practice they may be executed by any execution subject, which is not limited by the embodiments of the present application.
Fig. 4 is another optional flowchart of a virtual scene processing method according to an embodiment of the present application, as shown in fig. 4, the method includes the following steps S201 to S213:
in step S201, the terminal receives a virtual scene processing operation input by a user.
In this embodiment of the present application, a user may input a virtual scene processing operation at a client of a virtual scene processing application, in the virtual scene processing application, a virtual scene processing function may be provided, and a user (may be a virtual scene processing designer) may input the virtual scene processing operation at the virtual scene processing function page to trigger a virtual scene processing request.
In some embodiments, when inputting the virtual scene processing operation, the user may also input the log table of the target object in the current virtual scene or the log data within the preset historical time period at the same time. When the terminal receives the log table or the log data, a virtual scene processing confirmation window pops up on the virtual scene processing function page, and after the terminal detects that the user clicks a virtual scene processing button, further model prediction processing is performed on the log table or the log data to determine the anti-churn processing strategy for the target object in the current virtual scene. Alternatively, in other embodiments, the user may directly input the log table or the log data on the virtual scene processing function page, and after receiving the log data, the terminal may directly trigger the virtual scene processing function and perform further model prediction processing on the log data to determine the anti-churn processing strategy for the target object in the current virtual scene.
In step S202, the terminal generates a virtual scene processing request in response to the virtual scene processing operation.
In the embodiment of the application, the data input by the user can be encapsulated into the virtual scene processing request. For example, the display interface of the virtual scene processing application shows all the log data in the log table of the target object in the current virtual scene; the user can select or sample the data according to actual requirements to obtain the log data within the preset historical time period, and the log data input by the user is then encapsulated into the virtual scene processing request. A pre-trained data processing model specified by the user may also be encapsulated into the virtual scene processing request.
In step S203, the terminal sends a virtual scene processing request to the server.
In step S204, the server performs data preprocessing on log data of the target object in the preset historical time period in the current virtual scene in response to the virtual scene processing request, to obtain a target feature vector.
In this embodiment of the present application, if log data of a target object in a current virtual scene in a preset historical time period is encapsulated in the virtual scene processing request, the log data of the target object in the current virtual scene in the preset historical time period may be directly obtained by parsing.
In the embodiment of the application, the log data includes basic information of the target object and match data of multiple historical matches of the target object within the preset historical time period, and the target feature vector includes a basic feature vector and a historical match feature matrix. The basic information of the target object may be basic attributes of the target object, such as its rank, level, or customized game character. The log data may also include activity data of the target object in the virtual scene, such as the most recent login counts and login times; virtual resource information acquired by the target object in the current virtual scene, such as the level of the acquired virtual resources and the frequency with which they are acquired; or match statistics of the target object, such as the number of knockdowns scored by the target object on a given day or in a given match, the number of times it was knocked down, and the match duration. The match data of the multiple historical matches within the preset historical time period is the data of the most recent historical matches of the target object, such as the interval between two adjacent historical matches, the duration of each historical match, or the win/loss result of each match.
In some embodiments, the data preprocessing of the log data of the target object in the current virtual scene within the preset historical time period proceeds as follows: the log data within the preset historical time period is filtered to obtain the basic information and the match data; feature extraction is performed on the basic information to obtain the basic feature vector; feature extraction is performed on each match in the multiple matches to obtain the target log feature of each historical match; and the target log features of the multiple historical matches are concatenated in the order of their match times to obtain the historical match feature matrix. In general, the log data is stored in a log table that holds a large amount of information related to the target object in the current virtual scene; during virtual scene processing, the log data necessary for the processing is first filtered out of the log table, yielding the basic information and the match data. The historical match feature matrix characterizes the time sequence of the log data, and the multiple pieces of match data correspond to the multiple historical matches of the target object within the preset historical time period; accordingly, the match times of those historical matches have a definite order. Feature extraction is performed on the basic information and on each piece of match data through offline task computation, yielding the basic feature vector and the target log feature of each historical match, i.e., the target log features corresponding to the multiple historical matches. These target log features are concatenated in the order of the match times to obtain the historical match feature matrix that characterizes the time sequence of the log data. The basic feature vector and the historical match feature matrix obtained after data preprocessing together form the target feature vector. Through this feature extraction of the basic information and of each piece of match data, the preprocessed target feature vector serves as the model input of the pre-trained data processing model and is suitable for the subsequent model prediction of that model.
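A minimal Python sketch of this preprocessing flow is given below. The field names and encoder functions (basic_info, matches, match_time, base_encoder, match_encoder) are illustrative assumptions and are not names taken from the log schema described above.

```python
import numpy as np

def preprocess_logs(log_records, base_encoder, match_encoder):
    """Split filtered log records into basic info and per-match data, then
    build the basic feature vector and the historical match feature matrix."""
    # Log filtering: keep only the fields needed for churn prediction (assumed layout).
    basic_info = log_records["basic_info"]      # e.g. rank, level, login counts
    matches = log_records["matches"]            # one dict per historical match

    # Feature extraction for the basic information.
    base_vector = base_encoder(basic_info)      # -> shape (d_base,)

    # Feature extraction for each historical match.
    match_features = [match_encoder(m) for m in matches]

    # Concatenate per-match features in ascending order of match time,
    # so the rows of the matrix follow the time sequence of the matches.
    order = np.argsort([m["match_time"] for m in matches])
    match_matrix = np.stack([match_features[i] for i in order])  # (n_matches, d_match)

    return base_vector, match_matrix
```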
In some embodiments, the feature extraction of the basic information to obtain the basic feature vector proceeds as follows: feature extraction is performed on the basic information to obtain an initial feature vector, and the initial feature vector is then divided into multiple basic feature vectors under different feature dimensions. That is, the basic information is first converted into the initial feature vector through feature extraction, and the initial feature vector is then divided, for example by an embedding layer, either equidistantly or with equal frequency. For example, if the basic information contains the level of the target object, equidistant division may map the initial feature vector of levels 1 to 10 to the basic feature vector 0000 and the initial feature vector of levels 11 to 20 to the basic feature vector 0001. Equal-frequency division may partition the values according to the number of target objects participating in virtual scene processing, assigning different segments to different basic feature vectors. Basic feature vectors under multiple different feature dimensions are obtained according to the different data types contained in the basic information, the number of feature dimensions being equal to the number of data types.
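The two division schemes mentioned above can be sketched as follows. This is an illustrative sketch only; the bucket counts and value ranges are assumptions, not parameters given in the embodiment.

```python
import numpy as np

def equidistant_bin(values, low, high, n_bins):
    """Equidistant division: split [low, high] into n_bins equal-width buckets,
    roughly matching the example where levels 1-10 and 11-20 fall into
    adjacent buckets."""
    edges = np.linspace(low, high, n_bins + 1)
    return np.clip(np.digitize(values, edges[1:-1]), 0, n_bins - 1)

def equal_frequency_bin(values, n_bins):
    """Equal-frequency division: choose bucket edges so that each bucket
    holds roughly the same number of target objects."""
    quantiles = np.quantile(values, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(values, quantiles)

levels = np.array([3, 8, 15, 27, 40])
print(equidistant_bin(levels, 1, 50, 5))   # bucket index per level
print(equal_frequency_bin(levels, 2))      # two roughly equal-sized buckets
```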
In some embodiments, the feature extraction of each match in the multiple matches to obtain the target log feature of each historical match proceeds as follows: feature extraction is performed on each piece of match data to obtain a match feature vector, and the match feature vector is then divided to obtain the target log feature of that historical match. The match data belonging to the same historical match can be concatenated to form the data of that match. The feature extraction and feature division of each piece of match data are similar to the feature extraction and feature division of the basic information described above.
In some embodiments, the log filtering of the log data within the preset historical time period to obtain the basic information and the match data proceeds as follows: when the target object is detected to be in a logged-in state or an about-to-match state in the current virtual scene, the basic log data and the historical match log data of the target object in the current virtual scene within the preset historical time period are obtained; the basic information of the target object is extracted from the basic log data; and the multiple pieces of match data of the target object, together with the match time of each piece, are extracted from the historical match log data. That is, the basic log data and the historical match log data related to the target object are normally stored in the log table in list form; when the target object is in a logged-in state or an about-to-match state in the current virtual scene, the server automatically obtains, by table lookup, the basic log data and historical match log data of the target object within the preset historical time period. The basic log data contains the basic information of the target object, and the historical match log data contains the multiple pieces of match data of the target object and the match time of each piece. The basic information can therefore be extracted from the basic log data; since it is accumulated over the target object's multiple sessions in the current virtual scene, the complete basic information can be supplemented from the historical log data. The order of the match times of the multiple historical matches is determined from the match data and the match time of each piece, so that the historical match feature matrix can later be built in that order; the time-sequence modeling of the pre-trained data processing model can then capture the correlation between the features of the multiple historical matches in the matrix, improving the accuracy of the model prediction probability.
In step S205, the server performs basic feature extraction on the basic feature vector through the feature cross module and the feature regression module of the pre-trained data processing model, obtaining the basic cross feature and the basic regression feature respectively.
In this embodiment of the present application, in order to learn a feature representation with good generalization ability from the basic feature vector, the extraction may be performed by a feature cross module and a feature regression module with good feature extraction ability; for example, the feature cross module and the feature regression module may be a factorization machine and a logistic regression layer, respectively.
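A possible sketch of these two modules is shown below, assuming a factorization machine and a logistic regression layer as named above. The embedding size k, the use of PyTorch, and keeping the FM output as a per-dimension cross feature (rather than summing it to a scalar) are assumptions of this sketch.

```python
import torch
import torch.nn as nn

class FMCrossModule(nn.Module):
    """Second-order factorization-machine style crossing of the basic feature vector."""
    def __init__(self, n_features, k=8):
        super().__init__()
        self.v = nn.Parameter(torch.randn(n_features, k) * 0.01)  # latent factors

    def forward(self, x):                      # x: (batch, n_features)
        xv = x @ self.v                        # (batch, k)
        x2v2 = (x ** 2) @ (self.v ** 2)        # (batch, k)
        # Classic FM identity for the sum of pairwise interactions, kept per dimension.
        return 0.5 * (xv ** 2 - x2v2)          # basic cross feature

class LRModule(nn.Module):
    """Logistic-regression style linear extraction of the basic feature vector."""
    def __init__(self, n_features):
        super().__init__()
        self.linear = nn.Linear(n_features, 1)

    def forward(self, x):
        return self.linear(x)                  # basic regression feature
```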
Step S206, the server averages each column of the historical match feature matrix through the feature mapping module of the pre-trained data processing model to obtain the mean value of each column.
In the embodiment of the application, each column of the historical match feature matrix represents the same type of feature across the multiple historical matches; averaging each column yields the mean value of that column, i.e., the average of the same type of feature over the multiple historical matches.
In step S207, the server constructs a mean embedded feature vector based on the mean corresponding to each column.
In the embodiment of the application, the mean values of all columns are assembled into the mean embedded feature vector, i.e., the feature vector formed from the averages of each feature type over the multiple historical matches.
Step S208, the server concatenates the basic feature vector and the mean embedded feature vector through at least one fully connected layer of the feature mapping module to obtain the target mapping feature.
In the embodiment of the application, the feature mapping module includes a plurality of fully connected layers, and the basic feature vector and the mean embedded feature vector are concatenated through these fully connected layers to obtain the target mapping feature.
The feature mapping module is used to model each type of feature in the basic feature vector and the historical match feature matrix jointly, extracting and integrating the useful information in both, which improves the predictive ability of the model.
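A minimal sketch of the feature mapping module described in steps S206 to S208 follows. The hidden and output sizes and the two-layer structure are assumptions; only the column-wise mean and the concatenation through fully connected layers come from the description above.

```python
import torch
import torch.nn as nn

class FeatureMappingModule(nn.Module):
    """Column-wise mean of the historical match feature matrix, concatenated
    with the basic feature vector and passed through fully connected layers."""
    def __init__(self, d_base, d_match, d_hidden=64, d_out=32):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(d_base + d_match, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_out),
        )

    def forward(self, base_vec, match_matrix):
        # match_matrix: (batch, n_matches, d_match); the mean over each column
        # is the average of the same feature type across the historical matches.
        mean_embedding = match_matrix.mean(dim=1)           # (batch, d_match)
        fused = torch.cat([base_vec, mean_embedding], dim=-1)
        return self.fc(fused)                               # target mapping feature
```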
In step S209, the server performs historical feature extraction on the historical match feature matrix through the gated recurrent module of the pre-trained data processing model, obtaining the historical match feature that characterizes the overall match performance of the target object within the preset historical time period.
In the embodiment of the present application, each row of the historical match feature matrix represents the match data of one historical match at its match time.
In some embodiments, referring to fig. 5, step S209, in which the server performs historical feature extraction on the historical match feature matrix through the gated recurrent module of the pre-trained data processing model to obtain the historical match feature characterizing the overall match performance of the target object within the preset historical time period, may be implemented by the following steps S2091 to S2092:
In step S2091, the hidden state obtained after updating each row of the historical match feature matrix is determined.
In the embodiment of the application, for the t-th row feature of the historical match feature matrix, the t-1-th hidden state obtained after the gated recurrent module has updated the t-1-th row feature is obtained; based on the t-th row feature and the t-1-th hidden state, the update gate output of the update gate of the gated recurrent module for the t-th row of the historical match feature matrix is determined; based on the t-th row feature and the t-1-th hidden state, the reset gate output of the reset gate of the gated recurrent module for the t-th row is determined; and the hidden state obtained after updating the t-th row of the historical match feature matrix is determined based on the update gate output of the t-th row, the reset gate output of the t-th row and the t-1-th hidden state, where t is an integer greater than 1 and the maximum value of t equals the total number of rows of the historical match feature matrix.
The historical match feature matrix includes N rows of features, each row representing the target log feature of one of the multiple historical matches, and the match times of the rows follow a definite order. For the t-th row feature of the historical match feature matrix, the t-1-th hidden state corresponding to the t-1-th row feature is first obtained; the t-th row feature and the t-1-th hidden state are each multiplied by the corresponding update gate weight, the two products are added, and the sum is passed through the update gate activation function to obtain the update gate output of the gated recurrent module for the t-th row. Similarly, the t-th row feature and the t-1-th hidden state are each multiplied by the corresponding reset gate weight, the two products are added, and the sum is passed through the reset gate activation function to obtain the reset gate output for the t-th row. A candidate hidden state for the t-th row is computed from the reset gate output of the t-th row and the t-1-th hidden state; the update gate output is subtracted from 1 to obtain an update difference; and the element-wise product of the candidate hidden state and the update difference is added to the element-wise product of the t-1-th hidden state and the update gate output, yielding the hidden state obtained after updating the t-th row of the historical match feature matrix.
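The per-row update just described can be sketched as a single GRU step as follows. The weight tensors are assumed to be pre-initialized with matching shapes; the combination of candidate state and previous state follows the convention stated above, where the update gate output multiplies the previous hidden state.

```python
import torch

def gru_row_update(x_t, h_prev, W_z, U_z, W_r, U_r, W_h, U_h):
    """One update step over row t of the historical match feature matrix."""
    z_t = torch.sigmoid(x_t @ W_z + h_prev @ U_z)           # update gate output
    r_t = torch.sigmoid(x_t @ W_r + h_prev @ U_r)           # reset gate output
    h_cand = torch.tanh(x_t @ W_h + (r_t * h_prev) @ U_h)   # candidate hidden state
    # Blend candidate and previous state: (1 - z) weights the candidate,
    # z weights the previous hidden state, both element-wise.
    h_t = (1.0 - z_t) * h_cand + z_t * h_prev
    return h_t
```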
Step S2092, based on a preset activation function, the hidden state obtained after updating the last row of the historical match feature matrix is activated to obtain the historical match feature.
In the embodiment of the present application, when t reaches N, the hidden state obtained after updating the last row of the historical match feature matrix is determined. The preset activation function is used to activate this hidden state; for example, the preset activation function may be a cross entropy loss function. The hidden state is multiplied by a preset weight, and the product is then activated to obtain the historical match feature.
Here, the flow of feature information is controlled by the update gate and the reset gate in the gated recurrent module, so that the feature information at the current moment in the historical match feature matrix is related to the feature information at the previous moment; dependencies over larger time-step distances in the sequence are captured better, and the overall performance across the multiple historical matches is learned effectively.
In step S210, the server performs feature fusion on the basic cross feature, the basic regression feature, the target mapping feature and the historical match feature through the feature fusion module of the pre-trained data processing model, obtaining the model prediction probability that characterizes how active the target object will be in the current virtual scene at the next moment after the current moment.
In the embodiment of the present application, the feature fusion module combines the basic cross feature, the basic regression feature, the target mapping feature and the historical match feature by concatenation; by fusing the different feature information contained in these four features, it produces the model prediction probability that characterizes how active the target object will be in the current virtual scene at the next moment after the current moment. The feature fusion module may be a Concat layer or another type of neural network layer, which is not limited here. The feature fusion process helps the pre-trained data processing model understand the input data better, thereby improving the prediction accuracy of the model.
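A minimal sketch of such a fusion head follows, assuming concatenation followed by a single linear layer and a sigmoid to map the fused features to a probability; the linear head and its sizes are assumptions beyond the Concat layer named above.

```python
import torch
import torch.nn as nn

class FeatureFusionModule(nn.Module):
    """Concatenate the four upstream features and map them to a probability."""
    def __init__(self, d_cross, d_reg, d_map, d_hist):
        super().__init__()
        self.head = nn.Linear(d_cross + d_reg + d_map + d_hist, 1)

    def forward(self, cross_feat, reg_feat, map_feat, hist_feat):
        fused = torch.cat([cross_feat, reg_feat, map_feat, hist_feat], dim=-1)
        return torch.sigmoid(self.head(fused))   # model prediction probability
```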
Step S211, the server determines the anti-churn processing strategy for the target object in the current virtual scene based on the model prediction probability.
In this embodiment of the present application, the anti-churn processing strategy includes starting an anti-churn mode or maintaining a normal match mode for the target object in the current virtual scene; which one is chosen depends on the magnitude of the model prediction probability.
In some embodiments, referring to fig. 6, step S211, in which the server determines the anti-churn processing strategy for the target object in the current virtual scene based on the model prediction probability, may be implemented by the following steps S2111 to S2113:
Step S2111, a prediction result for the target object in the current virtual scene is determined based on the model prediction probability.
In the embodiment of the application, when the model prediction probability is greater than or equal to a preset churn probability threshold, the prediction result is that the target object will be in a churned state at the next moment; when the model prediction probability is smaller than the preset churn probability threshold, the prediction result is that the target object will be in a non-churned state at the next moment. The preset churn probability threshold is the probability threshold used to judge whether the target object will churn at the next moment in the current virtual scene, and the prediction result is one of these two states. If the model prediction probability is greater than or equal to the threshold, the target object is judged very likely to churn at the next moment, i.e., the prediction result is the churned state; if it is smaller than the threshold, the target object is judged unlikely to churn, i.e., the prediction result is the non-churned state. For example, the preset churn probability threshold may be set to 0.5 or 0.7; with a model prediction probability of 0.3, the prediction result for the target object in the current virtual scene is the non-churned state at the next moment, while with a model prediction probability of 0.8, the prediction result is the churned state at the next moment.
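The threshold decision can be sketched in a few lines; the 0.5 default and the string labels are illustrative choices taken from the example values above, not fixed parameters of the embodiment.

```python
def churn_decision(pred_prob, churn_threshold=0.5):
    """Map the model prediction probability to an anti-churn policy."""
    if pred_prob >= churn_threshold:
        return "start_anti_churn_mode"     # predicted to churn at the next moment
    return "keep_normal_match_mode"        # predicted to stay active

print(churn_decision(0.3))   # keep_normal_match_mode
print(churn_decision(0.8))   # start_anti_churn_mode
```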
In step S2112, when the prediction result is that the target object will be in the churned state at the next moment, the anti-churn mode is started for the target object in the current virtual scene.
In the embodiment of the application, the anti-churn mode applies certain match intervention measures to the target object in the current virtual scene to increase its activity and prevent it from churning at the next moment. A match intervention measure intervenes in the match experience of the target object in the current virtual scene; for example, the target object may be matched with stronger teammates or weaker opponents, or given a certain item compensation, thereby improving its match experience.
In step S2113, when the prediction result is that the target object will be in the non-churned state at the next moment, the normal match mode is maintained for the target object in the current virtual scene.
In the embodiment of the application, the normal match mode applies no match intervention to the target object in the current virtual scene, ensuring a normal match experience at the next moment; for example, the target object is simply matched with evenly matched opponents in the current virtual scene.
Here, by setting the preset churn probability threshold and relying on an accurate model prediction probability, the anti-churn processing strategy for the target object in the current virtual scene can be determined correctly and the match experience of the target object improved, effectively preventing the target object from churning at the next moment and increasing user stickiness for the current virtual scene.
Step S212, the server sends the anti-churn processing strategy for the target object in the current virtual scene to the terminal.
In step S213, the terminal outputs the anti-churn processing strategy for the target object in the current virtual scene.
In the embodiment of the application, when virtual scene processing is performed with the pre-trained data processing model, data preprocessing is first performed on the log data of the target object in the current virtual scene within the preset historical time period to obtain the target feature vector. The target feature vector is then used as the model input of the pre-trained data processing model, and the feature cross module, feature regression module, gated recurrent module and feature fusion module of the model perform basic feature extraction, historical feature extraction and feature fusion on the basic feature vector and the historical match feature matrix in the target feature vector, obtaining the model prediction probability that characterizes how active the target object will be in the current virtual scene at the next moment after the current moment. Finally, the anti-churn processing strategy for the target object in the current virtual scene is determined based on the model prediction probability. In this way, the historical feature extraction performed by the gated recurrent module on the historical match feature matrix, which contains the match information of multiple historical matches, can sense how sensitive the target object is to its historical matches and capture the target object's historical match memory, while the feature fusion performed by the feature fusion module helps the pre-trained data processing model understand the input data better. The prediction accuracy of the model prediction probability is thereby improved, and the churn of the target object in the virtual scene is effectively prevented based on this accurate probability.
In some embodiments, the above virtual scene processing method may be implemented with a pre-trained data processing model. Referring to fig. 7, fig. 7 shows a flow chart of the training method for the pre-trained data processing model provided by an embodiment of the present application. The training method may be implemented by a model training module, which may be a module in the electronic device that implements the virtual scene processing method, i.e., the training may be performed by the terminal or the server; the model training module may also be a module in another electronic device, i.e., the training may be performed by another terminal or another server. As shown in fig. 7, the pre-trained data processing model is trained through the following steps S301 to S307:
in step S301, sample data is acquired.
In this embodiment of the present application, the sample data includes sample log data and a prediction label for the sample log data, where the sample log data is the log data of at least one sample target object. The prediction label indicates whether the sample target object will churn at the next moment, i.e., whether it will be in a churned state at the next moment; its value is 0 or 1, where 0 denotes the non-churned state and 1 denotes the churned state.
Step S302, sample preprocessing is carried out on the sample log data to obtain sample target feature vectors, and the sample target feature vectors are input into a data processing model to be trained.
In the embodiment of the present application, the data processing model to be trained is a network model when the pre-trained data processing model is not subjected to model training, that is, the data processing model to be trained obtains the pre-trained data processing model after being subjected to model training, and the data processing model to be trained and the pre-trained data processing model have the same model structure. The data processing model to be trained comprises a feature crossing module, a feature regression module, a feature mapping module, a gating circulation module and a feature fusion module, and the sample target feature vector after sample pretreatment is the model input of the data processing model to be trained.
In the embodiment of the application, the sample preprocessing of the sample log data proceeds as follows: for each sample target object, the sample log data is filtered to obtain the sample basic information and the sample match data; feature extraction is performed on the sample basic information to obtain the sample basic feature vector; feature extraction is performed on each piece of sample match data to obtain the sample target log feature of each sample historical match; and the sample target log features of the multiple sample historical matches are concatenated in the order of their match times to obtain the sample historical match feature matrix. The sample target feature vector includes the sample basic feature vector and the sample historical match feature matrix; the sample log data includes the sample basic information of the sample target object and the sample match data of multiple sample historical matches within a preset sample historical time period; and the sample historical match feature matrix characterizes the time sequence of the sample log data.
Step S303, basic feature extraction is performed on the sample basic feature vector through the feature cross module and the feature regression module of the data processing model to be trained, obtaining the sample basic cross feature and the sample basic regression feature respectively, and feature mapping is performed on the sample target feature vector through the feature mapping module of the data processing model to be trained to obtain the sample target mapping feature.
Step S304, historical feature extraction is performed on the sample historical match feature matrix through the gated recurrent module of the data processing model to be trained, obtaining the sample historical match feature that characterizes the overall match performance of the sample target object within the preset sample historical time period.
Step S305, feature fusion is performed on the sample basic cross feature, the sample basic regression feature, the sample target mapping feature and the sample historical match feature through the feature fusion module of the data processing model to be trained, obtaining the sample prediction probability.
In the embodiment of the present application, the sample prediction probability is a probability value between 0 and 1. The feature fusion module of the data processing model to be trained includes a feature fusion layer that concatenates and fuses the sample basic cross feature, the sample basic regression feature, the sample target mapping feature and the sample historical match feature; fusing the different feature information in these features produces the sample prediction probability output by the data processing model to be trained. An accurate sample prediction probability, i.e., an accurate predicted churn probability for the sample target object, can thus be obtained through the feature fusion module.
Step S306, determining a loss result based on the sample prediction probability and the prediction label.
In the embodiment of the present application, a loss function is constructed based on the sample prediction probability and the prediction label, and the loss is computed to obtain the loss result; for example, the loss function may be a cross entropy loss function. The loss result measures the inconsistency between the sample prediction probability of the model and the prediction label, i.e., the gap between the forward computation result (the sample prediction probability) and the ground truth (the prediction label) in each iteration, which guides the next round of training in the correct direction.
Step S307, updating model parameters in the data processing model based on the loss result to obtain a pre-trained data processing model.
In the embodiment of the present application, the loss result is back-propagated along the direction of steepest gradient descent according to the derivative of the loss function, and the model parameters in the feature cross module, feature regression module, feature mapping module, gated recurrent module and feature fusion module of the data processing model to be trained, such as their weight values, are updated. A loss threshold may be preset, and the iterative training stops, i.e., the parameter update stops, when the loss result falls below this threshold; a maximum number of iterations may be preset, and the parameter update stops when the number of iterations exceeds it; or a cut-off training time may be preset, and the parameter update stops when that time is reached, yielding the pre-trained data processing model. The training of the pre-trained data processing model is an offline process, and the trained model is stored offline; when the stored model is used for virtual scene processing, it predicts in real time whether the target object will be in a churned state at the next moment.
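A minimal training-loop sketch under these assumptions is shown below: cross entropy between the sample prediction probability and the 0/1 label, gradient-based updates, and early stopping on a loss threshold or a maximum number of epochs. The optimizer choice, learning rate, data loader shape and model signature are assumptions of the sketch.

```python
import torch
import torch.nn as nn

def train_model(model, data_loader, max_epochs=10, loss_threshold=1e-3, lr=1e-3):
    """Train the churn model with binary cross entropy and simple stopping rules."""
    criterion = nn.BCELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(max_epochs):
        for base_vec, match_matrix, label in data_loader:
            prob = model(base_vec, match_matrix)        # sample prediction probability
            loss = criterion(prob.squeeze(-1), label.float())
            optimizer.zero_grad()
            loss.backward()                             # back-propagate the loss result
            optimizer.step()                            # update module weights
            if loss.item() < loss_threshold:            # stop when the loss is small enough
                return model
    return model
```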
In the embodiment of the application, the sample target feature vector obtained after sample preprocessing is fed into the data processing model to be trained; the feature cross module, feature regression module, gated recurrent module and feature fusion module produce the fused sample prediction probability; and the optimal pre-trained data processing model is obtained by training on the loss computed between the sample prediction probability and the prediction label. Using the pre-trained data processing model yields an accurate prediction probability, on the basis of which the sample target object can be effectively prevented from churning and its match experience in the virtual scene improved.
In the following, an exemplary application of the embodiments of the present application in a practical application scenario will be described.
The embodiment of the application provides a virtual scene processing method, which relates to an anti-churn method for active users in a game. The method focuses on modeling the historical match sequence (i.e., the match data) of an active user (i.e., the target object): the sequence features of the user's historical matches (i.e., the historical match feature matrix) are fed into a GRU (i.e., the gated recurrent module) for training, the user's historical match memory is captured, and whether the active user will churn is predicted. The method can sense how sensitive an active user is to their historical matches, and after a run of bad historical matches it issues match rewards to the user in time, avoiding churn caused by a poor match experience. The method improves the game experience, increases user stickiness, and directly prevents user churn.
The application scenario is mainly the single-player or multi-player ranked or casual matchmaking scenario in a game. As shown in fig. 8, fig. 8 shows the match scenario of a multiplayer online battle arena (MOBA) game, covering match types such as casual matchmaking, human-versus-AI and ranked play. After the user clicks to start matching, up to 4 teammates are matched for the user, as shown in fig. 9, and the opposing side also fields 5 players. In this system, the user is matched, according to their churn risk, either with evenly matched opponents as usual, or with opponents against whom they have a high probability of winning as compensation (the compensation can take various forms and is not discussed in detail here).
The overall technical architecture is shown in the flow chart of fig. 10. For a mobile game APP that runs on the client 600, the APP occupies about 1 GB of storage and can run on networks of 2G and above. The matching algorithm is deployed in the cloud; after the user clicks to match in the APP of the client 600, as shown in fig. 11, the client 600 reports a service request to the server, and the server of the cloud 601 returns the prediction result for that request.
On the server side, only the data required for matching is transmitted, and a log of each successful match is reported. The overall process is shown in fig. 12. The server is responsible for protocol forwarding of the data transmitted by the gateway; the lobby server hosts the game landing page, the shop and so on, and if players from different regions (or even different countries) are matched together, the lobby can be shared. The load management server minimizes the load across the different servers and, if there are additional requirements, can isolate a minimal set of servers for sub-regional versions. The battle servers are tightly coupled to the game, with one per play region, and data is finally transferred by protocol.
In the lobby server and the battle servers on the server side, different services are requested through forwarding proxies as needed: for example, the shop recommendation service is requested at the lobby server; friend recommendation and stranger recommendation services are requested by the landing page in the lobby; and the in-match environment services are requested at the battle server. A schematic diagram of the service-request part of the battle server is shown in fig. 13.
A request to the battle server is typically forwarded to a service in the match environment. The match environment comprises three parts: the match-making service, the anti-churn service and the match adjudication service. The match-making service, i.e., the match-making room, divides users into different rooms for 5v5 matches through mature MOBA matching algorithms such as ELO. The anti-churn service, i.e., the human-machine anti-churn system, compensates users who are about to churn and prevents them from encountering extremely bad match experiences. The adjudication service, i.e., the match review system, provides an entry for users to report unsatisfactory matches after a match through questionnaires or a negative-feedback system, and then applies penalties through automatic AI adjudication.
The anti-churn model based on the match-sequence GRU in this scheme is part of the anti-churn service. The prediction result requested by the anti-churn service is judged against a threshold, which decides whether compensation needs to be issued to the user, so that a bad user experience is avoided. The flow of the anti-churn service requesting the anti-churn model is shown in fig. 14.
The anti-churn service's call to the anti-churn model is an online process; the model trained in the offline process is used for model prediction, a prediction result is obtained, and the prediction result is logged. If the result returned is a normal user, no compensation is issued, i.e., the normal user's experience is unchanged; if the result returned is a churning user, compensation is issued to that user. After the prediction result is returned to the anti-churn service, the anti-churn service returns it to the forwarding proxy, the proxy returns it to the server, and finally the server returns it to the client, where the result is shown to the user.
The log of the prediction results in fig. 14 is an offline TLOG table (i.e., the log table above), in which all the relevant information of a match (i.e., the log data) is recorded. The different features (i.e., the basic feature vector and the historical match feature matrix above) are listed here:
a. match activity features: login counts over the past 1/7/14/30 days;
b. basic user attributes;
c. virtual resource features: the level of the virtual resources obtained by the user and the frequency with which they are obtained;
d. match statistics features: the user's knockdowns, assists, full-match best (MVP) awards, negative feedback and other counts in matches over the past 1/7/14/30 days, as well as statistics such as win rate and MVP rate;
e. match social features: the user's numbers of direct friends, indirect friends and stranger friends;
f. historical match features (i.e., the historical match feature matrix above): the user's performance in the most recent 17 matches, including negative feedback, knockdowns, assists, win/loss, MVP performance and match duration.
A table lookup is performed on the log table, the different features are extracted through offline task computation, and the features belonging to the same match are then concatenated together. For the features of different matches of the same user, the label (i.e., the prediction label) is whether the user is active on the following day: if active, the user has not churned; if not active, the user has churned. This yields a large number of supervised labeled samples. Since the same active user plays at least one match per day, the samples are sampled per user to avoid bias toward users with many matches, i.e., one match is randomly selected for each active user; a small sketch of this sampling follows. With the sampled samples, the choice of model strongly affects the prediction quality. The baseline model is a tree model; if all the features are flattened as the tree model's input, features a-f together span tens of thousands of dimensions, and training on tens of millions of samples hits a performance bottleneck. If the features are reduced and the model is trained on hundreds of thousands of samples, the tree model fails to capture the sequential time relationships among the historical match features in f, especially the matches further in the past, so the importance of those features comes out low; the classification quality is then essentially the same as when the past historical match features are removed altogether. In summary, the tree model serves as the online baseline model but lacks a characterization of the match history features.
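The per-user sampling mentioned above can be sketched as follows; the field name user_id and the flat list-of-dicts sample layout are assumptions of the sketch.

```python
import random
from collections import defaultdict

def sample_one_match_per_user(samples):
    """Avoid bias toward heavy players: keep one randomly chosen match
    sample per active user."""
    by_user = defaultdict(list)
    for s in samples:
        by_user[s["user_id"]].append(s)
    return [random.choice(user_samples) for user_samples in by_user.values()]
```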
A deep neural network (DNN) model can effectively remedy the tree model's lack of historical match feature characterization, and the match-sequence GRU model proposed in this scheme is the sequence model with the best prediction quality and performance after comparison with several DNN sequence models. The match-sequence GRU model is described in detail below.
The network structure of the match-sequence GRU model is shown in fig. 15. At the input layer of the model, the inputs are the features a-f, representing the different feature types. At the embedding layer of the model, the different features in a-f are divided with equal frequency or equidistantly into different feature embeddings; the embedding corresponding to f is a list, i.e., with N historical matches the embedding list has length N. At the fully connected layer (i.e., the feature mapping module above), the embeddings corresponding to a-f are concatenated (concat); for the embedding list corresponding to f, the mean of the list is taken first and then concatenated with the other embeddings. The FM (factorization machine) layer (i.e., the feature cross module above) takes only the features a-e as input for feature crossing and combination; user group features such as user rank and user level in the b features can be crossed into features like "high rank, high level". The LR (logistic regression) layer (i.e., the feature regression module above) is a layer similar to the baseline model. The update formulas of the GRU (gated recurrent unit) layer (i.e., the gated recurrent module above) are shown in formula (1):
$$
\begin{aligned}
z_t &= \sigma\!\left(W_z x_t + U_z h_{t-1}\right)\\
r_t &= \sigma\!\left(W_r x_t + U_r h_{t-1}\right)\\
\tilde{h}_t &= \tanh\!\left(W_h x_t + U_h\left(r_t \odot h_{t-1}\right)\right)\\
h_t &= \left(1 - z_t\right) \odot \tilde{h}_t + z_t \odot h_{t-1}
\end{aligned}
\tag{1}
$$

where $x_t$ represents the input of the GRU layer, $h_t$ represents the output, i.e., the hidden state at time t, $z_t$ denotes the update gate, $r_t$ denotes the reset gate, $W_z, U_z, W_r, U_r, W_h, U_h$ are the weights of the GRU layer, $h_{t-1}$ denotes the hidden state at time t-1, $\tilde{h}_t$ is the candidate hidden state, $\sigma$ and $\tanh$ denote the different activation functions in the GRU layer, and $\odot$ denotes element-wise multiplication. In the hidden state at time t, the Hadamard product of the update gate and the hidden state at time t-1 represents the part of the hidden state carried over from the previous state, while the Hadamard product of the reset gate and the previous hidden state inside the candidate term represents the part of the previous hidden state that is reset before being combined with the input.
In the present embodiment, the input to the GRU is the list of match embeddings. Because the multiple matches have a temporal order, they can be regarded as corresponding to the GRU's hidden states at different times t. When the user starts a new match, the hidden state is quickly updated by the features produced by the previous matches (including negative feedback, chat, knockdowns, assists, win/loss and MVP performance). In this way, the model learns the user's overall situation across the match sequence and better reflects the user's in-match experience.
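Running the match embedding list through a standard GRU and taking the final hidden state as the historical match feature can be sketched as below; the hidden size, single layer and use of PyTorch's nn.GRU are assumptions of the sketch rather than details given in the scheme.

```python
import torch
import torch.nn as nn

class MatchSequenceGRU(nn.Module):
    """Encode the historical match feature matrix with a GRU and return
    the final hidden state as the historical match feature."""
    def __init__(self, d_match, d_hidden=32):
        super().__init__()
        self.gru = nn.GRU(d_match, d_hidden, batch_first=True)

    def forward(self, match_matrix):          # (batch, n_matches, d_match)
        _, h_n = self.gru(match_matrix)       # h_n: (1, batch, d_hidden)
        return h_n.squeeze(0)                 # historical match feature
```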
Compared with the other schemes, as shown in table 1, the effect of this scheme is evident in the offline test. The accuracy metric for the binary classification of whether a user churns includes the binary-classification AUC.
Table 1 Offline comparison results of the various schemes
According to the method provided by the embodiment of the application, the historical match sequence of an active user is modeled as the key focus: the sequence features of the user's historical matches are fed into the GRU for training, the user's historical match memory is captured, and whether the active user will churn is predicted. The method can sense how sensitive an active user is to their historical matches, and after a run of bad historical matches it issues match rewards to the user in time, avoiding churn caused by a poor match experience. The method improves the game experience, increases user stickiness, and directly prevents user churn.
It can be understood that, in the embodiments of the present application, where user information such as log data or anti-churn processing strategies, or data related to user or enterprise information, is involved, user permission or consent must be obtained when the embodiments are applied to specific products or technologies, or the information must be anonymized to remove its correspondence with the user; and the collection and processing of the related data must strictly comply with the laws and regulations of the relevant countries, obtain the informed consent or separate consent of the personal information subject, and carry out subsequent data use and processing within the scope authorized by laws, regulations and the personal information subject.
Continuing with the description of an exemplary architecture in which the virtual scene processing device 455 provided in the embodiments of the present application is implemented as software modules, in some embodiments, as shown in fig. 2, the software modules stored in the virtual scene processing device 455 of the memory 450 may include: the data preprocessing module 4551, configured to perform data preprocessing on the log data of a target object in the current virtual scene within a preset historical time period to obtain a target feature vector, the target feature vector including a basic feature vector and a historical match feature matrix; the feature processing module 4552, configured to perform basic feature extraction on the basic feature vector through the feature cross module and the feature regression module of the pre-trained data processing model to obtain a basic cross feature and a basic regression feature respectively, and to perform feature mapping on the target feature vector through the feature mapping module of the pre-trained data processing model to obtain a target mapping feature; the historical feature extraction module 4553, configured to perform historical feature extraction on the historical match feature matrix through the gated recurrent module of the pre-trained data processing model to obtain a historical match feature characterizing the overall match performance of the target object within the preset historical time period; the feature fusion processing module 4554, configured to perform feature fusion on the basic cross feature, the basic regression feature, the target mapping feature and the historical match feature through the feature fusion module of the pre-trained data processing model to obtain a model prediction probability characterizing how active the target object will be in the current virtual scene at the next moment after the current moment; and the determining module 4555, configured to determine an anti-churn processing strategy for the target object in the current virtual scene based on the model prediction probability.
In some embodiments, the log data includes basic information of the target object and match data of multiple historical matches of the target object within the preset historical time period, and the historical match feature matrix characterizes the time sequence of the log data. The data preprocessing module 4551 is further configured to: filter the log data within the preset historical time period to obtain the basic information and the match data; perform feature extraction on the basic information to obtain the basic feature vector; perform feature extraction on each match in the multiple matches to obtain the target log feature of each historical match; and concatenate the target log features of the multiple historical matches in the order of their match times to obtain the historical match feature matrix.
In some embodiments, the data preprocessing module 4551 is further configured to: extracting features of the basic information to obtain an initial feature vector; and carrying out feature division on the initial feature vector to obtain a plurality of basic feature vectors under different feature dimensions.
In some embodiments, the data preprocessing module 4551 is further configured to: obtain, when the target object is detected to be in a logged-in state or an about-to-match state in the current virtual scene, the basic log data and the historical match log data of the target object in the current virtual scene within the preset historical time period; extract the basic information of the target object from the basic log data; and extract the multiple pieces of match data of the target object and the match time of each piece from the historical match log data.
In some embodiments, the feature processing module 4552 is further configured to: average each column of the historical match feature matrix through the feature mapping module of the pre-trained data processing model to obtain the mean value of each column; construct a mean embedded feature vector based on the mean values of the columns; and concatenate the basic feature vector and the mean embedded feature vector through at least one fully connected layer of the feature mapping module to obtain the target mapping feature.
In some embodiments, each row of the historical match feature matrix characterizes the match data of a single historical match at its match time; the historical feature extraction module 4553 is further configured to: determine the hidden state obtained after updating each row of the historical match feature matrix; obtain, for the t-th row feature of the historical match feature matrix, the t-1-th hidden state obtained after the gated recurrent module has updated the t-1-th row feature; determine, based on the t-th row feature and the t-1-th hidden state, the update gate output of the update gate of the gated recurrent module for the t-th row of the historical match feature matrix; determine, based on the t-th row feature and the t-1-th hidden state, the reset gate output of the reset gate of the gated recurrent module for the t-th row; determine, based on the update gate output of the t-th row, the reset gate output of the t-th row and the t-1-th hidden state, the hidden state obtained after updating the t-th row of the historical match feature matrix, where t is an integer greater than 1 and the maximum value of t equals the total number of rows of the historical match feature matrix; and activate, based on a preset activation function, the hidden state obtained after updating the last row of the historical match feature matrix to obtain the historical match feature.
In some embodiments, the determining module 4555 is further configured to: determine a prediction result for the target object in the current virtual scene based on the model prediction probability; start, when the prediction result is that the target object will be in a churned state at the next moment, the anti-churn mode for the target object in the current virtual scene; and maintain, when the prediction result is that the target object will be in a non-churned state at the next moment, the normal match mode for the target object in the current virtual scene.
In some embodiments, the determining module 4555 is further configured to: determine, when the model prediction probability is greater than or equal to a preset churn probability threshold, that the prediction result is that the target object will be in the churned state at the next moment; and determine, when the model prediction probability is smaller than the preset churn probability threshold, that the prediction result is that the target object will be in the non-churned state at the next moment.
In some embodiments, the apparatus 455 further includes a model training module configured to: obtain sample data, the sample data including sample log data and a prediction label for the sample log data, where the sample log data is the log data of at least one sample target object; perform sample preprocessing on the sample log data to obtain a sample target feature vector, and input the sample target feature vector into the data processing model to be trained, the sample target feature vector including a sample basic feature vector and a sample historical match feature matrix; perform basic feature extraction on the sample basic feature vector through the feature cross module and the feature regression module of the data processing model to obtain a sample basic cross feature and a sample basic regression feature respectively, and perform feature mapping on the sample target feature vector through the feature mapping module of the data processing model to obtain a sample target mapping feature; perform historical feature extraction on the sample historical match feature matrix through the gated recurrent module of the data processing model to obtain a sample historical match feature characterizing the overall match performance of the sample target object within a preset sample historical time period; perform feature fusion on the sample basic cross feature, the sample basic regression feature, the sample target mapping feature and the sample historical match feature through the feature fusion module of the data processing model to obtain a sample prediction probability; determine a loss result based on the sample prediction probability and the prediction label; and update the model parameters in the data processing model based on the loss result to obtain the pre-trained data processing model.
In some embodiments, the sample log data includes sample basic information of the sample target object and sample match data of a plurality of sample historical matches of the sample target object within the preset sample historical time period, and the sample historical match feature matrix is used for characterizing the time-sequence relationship of the sample match data; the model training module is further configured to: carry out log screening on the sample log data for each sample target object to obtain the sample basic information and the sample match data; extract features of the sample basic information to obtain the sample basic feature vector; extract features from each piece of sample match data among the plurality of pieces of sample match data to correspondingly obtain sample target log features of each sample historical match; and splice the sample target log features corresponding to the plurality of sample historical matches in order of the match times of the plurality of sample historical matches to obtain the sample historical match feature matrix.
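A sketch of the preprocessing step that builds the historical match feature matrix; the record layout, the `match_time` key, and the per-match features chosen in `extract_match_features` are illustrative assumptions, not details from the embodiment:

```python
import numpy as np

def extract_match_features(match_record: dict) -> np.ndarray:
    """Illustrative per-match features; real features would come from the match logs."""
    return np.array([
        match_record.get("duration_s", 0.0),
        match_record.get("score", 0.0),
        1.0 if match_record.get("won") else 0.0,
    ])

def build_sample_history_match_matrix(match_records: list) -> np.ndarray:
    """Splice per-match feature vectors row by row, ordered by match time."""
    ordered = sorted(match_records, key=lambda r: r["match_time"])
    return np.stack([extract_match_features(r) for r in ordered], axis=0)
```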
It should be noted that the description of the apparatus in the embodiments of the present application is similar to the description of the method embodiments above and has similar beneficial effects, so details are not repeated here. For technical details not disclosed in the apparatus embodiments, refer to the description of the method embodiments of the present application.
The embodiments of the present application provide a computer-readable storage medium having stored therein computer-executable instructions that, when executed by a processor, will cause the processor to perform the virtual scene processing method provided by the embodiments of the present application, for example, the virtual scene processing method as shown in fig. 3.
Embodiments of the present application provide a computer program product comprising computer-executable instructions stored in a computer-readable storage medium. The processor of the electronic device reads the computer executable instructions from the computer readable storage medium, and the processor executes the computer executable instructions, so that the electronic device executes the virtual scene processing method according to the embodiment of the application.
In some embodiments, the computer-readable storage medium may be a RAM, a ROM, a flash memory, a magnetic surface memory, an optical disc, or a CD-ROM, or may be any device that includes one of or any combination of the above memories.
In some embodiments, computer-executable instructions may be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, in the form of programs, software modules, scripts, or code, and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, computer-executable instructions may, but need not, correspond to files in a file system, may be stored as part of a file that holds other programs or data, such as in one or more scripts in a hypertext markup language (Hyper Text Markup Language, HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
As an example, computer-executable instructions may be deployed to be executed on one electronic device or on multiple electronic devices located at one site or, alternatively, on multiple electronic devices distributed across multiple sites and interconnected by a communication network.
The foregoing descriptions are merely exemplary embodiments of the present application and are not intended to limit the protection scope of the present application. Any modification, equivalent replacement, or improvement made within the spirit and scope of the present application shall fall within the protection scope of the present application.

Claims (14)

1. A virtual scene processing method, the method comprising:
carrying out data preprocessing on log data of a target object in a current virtual scene within a preset historical time period to obtain a target feature vector; the target feature vector comprises a basic feature vector and a historical match feature matrix;
extracting basic features from the basic feature vector through a feature cross module and a feature regression module of a pre-trained data processing model to correspondingly obtain basic cross features and basic regression features, and carrying out feature mapping on the target feature vector through a feature mapping module of the pre-trained data processing model to obtain target mapping features;
carrying out historical feature extraction on the historical match feature matrix through a gated recurrent module of the pre-trained data processing model to obtain historical match features characterizing the comprehensive match performance of the target object within the preset historical time period;
performing feature fusion processing on the basic cross features, the basic regression features, the target mapping features and the historical match features through a feature fusion module of the pre-trained data processing model to obtain a model prediction probability characterizing the activity degree of the target object in the current virtual scene at the next moment after the current moment;
and determining an anti-churn processing strategy for the target object in the current virtual scene based on the model prediction probability.
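To show how the claimed modules could fit together, here is a hedged sketch of a forward pass; every module internal (bilinear cross, linear regression head, mean-plus-concat mapping, single-layer GRU, sigmoid fusion head) is an assumption standing in for structure the claim leaves unspecified, and only the module roles and the four-way fusion come from claim 1:

```python
import torch
import torch.nn as nn

class DataProcessingModel(nn.Module):
    """Illustrative wiring of the claimed modules; all internals are assumptions."""
    def __init__(self, d_basic: int, d_match: int, d_hidden: int = 64):
        super().__init__()
        self.feature_cross = nn.Bilinear(d_basic, d_basic, d_hidden)    # stand-in feature cross module
        self.feature_regression = nn.Linear(d_basic, d_hidden)          # stand-in feature regression module
        self.feature_mapping = nn.Linear(d_basic + d_match, d_hidden)   # stand-in feature mapping module
        self.gated_recurrent = nn.GRU(d_match, d_hidden, batch_first=True)
        self.feature_fusion = nn.Sequential(                            # stand-in feature fusion module
            nn.Linear(4 * d_hidden, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, 1), nn.Sigmoid())

    def forward(self, basic_vec, match_matrix):
        cross = self.feature_cross(basic_vec, basic_vec)                # basic cross features
        regression = self.feature_regression(basic_vec)                 # basic regression features
        mean_embed = match_matrix.mean(dim=1)                           # column-wise mean of the match matrix
        mapping = self.feature_mapping(torch.cat([basic_vec, mean_embed], dim=-1))  # target mapping features
        _, h_last = self.gated_recurrent(match_matrix)
        history = torch.tanh(h_last.squeeze(0))                         # historical match features
        fused = torch.cat([cross, regression, mapping, history], dim=-1)
        return self.feature_fusion(fused).squeeze(-1)                   # model prediction probability
```

With this wiring, the training sketch given earlier could be invoked as `train_data_processing_model(DataProcessingModel(d_basic, d_match), loader)`.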
2. The method according to claim 1, wherein the log data includes basic information of the target object and match data of a plurality of historical matches of the target object within the preset historical time period; the historical match feature matrix is used for characterizing the time-sequence relationship of the match data;
the carrying out data preprocessing on the log data of the target object in the current virtual scene within the preset historical time period to obtain the target feature vector comprises:
carrying out log screening on the log data within the preset historical time period to obtain the basic information and the match data;
extracting features of the basic information to obtain the basic feature vector;
extracting features from each piece of match data among the plurality of pieces of match data to correspondingly obtain target log features of each historical match;
and splicing the target log features corresponding to the plurality of historical matches in order of the match times of the plurality of historical matches to obtain the historical match feature matrix.
3. The method according to claim 2, wherein the extracting features of the basic information to obtain the basic feature vector comprises:
extracting features of the basic information to obtain an initial feature vector;
and carrying out feature division on the initial feature vector to obtain a plurality of basic feature vectors under different feature dimensions.
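A minimal illustration of the feature division in claim 3, with hypothetical split points since the claim does not fix how the feature dimensions are delimited:

```python
import numpy as np

def split_initial_vector(initial_vec: np.ndarray, split_points=(4, 8)):
    """Divide the initial feature vector into basic feature vectors under different
    feature dimensions; the split points are illustrative, not from the claim."""
    return np.split(initial_vec, list(split_points))
```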
4. The method according to claim 2, wherein the carrying out log screening on the log data within the preset historical time period to obtain the basic information and the match data comprises:
acquiring, in a case that the target object is detected to be in a logged-in state or a pre-login state in the current virtual scene, basic log data and historical match log data of the target object in the current virtual scene within the preset historical time period;
extracting basic information of the target object from the basic log data;
and extracting, from the historical match log data, a plurality of pieces of match data of the target object and the match time of each piece of match data.
5. The method according to claim 1, wherein the carrying out feature mapping on the target feature vector through the feature mapping module of the pre-trained data processing model to obtain the target mapping features comprises:
averaging each column of the historical match feature matrix through the feature mapping module of the pre-trained data processing model to obtain a mean value corresponding to each column;
constructing a mean embedded feature vector based on the mean value corresponding to each column;
and performing splicing processing on the basic feature vector and the mean embedded feature vector through at least one fully connected layer of the feature mapping module to obtain the target mapping features.
6. The method according to claim 1, wherein each row of the historical match feature matrix is used for characterizing the match data of a single historical match at the same match time;
the carrying out historical feature extraction on the historical match feature matrix through the gated recurrent module of the pre-trained data processing model to obtain the historical match features characterizing the comprehensive match performance of the target object within the preset historical time period comprises:
determining a hidden state obtained after each row of the historical match feature matrix is updated; acquiring, for a t-th row feature of the historical match feature matrix, a (t-1)-th hidden state obtained after a (t-1)-th row feature of the historical match feature matrix is updated through the gated recurrent module; determining an update gating output of an update gate of the gated recurrent module for the t-th row of the historical match feature matrix based on the t-th row feature and the (t-1)-th hidden state; determining a reset gating output of a reset gate of the gated recurrent module for the t-th row of the historical match feature matrix based on the t-th row feature and the (t-1)-th hidden state; determining a hidden state obtained after the t-th row of the historical match feature matrix is updated based on the update gating output of the t-th row, the reset gating output of the t-th row and the (t-1)-th hidden state; wherein t is an integer greater than 1, and the maximum value of t is equal to the total number of rows of the historical match feature matrix;
and activating, based on a preset activation function, the hidden state obtained after the last row of the historical match feature matrix is updated to obtain the historical match features.
7. The method according to claim 1, wherein the determining an anti-churn processing strategy for the target object in the current virtual scene based on the model prediction probability comprises:
determining a prediction result for the target object in the current virtual scene based on the model prediction probability;
when the prediction result is that the target object will be in a churned state at the next moment, enabling an anti-churn mode for the target object in the current virtual scene;
and when the prediction result is that the target object will be in a non-churned state at the next moment, maintaining a normal game mode for the target object in the current virtual scene.
8. The method according to claim 7, wherein the determining a prediction result for the target object in the current virtual scene based on the model prediction probability comprises:
determining that the prediction result is that the target object will be in the churned state at the next moment in a case that the model prediction probability is greater than or equal to a preset churn probability threshold;
and determining that the prediction result is that the target object will be in the non-churned state at the next moment in a case that the model prediction probability is smaller than the preset churn probability threshold.
9. The method according to any one of claims 1 to 8, wherein the pre-trained data processing model is trained by:
acquiring sample data; the sample data comprises sample log data and a prediction label of the sample log data, wherein the sample log data is log data relating to at least one sample target object;
carrying out sample preprocessing on the sample log data to obtain a sample target feature vector, and inputting the sample target feature vector into a data processing model to be trained; the sample target feature vector comprises a sample basic feature vector and a sample historical match feature matrix;
extracting basic features from the sample basic feature vector through a feature cross module and a feature regression module of the data processing model to correspondingly obtain sample basic cross features and sample basic regression features, and carrying out feature mapping on the sample target feature vector through a feature mapping module of the data processing model to obtain sample target mapping features;
carrying out historical feature extraction on the sample historical match feature matrix through a gated recurrent module of the data processing model to obtain sample historical match features characterizing the comprehensive sample match performance of the sample target object within a preset sample historical time period;
performing feature fusion processing on the sample basic cross features, the sample basic regression features, the sample target mapping features and the sample historical match features through a feature fusion module of the data processing model to obtain a sample prediction probability;
carrying out loss calculation based on the sample prediction probability and the prediction label to obtain a loss result;
and updating model parameters in the data processing model based on the loss result to obtain the pre-trained data processing model.
10. The method according to claim 9, wherein the sample log data includes sample basic information of the sample target object and sample match data of a plurality of sample historical matches of the sample target object within the preset sample historical time period; the sample historical match feature matrix is used for characterizing the time-sequence relationship of the sample match data; and the carrying out sample preprocessing on the sample log data to obtain the sample target feature vector comprises:
carrying out log screening on the sample log data for each sample target object to obtain the sample basic information and the sample match data;
extracting features of the sample basic information to obtain the sample basic feature vector;
extracting features from each piece of sample match data among the plurality of pieces of sample match data to correspondingly obtain sample target log features of each sample historical match;
and splicing the sample target log features corresponding to the plurality of sample historical matches in order of the match times of the plurality of sample historical matches to obtain the sample historical match feature matrix.
11. A virtual scene processing apparatus, the apparatus comprising:
a data preprocessing module, configured to carry out data preprocessing on log data of a target object in a current virtual scene within a preset historical time period to obtain a target feature vector; the target feature vector comprises a basic feature vector and a historical match feature matrix;
a feature processing module, configured to extract basic features from the basic feature vector through a feature cross module and a feature regression module of a pre-trained data processing model to correspondingly obtain basic cross features and basic regression features, and to carry out feature mapping on the target feature vector through a feature mapping module of the pre-trained data processing model to obtain target mapping features;
a historical feature extraction module, configured to carry out historical feature extraction on the historical match feature matrix through a gated recurrent module of the pre-trained data processing model to obtain historical match features characterizing the comprehensive match performance of the target object within the preset historical time period;
a feature fusion module, configured to carry out feature fusion processing on the basic cross features, the basic regression features, the target mapping features and the historical match features through the feature fusion module of the pre-trained data processing model to obtain a model prediction probability characterizing the activity degree of the target object in the current virtual scene at the next moment after the current moment;
and a determining module, configured to determine an anti-churn processing strategy for the target object in the current virtual scene based on the model prediction probability.
12. An electronic device, comprising:
a memory for storing computer executable instructions;
a processor for implementing the virtual scene processing method of any of claims 1 to 10 when executing computer executable instructions stored in the memory.
13. A computer readable storage medium storing computer executable instructions which, when executed by a processor, implement the virtual scene processing method of any of claims 1 to 10.
14. A computer program product comprising computer executable instructions stored on a computer readable storage medium;
wherein the processor of the electronic device reads the computer executable instructions from the computer readable storage medium and executes the computer executable instructions to implement the virtual scene processing method of any of claims 1 to 10.
CN202410102302.3A 2024-01-25 2024-01-25 Virtual scene processing method and device, electronic equipment and storage medium Active CN117618918B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410102302.3A CN117618918B (en) 2024-01-25 2024-01-25 Virtual scene processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410102302.3A CN117618918B (en) 2024-01-25 2024-01-25 Virtual scene processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN117618918A true CN117618918A (en) 2024-03-01
CN117618918B CN117618918B (en) 2024-04-19

Family

ID=90023790

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410102302.3A Active CN117618918B (en) 2024-01-25 2024-01-25 Virtual scene processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117618918B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180369696A1 (en) * 2016-03-08 2018-12-27 Electronic Arts Inc. Multiplayer video game matchmaking optimization
CN108537587A (en) * 2018-04-03 2018-09-14 广州优视网络科技有限公司 It is lost in user's method for early warning, device, computer readable storage medium and server
CN111821694A (en) * 2020-07-24 2020-10-27 北京达佳互联信息技术有限公司 Loss prevention method and device for new game user, electronic equipment and storage medium
CN113947246A (en) * 2021-10-21 2022-01-18 腾讯科技(深圳)有限公司 Artificial intelligence-based loss processing method and device and electronic equipment
CN114463036A (en) * 2021-12-24 2022-05-10 深圳前海微众银行股份有限公司 Information processing method and device and storage medium
CN114870403A (en) * 2022-05-09 2022-08-09 网易(杭州)网络有限公司 Battle matching method, device, equipment and storage medium in game
CN115845394A (en) * 2022-12-15 2023-03-28 北京字跳网络技术有限公司 Model training method, game-to-game player matching method, medium, and device

Also Published As

Publication number Publication date
CN117618918B (en) 2024-04-19

Similar Documents

Publication Publication Date Title
US11250322B2 (en) Self-healing machine learning system for transformed data
US20220176248A1 (en) Information processing method and apparatus, computer readable storage medium, and electronic device
US20150328550A1 (en) Context-Aware Gamification Platform
CN112256537B (en) Model running state display method and device, computer equipment and storage medium
CN111738294B (en) AI model training method, AI model using method, computer device, and storage medium
CN112138394B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN114392560B (en) Method, device, equipment and storage medium for processing running data of virtual scene
CA3187040A1 (en) Techniques for identity data characterization for data protection
CN112418259A (en) Method for configuring real-time rules based on user behaviors in live broadcast process, computer equipment and readable storage medium
CN112402982B (en) User cheating behavior detection method and system based on machine learning
CN115221396A (en) Information recommendation method and device based on artificial intelligence and electronic equipment
CN113947246B (en) Loss processing method and device based on artificial intelligence and electronic equipment
Ranathunga et al. Interfacing a cognitive agent platform with second life
CN112860579B (en) Service testing method, device, storage medium and equipment
JP2024507338A (en) Automatic detection of prohibited gaming content
CN117618918B (en) Virtual scene processing method and device, electronic equipment and storage medium
CN116943220A (en) Game artificial intelligence control method, device, equipment and storage medium
CN116956005A (en) Training method, device, equipment, storage medium and product of data analysis model
CN112245934A (en) Data analysis method, device and equipment for virtual resources in virtual scene application
CN117312979A (en) Object classification method, classification model training method and electronic equipment
CN117009626A (en) Service processing method, device, equipment and storage medium of game scene
CN116050520A (en) Risk processing model training method, risk object processing method and related devices
CN115191002A (en) Matching system, matching method, and matching program
CN114945036B (en) Processing method of shared data, related equipment and medium
JP7352009B2 (en) Malicious game detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant