CN116983620A - Data processing method, device, equipment and storage medium in virtual scene

Info

Publication number
CN116983620A
Authority
CN
China
Prior art keywords: interaction, virtual, target, behavior
Prior art date
Legal status: Pending
Application number
CN202210447672.1A
Other languages
Chinese (zh)
Inventor
王达
Current Assignee
Shenzhen Tencent Network Information Technology Co Ltd
Original Assignee
Shenzhen Tencent Network Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Tencent Network Information Technology Co Ltd
Priority to CN202210447672.1A
Publication of CN116983620A

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/45 Controlling the progress of the video game
    • A63F 13/49 Saving the game status; Pausing or ending the game
    • A63F 13/497 Partially or entirely replaying previous game actions
    • A63F 13/50 Controlling the output signals based on the game progress
    • A63F 13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F 13/537 Controlling the output signals based on the game progress involving additional visual information provided to the game scene using indicators, e.g. showing the condition of a game character on screen
    • A63F 13/70 Game security or game management aspects
    • A63F 13/79 Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories

Abstract

The application provides a data processing method, apparatus, device, and computer-readable storage medium for a virtual scene. The method includes: acquiring log data of a target virtual object in a virtual scene, where the log data records interaction data of the target virtual object interacting with other virtual objects in the virtual scene; parsing the interaction data recorded in the log data to obtain at least one interaction behavior performed by the target virtual object in the virtual scene and the object states of the other virtual objects after each interaction behavior is performed; and establishing association relations between each interaction behavior and the object states of the other virtual objects, and outputting the association relations. With the method and apparatus, the interaction process among virtual objects in a virtual scene can be traced back quickly, improving the human-computer interaction experience.

Description

Data processing method, device, equipment and storage medium in virtual scene
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, a computer readable storage medium, and a computer program product for processing data in a virtual scene.
Background
With the rapid development of internet technology, expectations for entertainment forms keep rising, and games are a common means of interaction. In games, it is often necessary to play back the interaction process of player characters controlled by players.
In the related art, playback and review of the interaction process are carried out by watching videos, which is time-consuming, inefficient, and insufficiently accurate.
Disclosure of Invention
The embodiment of the application provides a data processing method, device, equipment, a computer readable storage medium and a computer program product in a virtual scene, which can quickly trace back and display the interaction process among virtual objects in the virtual scene and improve the man-machine interaction experience.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a data processing method in a virtual scene, which comprises the following steps:
acquiring log data of a target virtual object in a virtual scene, wherein the log data is used for recording interaction data of the target virtual object in the virtual scene for interacting with other virtual objects;
analyzing the interaction data recorded in the log data to obtain at least one interaction behavior of the target virtual object executed in the virtual scene and object states of the other virtual objects after the interaction behaviors are executed;
And establishing association relations between the interaction behaviors and the object states of the other virtual objects, and outputting the association relations.
The embodiment of the application provides a data processing device in a virtual scene, which comprises:
an acquisition module, configured to acquire log data of a target virtual object in a virtual scene, where the log data is used to record interaction data of the target virtual object interacting with other virtual objects in the virtual scene;
the analysis module is used for analyzing the interaction data recorded in the log data to obtain at least one interaction behavior of the target virtual object executed in the virtual scene and object states of the other virtual objects after the interaction behaviors are executed;
and the output module is used for establishing the association relation between each interaction behavior and the object states of the other virtual objects and outputting the association relation.
In the above scheme, the output module is further configured to determine a first display graph for identifying each of the interactions and a second display graph for identifying a state of each of the objects;
and drawing the graph based on the first display graph and the second display graph to obtain a target display graph for indicating the association relationship.
In the above scheme, the output module is further configured to determine an execution sequence of each of the interactive behaviors when the number of the interactive behaviors is at least two;
drawing a first time axis, and drawing first display graphics corresponding to each interaction behavior on the first time axis, wherein the positions of the first display graphics on the first time axis correspond to the execution sequence of the corresponding interaction behaviors;
drawing a second time axis, and drawing a second display graph corresponding to each object state on the second time axis;
the position of the second display graph on the second time axis corresponds to the position of the corresponding first display graph on the first time axis.
In the above scheme, the first display graph is a columnar graph, and the output module is further configured to obtain an injury attribute value of each interaction behavior when the interaction behaviors have an injury attribute; and
draw columnar graphs corresponding to the interaction behaviors on the first time axis based on the injury attribute values of the interaction behaviors, where the heights of the columnar graphs are used to indicate the magnitude of the injury attribute values.
In the above scheme, the output module is further configured to obtain execution time points of each of the interaction behaviors;
marking execution time points of the interaction behaviors on the first time axis, and drawing a first display graph of the corresponding interaction behaviors at the execution time points.
In the above scheme, the output module is further configured to obtain life values of the other virtual objects at different time points within a target time period;
the target time period takes the execution time point of the first interaction action in the at least two interaction actions as a starting time and the execution time point of the last interaction action as an ending time;
and drawing a line graph corresponding to the life values of the other virtual objects in the target time period on the first time axis.
In the above scheme, the output module is further configured to determine a type of each of the interactive behaviors when the number of the interactive behaviors is at least two;
and counting the number of the interaction behaviors of each type, and outputting the number of the interaction behaviors of each type.
In the above scheme, the output module is further configured to obtain an injury attribute value of each interaction behavior when the interaction behaviors have injury attributes, count the sum of the injury attribute values of each type of interaction behavior,
and output the sum of the injury attribute values of each type of interaction behavior.
In the above scheme, the output module is further configured to draw a horizontal axis for indicating the injury attribute value, and draw a vertical axis marked with the identifier of each interaction operation;
drawing a target columnar graph corresponding to each type of interaction behavior based on the sum of injury attribute values of each type of interaction behavior in a coordinate system formed by the horizontal axis and the vertical axis;
the height of the target columnar graph corresponds to the sum of injury attribute values of the interaction behaviors of the types.
In the above scheme, the output module is further configured to obtain an injury attribute value of each interaction behavior when the interaction behavior has an injury attribute;
screening out, from the at least two interaction behaviors, the interaction behaviors whose injury attribute values reach an injury attribute value threshold as target interaction behaviors; correspondingly,
in the above scheme, the output module is further configured to determine a type of each target interaction behavior;
outputting the number of the target interaction behaviors of each type.
In the above scheme, the output module is further configured to analyze the interaction data recorded in the log data to obtain the interaction behavior executed by the other virtual objects for the target virtual object, and the object state of the target virtual object after the interaction behavior of each other virtual object is executed;
and establishing a target association relation between the interaction behaviors executed by the other virtual objects and the object state of the target virtual object, and outputting the target association relation.
In the above scheme, when the log data records interaction data of the target virtual object interacting with the other virtual objects in at least two interaction pairs, the data processing device in the virtual scene further includes a data segmentation module. After the log data of the target virtual object in the virtual scene is obtained, the data segmentation module is configured to segment the log data to obtain segmented log data corresponding to each interaction pair, where the segmented log data records interaction data of the target virtual object interacting with the other virtual objects in the corresponding interaction pair;
in the above scheme, the analysis module is further configured to analyze, for each piece of segmented log data corresponding to the interaction pair, the interaction data recorded in the segmented log data, to obtain at least one interaction behavior of the target virtual object executed in the corresponding interaction pair, and an object state of the other virtual object after each interaction behavior is executed.
In the above scheme, the output module is further configured to obtain at least one statistical parameter of the target virtual object in the virtual scene;
wherein the statistical parameter comprises at least one of: the utilization rate of the interaction behavior, the interaction success rate of the interaction behavior and the interaction attribute value accumulation amount of the interaction behavior;
outputting the value of the at least one statistical parameter.
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the data processing method in the virtual scene when executing the executable instructions stored in the memory.
The embodiment of the application provides a computer readable storage medium which stores executable instructions for causing a processor to execute, thereby realizing the data processing method in the virtual scene provided by the embodiment of the application.
The embodiment of the application provides a computer program product, which comprises a computer program or instructions for causing a processor to execute the computer program or instructions to realize the data processing method in the virtual scene.
The embodiment of the application has the following beneficial effects:
By applying the embodiments of the application, the log data of the target virtual object in the virtual scene is parsed, so that at least one interaction behavior executed by the target virtual object and the object states of the other virtual objects after each interaction behavior is executed can be obtained quickly; the association relation between each interaction behavior and the object states of the other virtual objects is established and then output. The analysis of the association relation is therefore efficient, the interaction process between the virtual objects in the virtual scene can be traced back quickly, and the human-computer interaction experience is improved.
Drawings
FIG. 1 is a schematic diagram of a data processing system in a virtual scenario provided by an embodiment of the present application;
fig. 2 is a schematic structural diagram of an electronic device for implementing a data processing method in a virtual scene according to an embodiment of the present application;
fig. 3 is a flow chart of a data processing method in a virtual scene according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a segment log data acquisition mode for an interaction pair according to an embodiment of the present application;
FIG. 5 is a schematic diagram of event data in a game scenario provided by an embodiment of the present application;
fig. 6 is a schematic diagram of a visual output method of association relationship provided by an embodiment of the present application;
FIG. 7 is a schematic diagram showing interaction behavior and object states according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a method for outputting an association relationship according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a bar chart illustration provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of a statistical result visualization output of interaction behavior provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of determining a target association relationship according to an embodiment of the present application;
FIG. 12 is a schematic diagram of visualization of log data provided by an embodiment of the present application;
FIG. 13 is a flowchart of a method for processing data in a virtual scene according to an embodiment of the present application;
FIG. 14 is a schematic diagram of the log data of a single interaction pair provided by an embodiment of the present application;
FIG. 15 is a schematic diagram of a visual analysis of the continuous implementation of interactive behavior provided by an embodiment of the present application;
FIG. 16 is a diagram illustrating a graphical rendering of the log data of a single interaction pair provided by an embodiment of the present application;
fig. 17 is a schematic diagram of notification information provided in an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without inventive effort fall within the scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
Where descriptions such as "first/second" appear in this document, the terms "first/second/third" merely distinguish similar objects and do not denote a particular ordering of the objects; it should be understood that, where permitted, "first/second/third" may be interchanged in a particular order or sequence, so that the embodiments of the application described herein can be implemented in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
Before describing embodiments of the present application in further detail, the terms and terminology involved in the embodiments of the present application will be described, and the terms and terminology involved in the embodiments of the present application will be used in the following explanation.
1) Virtual scene: a scene output by a device that differs from the real world. Visual perception of the virtual scene can be formed with the naked eye or with the assistance of a device, for example, a two-dimensional image output by a display screen, or a three-dimensional image output by three-dimensional display technologies such as three-dimensional projection, virtual reality, and augmented reality; in addition, various real-world-like perceptions such as hearing, touch, smell, and motion can be formed through various possible hardware.
2) Client: an application program running in the terminal that provides various services, such as a game client.
3) Virtual object: an object that interacts in the virtual scene, controlled by a user or by a robot program (e.g., an artificial-intelligence-based robot program), and able to stay still, move, and perform various actions in the virtual scene, such as the various characters in a game.
4) Skill: in an action game, a player's command and interaction instruction for a character; a skill consists of one or more action instructions.
5) Action: a specific animation instruction and behavior instruction for a character's performance in an action game, such as an animation sequence.
6) Attack box: for attack-type actions, one or more attack boxes are contained during action playback; a hit event is triggered when an attack box collides with a target.
7) State: a flag describing a character's behavior and judgment in the game; states can be used to distinguish which action a character is performing and whether input and other information can be accepted.
Based on the above explanation of the terms involved in the embodiments of the present application, the following describes the data processing system in a virtual scene provided by the embodiments of the present application. Referring to fig. 1, fig. 1 is a schematic architecture diagram of a data processing system in a virtual scene provided by an embodiment of the present application. To support a data processing application in a virtual scene, in the data processing system 100, terminals (terminal 400-1 and terminal 400-2 are shown as examples) are connected to the server 200 through a network 300, where the network 300 may be a wide area network, a local area network, or a combination of the two. The server 200 may belong to a target server cluster, which includes at least one of a server, a plurality of servers, a cloud computing platform, and a virtualization center. The server cluster may be used to provide background services for applications that support a three-dimensional virtual environment.
A terminal (e.g., terminal 400-1 and terminal 400-2) installed and running with a client 410 supporting a virtual scene, for receiving a trigger operation for entering the virtual scene based on a view interface, and transmitting an acquisition request of scene data of the virtual scene to the server 200;
the server 200 is configured to receive an acquisition request of scene data, and return the scene data of the virtual scene to the terminal in response to the acquisition request;
the terminal 400 is further configured to obtain log data of the target virtual object in the virtual scene, where the log data is used to record interaction data of the target virtual object interacting with other virtual objects in the virtual scene; analyzing the interaction data recorded in the log data to obtain at least one interaction behavior of the target virtual object executed in the virtual scene and object states of other virtual objects after the execution of each interaction behavior; and establishing association relations between each interaction behavior and object states of other virtual objects, and outputting the association relations.
In practical applications, the server 200 may be an independent physical server, or may be a server cluster or a distributed system formed by a plurality of physical servers, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (CDNs, content Delivery Network), and basic cloud computing services such as big data and artificial intelligence platforms. Terminals (e.g., terminal 400-1 and terminal 400-2) may be, but are not limited to, smart phones, tablet computers, notebook computers, desktop computers, smart speakers, smart televisions, smart watches, etc. Terminals, such as terminal 400-1 and terminal 400-2, and server 200 may be directly or indirectly connected through wired or wireless communication, and the present application is not limited thereto.
In practical applications, terminals (including the terminal 400-1 and the terminal 400-2) have installed and run application programs supporting virtual scenes. The application program may be any one of a first-person shooter game (FPS), a third-person shooter game, a driving game in which steering is the dominant action, a multiplayer online battle arena game (MOBA), a two-dimensional (2D) game application, a three-dimensional (3D) game application, a virtual reality application, a three-dimensional map program, or a multiplayer survival game. The application may also be a stand-alone application, such as a stand-alone 3D game program.
Taking an electronic game scene as an exemplary scene, a user can operate on the terminal in advance, after the terminal detects the operation of the user, a game configuration file of the electronic game can be downloaded, and the game configuration file can comprise an application program, interface display data or virtual scene data of the electronic game, and the like, so that the user can call the game configuration file when logging in the electronic game on the terminal, and render and display an electronic game interface. After the terminal detects the touch operation, game data corresponding to the touch operation can be determined, and rendered and displayed, wherein the game data can comprise virtual scene data, behavior data of virtual objects in the virtual scene and the like.
In practical application, a terminal (including a terminal 400-1 and a terminal 400-2) receives a trigger operation for entering a virtual scene based on a view interface, and sends a request for acquiring scene data of the virtual scene to a server 200; the server 200 receives an acquisition request of scene data, and returns the scene data of the virtual scene to the terminal in response to the acquisition request; the method comprises the steps that a terminal receives scene data of a virtual scene, and obtains log data of a target virtual object in the virtual scene, wherein the log data are used for recording interaction data of the target virtual object in the virtual scene, wherein the interaction data are used for interacting with other virtual objects; analyzing the interaction data recorded in the log data to obtain at least one interaction behavior of the target virtual object executed in the virtual scene and object states of other virtual objects after the execution of each interaction behavior; and establishing association relations between each interaction behavior and object states of other virtual objects, and outputting the association relations. Thus, the interaction process among the virtual objects in the virtual scene can be displayed quickly and intuitively.
The embodiments of the application can also be implemented by means of cloud technology, which refers to a hosting technology that unifies serial resources such as hardware, software, and networks in a wide area network or a local area network to realize the computation, storage, processing, and sharing of data.
Cloud technology is a general term for network technology, information technology, integration technology, management platform technology, application technology, and the like based on the cloud computing business model; it can form a resource pool that is used on demand, flexibly and conveniently. Cloud computing technology will become an important support, since the background services of technical network systems require a large amount of computing and storage resources.
Referring to fig. 2, fig. 2 is a schematic structural diagram of an electronic device for implementing a data processing method in a virtual scene according to an embodiment of the present application, and in practical application, an electronic device 500 may be implemented as a server or a terminal in fig. 1, and an electronic device for implementing a data processing method in a virtual scene according to an embodiment of the present application is described. The electronic device 500 shown in fig. 2 includes: at least one processor 510, a memory 550, at least one network interface 520, and a user interface 530. The various components in electronic device 500 are coupled together by bus system 540. It is appreciated that bus system 540 is used to facilitate connected communications between these components. The bus system 540 includes a power bus, a control bus, and a status signal bus in addition to the data bus. The various buses are labeled as bus system 540 in fig. 2 for clarity of illustration.
The processor 510 may be an integrated circuit chip with signal processing capabilities, such as a general-purpose processor (for example, a microprocessor or any conventional processor), a digital signal processor (DSP, Digital Signal Processor), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The user interface 530 includes one or more output devices 531 that enable presentation of media content, including one or more speakers and/or one or more visual displays. The user interface 530 also includes one or more input devices 532, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 550 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like. Memory 550 may optionally include one or more storage devices physically located remote from processor 510.
Memory 550 includes volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM, Read Only Memory), and the volatile memory may be a random access memory (RAM, Random Access Memory). The memory 550 described in embodiments of the present application is intended to comprise any suitable type of memory.
In some embodiments, memory 550 is capable of storing data to support various operations, examples of which include programs, modules and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 551 including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and handling hardware-based tasks;
network communication module 552 is used to reach other computing devices via one or more (wired or wireless) network interfaces 520, exemplary network interfaces 520 include: bluetooth, wireless compatibility authentication (WiFi), and universal serial bus (USB, universal Serial Bus), etc.;
a presentation module 553 for enabling presentation of information (e.g., a user interface for operating a peripheral device and displaying content and information) via one or more output devices 531 (e.g., a display screen, speakers, etc.) associated with the user interface 530;
the input processing module 554 is configured to detect one or more user inputs or interactions from one of the one or more input devices 532 and translate the detected inputs or interactions.
In some embodiments, the data processing apparatus in the virtual scene provided by the embodiments of the present application may be implemented in software. Fig. 2 shows the data processing apparatus 555 in the virtual scene stored in the memory 550, which may be software in the form of a program, a plug-in, or the like, including the following software modules: an acquisition module 5551, an analysis module 5552, and an output module 5554. These modules are logical, and thus may be combined arbitrarily or further split according to the functions implemented. The functions of the respective modules are described below.
In other embodiments, the data processing apparatus in the virtual scene provided by the embodiments of the present application may be implemented in hardware. By way of example, the apparatus may be a processor in the form of a hardware decoding processor programmed to perform the data processing method in the virtual scene provided by the embodiments of the present application; for example, the hardware decoding processor may employ one or more application-specific integrated circuits (ASIC, Application Specific Integrated Circuit), DSPs, programmable logic devices (PLD, Programmable Logic Device), complex programmable logic devices (CPLD, Complex Programmable Logic Device), field-programmable gate arrays (FPGA, Field-Programmable Gate Array), or other electronic components.
Based on the above description of the data processing system and the electronic device in the virtual scene provided by the embodiments of the present application, the following describes the data processing method in the virtual scene provided by the embodiments of the present application. In some embodiments, the method may be implemented by a server or a terminal alone, or by the server and the terminal in cooperation. In some embodiments, the terminal or the server may implement the method by running a computer program. For example, the computer program may be a native program or a software module in an operating system; a native (Native) application (APP), i.e., a program that needs to be installed in the operating system to run, such as a client that supports virtual scenes (for example, a game APP); an applet, i.e., a program that only needs to be downloaded into a browser environment to run; or an applet that can be embedded in any APP. In general, the computer program may be any form of application, module, or plug-in.
The following describes a data processing method in a virtual scene according to the embodiment of the present application by using a terminal embodiment as an example. Referring to fig. 3, fig. 3 is a flowchart of a data processing method in a virtual scene according to an embodiment of the present application, and will be described with reference to the steps shown in fig. 3.
In step 101, the terminal acquires log data of a target virtual object in a virtual scene.
In actual implementation, the log data is used for recording interaction data of the target virtual object interacting with other virtual objects in the virtual scene.
Illustratively, taking a combat scene as the virtual scene, the player character controlled by the player is the target virtual object, and the player characters controlled by other players are the other virtual objects. In a combat scene, the log data may be used to record the interaction behaviors performed by the player character on other players in the combat scene, and the effects of performing these interaction behaviors on those other players.
In practical application, the terminal may acquire the log data as follows: for example, the terminal presents a human-computer interaction interface for inputting log data of the virtual scene, so that the user can input locally cached log data through that interface; the terminal may also automatically read locally cached log data in response to a trigger operation on an analysis function item for the log data.
In step 102, the interaction data recorded in the log data is analyzed to obtain at least one interaction behavior performed by the target virtual object in the virtual scene and the object states of the other virtual objects after each interaction behavior is performed.
In practical implementation, an interaction behavior in the log data refers to an interaction in which the target virtual object successfully interacts with other virtual objects, corresponding to a hit event in the interaction data. In practice, the target virtual object may release a skill multiple times without it acting on any other virtual object (i.e., a miss).
In actual implementation, the terminal extracts the event data included in the interaction data of the log data, where the event data includes hit events and state events; based on these events, the terminal determines which interaction behavior the target virtual object performed and the corresponding object state, where the object state may be used to indicate the interaction result of the interaction behavior on the other virtual objects.
For example, when the virtual scene is a virtual combat scene, the interaction behaviors may be regarded as the multiple skill-release behaviors of the player character (i.e., the target virtual object). The object state may then be the skill-use recovery (stun) state of the attacker after releasing a skill, or the hit-stun state of the defender after receiving the skill released by the attacker, and so on.
In some embodiments, referring to fig. 4, fig. 4 is a schematic diagram of the segmented-log-data acquisition manner for interaction pairs provided by an embodiment of the present application. With reference to the steps shown in fig. 4, the following describes the case where the log data records the target virtual object interacting with other virtual objects in at least two interaction pairs. The terminal may acquire the interaction behaviors within a single interaction pair in the following manner.
In step 201, the terminal segments the log data to obtain segmented log data corresponding to each interaction pair, where the segmented log data records interaction data of the target virtual object interacting with other virtual objects in the corresponding interaction pair.
In actual implementation, in log data corresponding to a virtual scene including a plurality of interaction pairs, the terminal may segment the log data to obtain log data corresponding to each interaction pair. The log data is divided by taking the interaction pair as a dividing unit, so that segmented log data corresponding to a plurality of interaction pairs are obtained.
Illustratively, referring to fig. 5, fig. 5 is a schematic diagram of event data in a game scene provided by an embodiment of the present application. In the figure, the keyword GamePlayStart marks the start of an interaction pair (a single match) and the keyword GamePlayEnd marks its end; the terminal may split the log data on the GamePlayStart and GamePlayEnd keywords to obtain interaction segment logs corresponding to the individual interaction pairs.
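As a minimal sketch of this segmentation step (the plain-text line format and the function name are illustrative assumptions, not the patent's actual log format), the split might look like:

```python
from typing import List, Optional

def split_into_interaction_pairs(log_lines: List[str]) -> List[List[str]]:
    """Split raw log lines into per-interaction-pair segments delimited by
    the GamePlayStart / GamePlayEnd keywords."""
    segments: List[List[str]] = []
    current: Optional[List[str]] = None
    for line in log_lines:
        if "GamePlayStart" in line:
            current = []                      # a new interaction pair begins
        elif "GamePlayEnd" in line:
            if current is not None:
                segments.append(current)      # close the finished pair
            current = None
        elif current is not None:
            current.append(line)              # event line inside the current pair
    return segments
```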
Step 202, analyzing the segmented log data corresponding to each interaction pair, and obtaining at least one interaction behavior of the target virtual object executed in the corresponding interaction pair and object states of other virtual objects after each interaction behavior is executed.
Continuing the above example, referring to fig. 5, the process events recorded in the segmented log data are parsed. Each process event includes basic information such as the event execution time point (time), the execution frame number (Frame), and the event type (Type).
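A hedged sketch of parsing such process events follows. The field names time, Frame, and Type follow fig. 5 as described above; the key=value line layout and the helper names are assumptions for illustration:

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProcessEvent:
    time: str    # event execution time point
    frame: int   # execution frame number
    type: str    # event type, e.g. a hit event or a state event

def parse_event(line: str) -> Optional[ProcessEvent]:
    """Parse one event line; assumes a key=value layout such as
    'time=2021-06-28 15:36:59, Frame=120, Type=Hit'."""
    fields = dict(re.findall(r"(\w+)=([^,]+)", line))
    if {"time", "Frame", "Type"} <= fields.keys():
        return ProcessEvent(fields["time"].strip(),
                            int(fields["Frame"]),
                            fields["Type"].strip())
    return None  # not a well-formed event line
```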
In step 103, an association relationship between each interaction behavior and the object state of the corresponding virtual object is established, and the association relationship is output.
In actual implementation, the terminal parses the log data to obtain the multiple interaction behaviors of the target virtual object, the object state of the target virtual object after executing each interaction behavior, and the object states of the reactions of the other virtual objects when an interaction behavior acts on them. Because the interaction behaviors obtained from the log data and the object states of the virtual objects after each interaction behavior are independent of one another, to facilitate analysis the terminal may establish the association relation between each interaction behavior and the object states of the corresponding virtual objects, and output the association relation in a visual form. For example, charts (bar charts, scatter charts, pie charts, etc.), comparison lists, or the highlighting of key effects can be used to show the association relation intuitively and clearly.
In some embodiments, referring to fig. 6, fig. 6 is a schematic diagram of an association relationship visualization output method provided by an embodiment of the present application. The association relation visualization output method is described with reference to the steps shown in fig. 6.
In step 1031, the terminal determines a first presentation graphic for identifying each interaction behavior and a second presentation graphic for identifying each object state.
In actual implementation, the terminal can display each parsed interaction behavior in a different display style. For example, the terminal can read configuration information, preconfigured for the virtual scene, that maps display graphics to interaction behaviors and their corresponding interaction states; the terminal can also randomly assign each interaction behavior a corresponding graphic, graphic color, and the like in the service code. In addition, for the first display graphics corresponding to the interaction behaviors, the interaction behaviors can be classified by type, that is, the interaction type of each interaction behavior is obtained and a corresponding first display graphic is set for each interaction type. The same interaction type may include at least one interaction behavior; that is, when the graphics are displayed, all interaction behaviors belonging to the same interaction type are displayed with the same (first) display graphic. Alternatively, the (first) display graphics may be set according to the number of interaction behaviors, i.e., one interaction behavior corresponds to one first display graphic without considering the interaction type.
Illustratively, for the virtual scene P, there are 5 interaction behaviors of interaction type one, where interaction type one may employ a red rectangle presentation, interaction type two may employ a blue five-pointed star presentation, and so on.
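The style configuration described above might be represented as a simple lookup table. A minimal sketch, in which the type names, shapes, colors, and the fallback default are illustrative assumptions:

```python
# Illustrative mapping from interaction type to its first display graphic;
# the type names, shapes, and colors are assumptions, not the patent's values.
TYPE_STYLES = {
    "type_one": {"shape": "rectangle", "color": "red"},
    "type_two": {"shape": "five_pointed_star", "color": "blue"},
}

def style_for(interaction_type: str) -> dict:
    """Look up the display style, falling back to a default gray rectangle."""
    return TYPE_STYLES.get(interaction_type, {"shape": "rectangle", "color": "gray"})
```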
In actual implementation, the terminal may further determine the display graphic used to display an object state based on the relationship between the object state corresponding to the interaction behavior and the second display graphic. Illustratively, taking the virtual scene as the combat scene of a fighting game, an object state may be understood as the result, for the player character itself and for other player characters, of performing an interaction behavior. For example, if the interaction behavior is player character A attacking player character B on the ground, player character A is briefly in an attack-recovery state after the attack, and player character B is briefly in a hit-stun state after being attacked.
And step 1032, drawing the graph based on the first display graph and the second display graph to obtain the target display graph for indicating the association relationship.
Referring to fig. 7, fig. 7 is a schematic diagram of interaction behavior and object state display provided by an embodiment of the present application. In the figure, the virtual scene is the combat scene of a fighting game (i.e., a virtual combat scene); the first display graphics shown are vertical rectangles displaying the hit event corresponding to each interaction behavior, and the second display graphics shown are horizontal rectangles displaying the state duration of each object state.
In some embodiments, referring to fig. 8, fig. 8 is a schematic diagram of an association outputting method provided by the embodiment of the present application, based on fig. 6, step 1032 may be implemented by steps 301 to 303:
in step 301, when the number of interactive actions is at least two, the terminal determines the execution sequence of each interactive action.
In actual implementation, when there are multiple interaction behaviors, the terminal may determine the execution sequence of each interaction behavior, that is, the order of execution among the interaction behaviors performed between the target virtual object and the other virtual objects in the virtual scene.
Step 302, a first time axis is drawn, and on the first time axis, a first display graph corresponding to each interaction behavior is drawn, and the position of the first display graph on the first time axis corresponds to the execution sequence of the corresponding interaction behavior.
In actual implementation, the terminal draws a first time axis, and draws a first display graph corresponding to each interaction behavior on the first time axis based on the execution sequence of each interaction behavior. Namely, the position of the first display graph corresponding to each interaction behavior on the first time axis corresponds to the execution sequence of the corresponding interaction behavior.
For example, assume there are 5 interaction behaviors in the virtual scene P whose nominal execution order is A1, A2, A3, A4, A5, that is, the target virtual object may execute the 5 interaction behaviors in that order. When the 5 interaction behaviors are drawn on the first time axis, the order A1 -> A2 -> A3 may appear, but an execution pattern such as A3 -> A2 -> A4 that is inconsistent with the nominal order may also occur.
In some embodiments, the first presentation graphic may be a columnar graphic, and the terminal may further draw the first presentation graphic corresponding to each interaction behavior in the following manner: when the interaction behaviors have injury attributes, acquiring injury attribute values of the interaction behaviors; correspondingly, the terminal draws columnar graphs corresponding to each interaction behavior on a first time axis based on the injury attribute value of each interaction behavior, wherein the height of the columnar graph is used for indicating the magnitude of the injury attribute value.
In actual implementation, the first display graph for an interaction behavior may be a histogram bar. When the interaction behavior has an injury attribute acting on a virtual object, the height of the bar may be used to indicate the magnitude of the injury attribute value of the interaction behavior on that virtual object; the higher the bar, the greater the injury attribute value.
Referring to fig. 9, fig. 9 is a schematic diagram of a columnar graph display provided by an embodiment of the present application. The horizontal axis marked by number 1 in the figure is the first time axis, on which the first display graphic corresponding to each interaction behavior may be drawn according to the execution sequence of the interaction behaviors; the vertical axis marked by number 2 is the (actual) injury attribute value of the interaction behavior (which may be represented using realhirtvalue).
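The drawing described for fig. 9 can be sketched with matplotlib; the patent does not prescribe a plotting library, and the sample events, field meanings, and styling below are illustrative assumptions:

```python
import matplotlib.pyplot as plt

# Illustrative hit events: (execution time in seconds, injury attribute value).
hits = [(1.0, 120), (2.5, 80), (4.0, 200), (5.2, 60)]

fig, ax = plt.subplots()
# One columnar graph per interaction behavior, positioned at its execution
# time point; the bar height indicates the injury attribute value.
ax.bar([t for t, _ in hits], [d for _, d in hits], width=0.2, color="tab:red")
ax.set_xlabel("first time axis (s)")
ax.set_ylabel("injury attribute value")
plt.show()
```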
In some embodiments, when the number of interactive behaviors is at least two, different display graphs are adopted for different interactive behaviors to perform different display while the association relationship is output. The different interactions here may be interactions of different interaction types, such as, in a combat scenario, interactions of attack type, interactions of defending type, etc. A corresponding presentation graph is set for each interaction type, and a plurality of interaction behaviors can be included under one interaction type, for example, the interaction behaviors of the attack type can include on-ground attack, on-air attack and the like. In addition, the different interactions may be different numbers of interactions, i.e. one interaction corresponds to one presentation graphic, irrespective of the type of interaction.
In some embodiments, the terminal may further draw the first presentation graphic corresponding to each interaction behavior by: the terminal obtains execution time points of each interaction behavior; and marking the execution time points of the interaction behaviors on a first time axis, and drawing a first display graph of the corresponding interaction behaviors at the execution time points.
In practical implementation, the terminal can display corresponding first display graphics based on the execution sequence of each interaction behavior on a first time axis. The terminal can also acquire execution time points of the interaction behaviors, mark the execution time points of the interaction behaviors on a first time axis, and draw a first display graph of the corresponding interaction behaviors at the execution time points.
For example, referring to fig. 9, number 3 in the figure marks the execution time point 2021-06-28 15:36:59 of the interaction behavior S1, at which the display graphic corresponding to S1, i.e., the bar shown in the figure, is drawn. In this way, the execution time of each interaction behavior and the execution order among the interaction behaviors can be read off more clearly.
In some embodiments, the terminal may also draw a line graph corresponding to the life values of other virtual objects by: when the number of other virtual objects is one, the terminal acquires the life values of the other virtual objects at different time points in the target time period; the target time period takes the execution time point of the first interaction action in at least two interaction actions as a starting time and the execution time point of the last interaction action as an ending time; and drawing a line graph corresponding to the life values of other virtual objects in the target time period on the first time axis.
In actual implementation, when the target virtual object executes interaction behaviors in the virtual scene and those behaviors act on other virtual objects, they affect the life values of the other virtual objects. The terminal can draw, on the first time axis, the life values of the other virtual objects at different time points within the target time period as a line graph. The target time period starts at the execution time point of the first of the at least two interaction behaviors (which are in execution order) and ends at the execution time point of the last. When there are multiple other virtual objects, there are correspondingly multiple line graphs, and the line graphs of different virtual objects are displayed with second display graphics of different display styles, i.e., each virtual object's line graph has its own style.
For example, referring to fig. 9, reference numeral 4 shows the line graph corresponding to the life value of one other virtual object within the target time period. When there are multiple other virtual objects, say 3, then 3 line graphs with different display styles may be drawn in the figure, each indicating the life value of one of the other virtual objects.
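Extending the previous sketch, the life-value line graph can share the first time axis via a secondary value axis, mirroring the layout of fig. 9; the sample data and the single-defender assumption are illustrative:

```python
import matplotlib.pyplot as plt

hits = [(1.0, 120), (2.5, 80), (4.0, 200), (5.2, 60)]      # (time, damage)
life = [(1.0, 1000), (2.5, 880), (4.0, 680), (5.2, 620)]   # (time, life value)

fig, ax = plt.subplots()
ax.bar([t for t, _ in hits], [d for _, d in hits], width=0.2, color="tab:red")
ax.set_xlabel("first time axis (s)")
ax.set_ylabel("injury attribute value")

ax2 = ax.twinx()  # share the first time axis; separate scale for life values
ax2.plot([t for t, _ in life], [hp for _, hp in life],
         marker="o", color="tab:green", label="other virtual object")
ax2.set_ylabel("life value")
ax2.legend(loc="upper right")
plt.show()
```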
And 303, drawing a second time axis, and drawing a second display graph corresponding to each object state on the second time axis, wherein the position of the second display graph on the second time axis corresponds to the position of the corresponding first display graph on the first time axis.
In actual implementation, the terminal displays the interaction behaviors separately from the object states that the virtual objects enter after those behaviors occur: it draws a first time axis and, on it, the first display graphics of the interaction behaviors according to their execution sequence; it then draws a second time axis and, on it, the second display graphics of the object states corresponding to the interaction behaviors, likewise according to the execution sequence. The position of each second display graphic on the second time axis corresponds to the position of the corresponding first display graphic on the first time axis.
For example, taking the virtual scene as the combat scene of a fighting game, referring to fig. 7, the first display graphics indicating the interaction behaviors are shown as vertical rectangles of different colors, and the second display graphics indicating the object states are shown as horizontal rectangles of different colors. Number 1 marks the first time axis and number 2 marks the second time axis. On the first time axis, the terminal draws the vertical rectangle of each interaction behavior at its execution time point, in execution order (the height of a vertical rectangle may indicate the magnitude of the injury value of an interaction behavior (combat behavior) that has an injury attribute). Likewise, for the interaction behavior S2 (marked with number 3) at execution time point 2021-06-28 15:37:23 on the first time axis, the object state of the player character is correspondingly presented as a horizontal rectangle at the same time point on the second time axis (the length of the horizontal rectangle may indicate the duration of the object state).
In some embodiments, the terminal may also output the number of interactions by: when the number of the interaction behaviors is at least two, the terminal determines the type of each interaction behavior; and counting the number of the interaction behaviors of each type, and outputting the number of the interaction behaviors of each type.
In actual implementation, the terminal can also count the interaction behaviors: after determining the interaction type of each interaction behavior, it counts the number of interaction behaviors of each interaction type and outputs the statistical result through the human-computer interaction interface. The usage frequency of each interaction behavior can thus be displayed intuitively and used to analyze the player character's (virtual object's) tendencies in using interaction behaviors.
For example, referring to fig. 10, fig. 10 is a schematic diagram of the visual output of interaction behavior statistics provided by an embodiment of the present application. The vertical axis shown in the figure carries the interaction behavior identifier (SkillId) of each interaction behavior, and the horizontal axis marked by number 1 indicates the usage count (UseCount) of each interaction behavior in the target period. Taking a combat scene as the virtual scene, each interaction behavior identifier on the vertical axis can be regarded as the skill identifier of a skill used by the player character in the combat scene.
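A hedged sketch of this per-type usage count, assuming parsed hit events tagged with a skill identifier (the event shape and field name are assumptions):

```python
from collections import Counter

# Illustrative parsed hit events, each tagged with its skill identifier.
hit_events = [{"SkillId": "S1"}, {"SkillId": "S2"}, {"SkillId": "S1"},
              {"SkillId": "S3"}, {"SkillId": "S1"}]

# UseCount per interaction behavior (skill) in the target period.
use_count = Counter(event["SkillId"] for event in hit_events)
print(use_count)  # Counter({'S1': 3, 'S2': 1, 'S3': 1})
```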
In some embodiments, the terminal may also output the sum of the injury attribute values of each type of interaction behavior as follows: when the interaction behaviors have an injury attribute, the terminal acquires the injury attribute value of each interaction behavior, counts the sum of the injury attribute values of each type of interaction behavior, and outputs the sum for each type.
In actual implementation, when the interaction behaviors in the virtual scene have injury attributes, the terminal can count the sum of the injury attribute values of the interaction behaviors and output the corresponding statistical result through the target display graph. In practical application, the statistics can be computed per match: taking a single interaction match as the statistical unit, the terminal determines the injury attribute value accumulated by each interaction behavior against the virtual object within that match.
For example, referring to fig. 10, the statistical chart denoted by reference numeral 2 is a visualization of accumulated injury attribute values: it shows, for the target time period, the accumulated injury attribute value of each interaction behavior output by the terminal. The vertical axis shows the interaction behavior identifier (skillId) indicating each interaction behavior, and the horizontal axis shows the sum (accumulated value) of the injury attribute values of each interaction behavior in the target time period.
In some embodiments, the terminal may output the sum of the injury attribute values of each type of interaction behavior as follows: the terminal draws a horizontal axis indicating the injury attribute value and a vertical axis marked with the identifier of each interaction behavior; in the coordinate system formed by the horizontal axis and the vertical axis, it draws a target columnar graph for each type of interaction behavior based on the sum of the injury attribute values of that type, where the height of the target columnar graph corresponds to that sum.
In practical implementation, the terminal may use a histogram: when drawing the statistical result of the injury attribute values of the interaction behaviors within the target time period, it draws a horizontal axis indicating the injury attribute value and a vertical axis marked with the identifier of each interaction behavior, and then draws the sum of the injury attribute values of each interaction behavior, in histogram form, in the coordinate system formed by the two axes.
For example, referring again to fig. 10, the statistical chart denoted by reference numeral 2 is the visualization of accumulated injury attribute values: the vertical axis shows the interaction behavior identifier of each interaction behavior, and the horizontal axis shows the sum (accumulated value) of the injury attribute values of each interaction behavior in the target time period.
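As a hedged illustration of this chart, a short Python/matplotlib sketch that aggregates illustrative hit events per skill identifier and draws the horizontal bar chart described above (all data and names are assumptions):

```python
# Hypothetical sketch: horizontal axis = summed injury attribute values,
# vertical axis = interaction behavior (skill) identifiers.
from collections import defaultdict
import matplotlib.pyplot as plt

# Illustrative hit events: (skill id, injury attribute value)
hits = [(1800120, 4.63), (1800121, 7.10), (1800120, 4.63), (1800122, 2.50)]

damage_sum = defaultdict(float)
for skill_id, value in hits:
    damage_sum[skill_id] += value

skill_ids = [str(s) for s in damage_sum]
plt.barh(skill_ids, list(damage_sum.values()))
plt.xlabel("sum of injury attribute values")
plt.ylabel("skillId")
plt.tight_layout()
plt.show()
```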
In some embodiments, the terminal may further count the number of times target interaction behaviors are used: when the interaction behaviors have injury attributes, the terminal acquires the injury attribute value of each interaction behavior; from the at least two interaction behaviors, it screens out those whose injury attribute value reaches the injury attribute value threshold as target interaction behaviors; it then determines the type of each target interaction behavior, counts the number of target interaction behaviors of each type, and outputs the counts.
In actual implementation, the interaction behaviors can be screened: when they have injury attributes, the terminal selects target interaction behaviors based on the injury attribute values, counts the target interaction behaviors, draws the statistical result, and outputs it through the human-computer interaction interface. For example, referring to fig. 10, reference numeral 3 denotes a statistical diagram of target interaction behaviors. Taking the game scene as a combat scene in a fighting game, the interaction behaviors whose injury attribute value (here, the flying force) exceeds the injury attribute value threshold (flying force threshold = 100) are taken as target interaction behaviors (i.e., behaviors with a flying force greater than 100 are screened out). A horizontal axis indicating the injury attribute value (flying force) is drawn, together with a vertical axis marked with the identifier of each interaction behavior; in the coordinate system formed by the two axes, a target columnar graph is drawn for each type of target interaction behavior based on its flying force, where the height of the target columnar graph corresponds to the flying force of that type of interaction behavior.
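A minimal sketch of the screening-and-counting step, with an assumed event layout and the threshold value from the example above:

```python
# Hypothetical sketch: keep only interaction behaviors whose injury
# attribute value (e.g., "flying force") reaches the threshold, then
# count the surviving target behaviors per type.
from collections import Counter

FLYING_FORCE_THRESHOLD = 100  # illustrative threshold from the example

events = [
    {"type": "launcher", "flying_force": 150},
    {"type": "launcher", "flying_force": 90},
    {"type": "uppercut", "flying_force": 120},
]

targets = [e for e in events if e["flying_force"] >= FLYING_FORCE_THRESHOLD]
print(Counter(e["type"] for e in targets))  # Counter({'launcher': 1, 'uppercut': 1})
```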
In some embodiments, referring to fig. 11, fig. 11 is a schematic diagram of determining a target association relationship according to an embodiment of the present application; the process is described with reference to the steps shown in fig. 11.
In step 401, the terminal analyzes the interaction data recorded in the log data to obtain the interaction behaviors executed by other virtual objects against the target virtual object, and the object state of the target virtual object after each of those interaction behaviors is executed.
In actual implementation, the log data cached by the terminal to which the target virtual object belongs records the interaction data of the interaction behaviors executed by other virtual objects against the target virtual object. The terminal analyzes this interaction data to obtain the interaction behaviors the target virtual object received from other virtual objects and the object state of the target virtual object when each such interaction behavior acted on it.
In step 402, the terminal establishes a target association relationship between the interaction behaviors executed by each of the other virtual objects and the object states of the target virtual object, and outputs the target association relationship.
When the target virtual object executes an interaction behavior, the other virtual objects can make a corresponding interactive response, that is, execute another interaction behavior against the target virtual object. In an actual virtual scene, within the target time period, the target virtual object is not always the one actively executing interaction behaviors: after it executes an interaction behavior, the other virtual objects that receive it can respond with interaction behaviors of their own. Over this back-and-forth interaction process, for the target virtual object, a target association relationship between the interaction behaviors executed by each other virtual object and the object states of the target virtual object can be established and output in a visual manner.
Taking the virtual scene as a combat scene in a fighting game: the terminal analyzes the log data of player A and obtains that, at time point t1, the player character controlled by player A hit the player character controlled by player B with skill A1; player character A is then in the hard-straight (recovery) state of skill A1, and player character B is in the hard-straight (hitstun) state caused by skill A1. Player character B then releases skill B1 to answer skill A1, putting player character B in the hard-straight state of skill B1 and player character A in the hard-straight state caused by skill B1. The terminal may then establish, for player character A, an association relationship among: using skill A1, the hard-straight state of using skill A1, receiving skill B1, and the hard-straight state caused by skill B1; and, for player character B, an association relationship among: receiving skill A1, the hard-straight state caused by skill A1, using skill B1, and the hard-straight state of using skill B1. The association relationships are output in a visual manner, for example as bar graphs.
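The association relationships built above can be represented by a simple record structure; the following Python sketch is a hedged illustration whose layout and field names are assumptions, not the patent's implementation:

```python
# Hypothetical sketch of the target association relationship built in the
# example above; record layout and field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class AssociationRecord:
    character: str                              # which player character the record describes
    links: list = field(default_factory=list)   # ordered (event, state) pairs

record_a = AssociationRecord("player A")
record_a.links += [
    ("uses skill A1", "hard-straight from using skill A1"),
    ("receives skill B1", "hard-straight caused by skill B1"),
]

record_b = AssociationRecord("player B")
record_b.links += [
    ("receives skill A1", "hard-straight caused by skill A1"),
    ("uses skill B1", "hard-straight from using skill B1"),
]

for record in (record_a, record_b):
    print(record.character, "->", record.links)
```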
In some embodiments, after the terminal outputs the association relationship, it may also output the values of statistical parameters as follows: the terminal acquires at least one statistical parameter of the target virtual object in the virtual scene, where the statistical parameter includes at least one of: the usage rate of the interaction behaviors, the interaction success rate of the interaction behaviors, and the accumulated interaction attribute value of the interaction behaviors; the terminal then outputs the value of the at least one statistical parameter.
In actual implementation, after analyzing the log data, the terminal can automatically output the statistical values of parameters such as the usage rate of the interaction behaviors, their interaction success rate, and their accumulated interaction attribute values, or it can output the values of target statistical parameters in response to a selection operation on the statistical-parameter function items in the human-computer interaction interface.
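A minimal sketch of how these three statistical parameters might be derived from parsed events; the event fields and the formulas for the rates are assumptions consistent with the description, not the patent's implementation:

```python
# Hypothetical sketch: derive the three statistical parameters named above
# from parsed interaction events; field names are assumptions.
def compute_stats(events, total_behaviors):
    used = len(events)
    hits = sum(1 for e in events if e.get("hit"))
    return {
        # usage rate: share of all recorded behaviors that used this skill
        "usage_rate": used / total_behaviors if total_behaviors else 0.0,
        # interaction success rate: share of uses that actually hit
        "success_rate": hits / used if used else 0.0,
        # accumulated interaction attribute value (e.g., total damage dealt)
        "attribute_sum": sum(e.get("damage", 0.0) for e in events if e.get("hit")),
    }

skill_events = [
    {"hit": True, "damage": 4.63},
    {"hit": False},
    {"hit": True, "damage": 4.63},
]
print(compute_stats(skill_events, total_behaviors=10))
```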
By applying the embodiment of the present application, after the log data of the target virtual object in the virtual scene is loaded, the log data is analyzed to obtain the interaction behaviors executed by the target virtual object in the virtual scene and the object state information for each interaction behavior, and a visual graph (playback graph) of the virtual scene is drawn based on the execution order of the interaction behaviors. The effect of each interaction behavior in the virtual scene can thus be quickly understood and analyzed through the visual graph, and the user conveniently obtains an overall view of the interaction process in the virtual scene without watching a long playback video. The analysis data obtained this way is more accurate and can identify information invisible to the naked eye, and the statistics on the usage counts, injury attribute values, and similar information of the interaction behaviors facilitate statistical analysis of the target interaction object's behavior.
In the following, an exemplary application of the embodiment of the present application in a practical application scenario will be described.
Take the virtual scene in a 1V1 fighting game as an example: the player character (i.e., the target virtual object above) controlled by the player fights multiple matches against other player characters (i.e., the other virtual objects above), and corresponding log data is output. In the related art, the whole match and the multi-match process are understood and analyzed through video playback. However, confirming the combat process through video has drawbacks: it is time-consuming, the obtained information is inaccurate, the process and state information cannot be parameterized, the precision is insufficient, and event-driven, frame-by-frame analysis is impossible. The embodiment of the present application therefore provides a data processing method suited to virtual combat scenes: based on the (combat) log data of the virtual combat scene, it visualizes the combat process between the player and other players by acquiring the data corresponding to the interaction behaviors of each player character during combat and drawing a playback graph of the virtual combat scene in time order, so that the whole match and the multi-match process can be understood and analyzed through the view.
First, a method for processing data in a virtual scene provided by the embodiment of the application is described from a product side.
In actual implementation, the client to which the player character belongs loads and analyzes the log data of the player character in the virtual combat scene to obtain the player character's combat behavior data (i.e., the interaction behaviors mentioned above), and draws the events recorded in that data, such as hit events, skill events, and state transitions (i.e., the object-state transitions mentioned above) between player characters, based on the execution order of the events and the key frames corresponding to them. The goal is to quickly understand, through the view, the combat process and the behavior-change information of both parties participating in combat in the virtual combat scene.
Illustratively, referring to fig. 12, the figure shows the combat process of player character 12 in a virtual combat scene, drawn in the execution order of the events, together with the corresponding state-change information. From the figure, the overall view of player character 12's entire combat process can be seen, and the skill list, the order in which skills were used, and the state-change process for each skill used by player character 12 can be displayed intuitively.
In actual implementation, the terminal can visualize hit events during combat as a histogram: hit events of different skills can be drawn in different colors based on their skill identifiers, and the height of a rectangle in the histogram can represent the magnitude of the injury attribute value caused by the hit event (realHurtValue in the figure); the opponent's life information (life value, currentEnemyPercent in the figure) can also be represented as a line chart.
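A hedged sketch of this combined bar-plus-line visualization, using illustrative timestamps and values (the label names realHurtValue and currentEnemyPercent come from the figure; everything else is assumed):

```python
# Hypothetical sketch of the visualization described above: hit events as
# bars (height = realHurtValue) plus the opponent's remaining life
# (currentEnemyPercent) as a line, on one shared time axis.
import matplotlib.pyplot as plt

hit_times = [1.0, 2.2, 3.5, 5.0]          # illustrative hit timestamps
hurt_values = [4.63, 7.1, 2.5, 4.63]      # illustrative realHurtValue per hit
hp_times = [0.0, 1.0, 2.2, 3.5, 5.0]
enemy_percent = [100, 95, 88, 85, 80]     # illustrative currentEnemyPercent

fig, ax = plt.subplots(figsize=(8, 3))
ax.bar(hit_times, hurt_values, width=0.15, color="tab:red", label="realHurtValue")
ax2 = ax.twinx()  # second y-axis for the life-percentage line
ax2.plot(hp_times, enemy_percent, color="tab:blue", label="currentEnemyPercent")
ax.set_xlabel("time (s)")
ax.set_ylabel("injury attribute value")
ax2.set_ylabel("enemy life (%)")
fig.tight_layout()
plt.show()
```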
In practical implementation, the terminal may also visualize a state diagram of the object state of each player character. Referring to fig. 12, reference numeral 2 denotes the state-transition process of the player character. It should be noted that a player character's state (i.e., the aforementioned object state) can only be one state at any given moment, never two states at the same moment; that is, the character state is unique with respect to time. From the character-state changes, the character's behavior at each time point can be confirmed, for example what type of instruction was issued, or whether input was received while airborne (InAirHurt). By comparing the hit events of player character A against the state timeline of player character B, information such as the effect of each skill and the number of hard-straight frames can be confirmed, and ultimately the cause of defeat can be determined.
In actual implementation, comparing the hits (states) of player character A (B) against those of player character B (A) makes it convenient for the user to obtain an overall view of the combat process in the match without watching a long playback video, and the analysis data obtained this way is more accurate and can identify information invisible to the naked eye. For example, referring to fig. 12, fig. 12 is a schematic visualization of log data provided by the embodiment of the present application: the player corresponding to player character 12 imports the corresponding log file in their own game client, and the human-computer interaction interface is then presented, in which player character 11 fights player character 12 in the virtual combat scene.
Next, the data processing method in a virtual scene provided by the embodiment of the present application is described from the technical side. It is chiefly a processing method for visualizing combat-process data and comprises three parts: 1) acquiring the event-change information of the combat process in the virtual combat scene, such as collision events and state-machine transitions, which can be customized for the particular game being integrated; 2) computing and interpreting the relevant event information to obtain the effective information related to the player character; 3) presenting the effective information visually and statistically in time order.
In a virtual action game, the events and states of the two player characters A and B influence each other; for example, A's skill-use event is associated with A's skill hard-straight state, and a hit event is associated with B's hard-straight state. The aim of the visualization is to present the two sides' seemingly independent timelines according to the logical associations of the game design, so that problem points can be found quickly.
Across the different events and states, the trigger frames of the two player characters A and B are consistent for the jump process of the character state machine. Moreover, when describing player character A (which may be taken as the attacker), each state time and duration frame of player character B (which may be taken as the victim) is a range that past evaluation and testing methods had difficulty expressing accurately.
Next, three exemplary scenarios are described. Scenario one: analyzing a player character's combo. A combo is characterized by player character A landing multiple attack events while player character B cannot break free and remains in the hard-straight state; if a state change occurs in the middle, it indicates an incorrect combo link or a dropped combo. Scenario two: automatically obtaining the longest combo path within a single match. The maximum controlled-frame duration can be obtained from the single-match information, and the corresponding skill-use path is obtained from the associated attack hit events, following the analysis principle of scenario one. Scenario three: analyzing the player character's skill use and skill efficiency. Statistical analysis of the relevant attributes of each skill quickly yields each skill's usage rate, hit rate, accumulated injury amount, and so on. For this type of gameplay (unlike traditional fighting-game types), the effect produced by each hit has a different numerical impact depending on the opponent's health status; moreover, for fighting games, simple numerical comparison is not meaningful, and numerical-efficiency analysis is performed mainly through the course of the match.
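The scenario-one rule, that a combo is broken if the victim's state changes between consecutive hits, can be sketched directly; the state name, frame representation, and detection rule below are assumptions for illustration, not the patent's implementation:

```python
# Hypothetical sketch of the scenario-one combo analysis: a combo is a run
# of attacker hit events during which the victim never leaves the
# hard-straight state; a state change between two hits flags a broken link.
def find_broken_links(hit_frames, victim_states):
    """hit_frames: sorted hit frames; victim_states: sorted (frame, state)."""
    broken = []
    for prev_hit, next_hit in zip(hit_frames, hit_frames[1:]):
        # states the victim entered between the two hits
        between = [s for f, s in victim_states if prev_hit < f <= next_hit]
        if any(s != "HardStraight" for s in between):
            broken.append((prev_hit, next_hit))  # combo dropped here
    return broken

hit_frames = [100, 110, 125]
victim_states = [(100, "HardStraight"), (118, "Idle"), (125, "HardStraight")]
print(find_broken_links(hit_frames, victim_states))  # [(110, 125)]
```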
In practical implementation, referring to fig. 13, fig. 13 is a flowchart of a data processing method in a virtual scene according to an embodiment of the present application, and the data processing method in the virtual scene is described with reference to the steps shown in fig. 13.
Step 1: data acquisition. In actual implementation, in the client code, the various log outputs for key action events, such as hits, state transitions, and action jumps, are located; a log switch controls the output of these logs, and real-player logs are automatically collected as the tool's analysis objects. Log output is added through key functions in the game, partly by hooking them, for example for hit events and state-machine jumps; events with distribution notifications are obtained by acquiring the event-distribution information, and for some internal running states the internal event types are additionally wrapped and log output is added to the corresponding functions. This yields the single-match process log.
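As a hedged sketch of consuming such logs, here is a tiny parser for an assumed line format; the format and the regular expression are pure illustration, since the patent does not specify the log syntax:

```python
# Hypothetical sketch: extract key combat events from raw log lines. The
# line format and regular expression are assumptions for illustration.
import re

# Assumed line shape: "<timestamp> [EVENT] key=value key=value ..."
LINE_RE = re.compile(r"^(?P<ts>\S+)\s+\[(?P<event>\w+)\]\s+(?P<body>.*)$")

def parse_line(line):
    match = LINE_RE.match(line)
    if not match:
        return None
    fields = dict(kv.split("=", 1) for kv in match.group("body").split())
    return {"ts": match.group("ts"), "event": match.group("event"), **fields}

sample = "15:37:23.120 [HitEvent] skillId=1800120 realHurtValue=4.63"
print(parse_line(sample))
```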
Referring to fig. 14, fig. 14 is a schematic diagram of the log data of a corresponding interaction match provided in an embodiment of the present application; the figure shows the acquired data, and the data in the red frames is the key data of each corresponding type.
In actual implementation, the data structure corresponding to the skills released by the player character may be created as follows (the original publication shows this structure only as an embedded image, which is not reproduced here).
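As a stand-in for the missing image, the following dataclass is a hedged reconstruction of what such a released-skill structure might contain; every field name is an assumption:

```python
# Hypothetical reconstruction of a released-skill event structure; every
# field name here is an assumption (the original shows it only as an image).
from dataclasses import dataclass

@dataclass
class SkillEvent:
    frame: int            # key frame at which the skill was released
    skill_id: int         # identifier of the released skill (skillId)
    caster_id: int        # character releasing the skill
    target_id: int        # character the skill acts on
    hurt_value: float     # injury attribute value caused on hit
    hit: bool             # whether the skill actually hit
```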
The data structure corresponding to the character's state can be constructed similarly (again shown in the original publication only as an image).
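Likewise, a hedged reconstruction of the character-state structure, with all field names assumed:

```python
# Hypothetical reconstruction of a character-state structure; field names
# are assumptions, since the original embeds the structure as an image.
from dataclasses import dataclass

@dataclass
class CharacterState:
    character_id: int     # which player character the state belongs to
    state_name: str       # e.g. "HardStraight", "InAirHurt"
    start_frame: int      # frame at which the state was entered
    end_frame: int        # frame at which the state was left
    source_skill_id: int  # skill that caused the state, if any
```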
and 2, data segmentation. In actual implementation, single-office segmentation and version-based data classification are performed based on a single-office keyword (gameplystart, gamePalyEnd) in a log; a large number of logs are split into data units in units of a single office. And (3) carrying out single-office segmentation based on the flow logs in the data logs, carrying out type division based on the logs of different types, and storing the logs of the same type (such as hit events, state events and the like) in the same single office into a corresponding designated data set for analysis.
Step 3: visual analysis. In actual practice, the single-match process log is drawn and analyzed using web-based graphical presentation, enabling rapid analysis of and reference to character parameters, states, and behavior trends.
For example, referring to fig. 15, fig. 15 is a schematic diagram of the visual analysis of a continuously executed interaction behavior (a combo) provided by an embodiment of the present application. As shown, during a 3-hit ground combo (the hit events denoted by reference numeral 1), the hit side's state contains 2 frames in which control is broken; that is, there is a problem with the combo's hard-straight timing here. The same skill release performed in the air produces a normal controlled duration. The tool can locate such problems accurately and quickly. Accordingly, in processing each match, the matches are identified and split from the log data by flow, the process information within each match is acquired, drawings of different dimensions are produced, and the trigger and end of each event are marked against a unified time axis.
Referring to fig. 16, fig. 16 is an exemplary graphical drawing of the log data of one interaction match provided by an embodiment of the present application. In the main graph, the key parameters convey the order and the associations of the events, identified mainly by frame number or time; the main graph (left side) therefore uses bar charts and line charts to present the hit events and the players' current life values, and draws the state-diagram transitions through a custom chart. For the state-transition diagram, tracks are drawn per state class, and horizontal bars running from the start time to the end time characterize the corresponding state durations. For the upper and lower panels of the main graph, the corresponding key information, such as the skill id and the duration, is obtained through the logical association relationships. The auxiliary graph (statistical graph) uses a histogram to present a frequency comparison of the analysis data for the corresponding match.
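A hedged sketch of the state-track drawing (per-state horizontal bars from start time to end time), using matplotlib's broken_barh with illustrative intervals:

```python
# Hypothetical sketch of the state-transition track drawing described above:
# one horizontal track per state class, with bars from start to end time.
import matplotlib.pyplot as plt

# Illustrative state intervals: state name -> list of (start, duration)
tracks = {
    "Idle":         [(0.0, 1.0), (3.0, 0.5)],
    "HardStraight": [(1.0, 0.8)],
    "InAirHurt":    [(1.8, 1.2)],
}

fig, ax = plt.subplots(figsize=(8, 2.5))
for row, (state, spans) in enumerate(tracks.items()):
    ax.broken_barh(spans, (row - 0.3, 0.6))  # one bar per state duration
ax.set_yticks(range(len(tracks)))
ax.set_yticklabels(list(tracks))
ax.set_xlabel("time (s)")
plt.tight_layout()
plt.show()
```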
Step 4: data statistics. In actual implementation, the combat data collected for each version is statistically stored in the database, keyed by version, character, and skill, which makes statistical analysis convenient, such as tracking changes in skill-use tendencies and player-use tendencies between versions.
In practical implementation, some key data, such as the usage share of certain skills, may be automatically inspected and warned about. Referring to fig. 17, fig. 17 is a schematic diagram of the notification information provided by the embodiment of the present application: it shows, for the player character with character id = 18, the hit information, skill events, state transitions, and related information for the first round (Round 1) of the combat process in the virtual combat scene, for example that the hitting skill used was 1800120 and that the injury attribute value it caused was 4.63.
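A minimal sketch of such an automatic check, flagging skills whose usage share exceeds an assumed threshold (the threshold and field names are illustrative):

```python
# Hypothetical sketch of the automatic inspection-and-warning step: flag any
# skill whose usage share exceeds a configured threshold.
from collections import Counter

def check_usage_share(skill_ids, threshold=0.5):
    counts = Counter(skill_ids)
    total = sum(counts.values())
    warnings = []
    for skill_id, count in counts.items():
        share = count / total
        if share > threshold:  # skill dominates usage: worth a warning
            warnings.append(f"skill {skill_id} usage share {share:.0%} > {threshold:.0%}")
    return warnings

print(check_usage_share([1800120, 1800120, 1800120, 1800121]))
```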
By applying the embodiment of the present application, after the log data of the virtual combat scene is loaded, events such as hit events, skill events, and state transitions between player characters are drawn based on time order and key frames, so that the combat process and behavior-change information of both parties in a match can be understood quickly through the drawing. This enables value assessment of action skills; the combat process of the A/B parties in a designated match can be understood quickly; players' skill tendencies can be analyzed; the correctness of action jumps can be confirmed; and dynamic data evaluation for character-strength balancing is achieved.
It will be appreciated that in the embodiments of the present application, related data such as user information is involved, and when the embodiments of the present application are applied to specific products or technologies, user permissions or agreements need to be obtained, and the collection, use and processing of related data need to comply with relevant laws and regulations and standards of relevant countries and regions.
Continuing with the description of an exemplary structure, implemented as software modules, of the data processing device 555 in a virtual scene provided by an embodiment of the present application: in some embodiments, as shown in fig. 2, the software modules stored in the data processing device 555 in a virtual scene in the memory 550 may include:
the obtaining module 5551 is configured to obtain log data of a target virtual object in a virtual scene, where the log data is used to record interaction data of the target virtual object in the virtual scene for interacting with other virtual objects;
the analysis module 5552 is configured to analyze the interaction data recorded in the log data to obtain at least one interaction behavior executed by the target virtual object in the virtual scene, and an object state of the other virtual objects after the execution of each interaction behavior;
and the output module 5553 is configured to establish an association relationship between each interaction behavior and the object states of the other virtual objects, and output the association relationship.
In some embodiments, the output module is further configured to determine a first presentation graphic for identifying each of the interactions and a second presentation graphic for identifying each of the object states; and drawing the graph based on the first display graph and the second display graph to obtain a target display graph for indicating the association relationship.
In some embodiments, the output module is further configured to determine an execution order of each of the interactions when the number of interactions is at least two; drawing a first time axis, and drawing first display graphics corresponding to each interaction behavior on the first time axis, wherein the positions of the first display graphics on the first time axis correspond to the execution sequence of the corresponding interaction behaviors; drawing a second time axis, and drawing a second display graph corresponding to each object state on the second time axis; the position of the second display graph on the second time axis corresponds to the position of the corresponding first display graph on the first time axis.
In some embodiments, the first display graph is a columnar graph, and the output module is further configured to obtain an injury attribute value of each of the interactions when the interactions have an injury attribute; and drawing columnar graphs corresponding to the interaction behaviors on the first time axis based on the injury attribute values of the interaction behaviors, wherein the heights of the columnar graphs are used for indicating the magnitude of the injury attribute values.
In some embodiments, the output module is further configured to obtain an execution time point of each of the interaction behaviors; marking execution time points of the interaction behaviors on the first time axis, and drawing a first display graph of the corresponding interaction behaviors at the execution time points.
In some embodiments, the output module is further configured to obtain a life value of the other virtual object at a different time point within the target time period; the target time period takes the execution time point of the first interaction action in the at least two interaction actions as a starting time and the execution time point of the last interaction action as an ending time; and drawing a line graph corresponding to the life values of the other virtual objects in the target time period on the first time axis.
In some embodiments, the output module is further configured to determine a type of each of the interactions when the number of interactions is at least two; and counting the number of the interaction behaviors of each type, and outputting the number of the interaction behaviors of each type.
In some embodiments, the output module is further configured to obtain an injury attribute value of each of the interactions when the interactions have an injury attribute, and count a sum of the injury attribute values of each of the interactions of the types; and outputting the sum of the injury attribute values of the interaction behaviors of the types.
In some embodiments, the output module is further configured to draw a horizontal axis for indicating an injury attribute value, and draw a vertical axis marked with an identification of each of the interactions; drawing a target columnar graph corresponding to each type of interaction behavior based on the sum of injury attribute values of each type of interaction behavior in a coordinate system formed by the horizontal axis and the vertical axis; the height of the target columnar graph corresponds to the sum of injury attribute values of the interaction behaviors of the types.
In some embodiments, the output module is further configured to obtain an injury attribute value of each of the interaction behaviors when the interaction behaviors have an injury attribute, and to screen out, from the at least two interaction behaviors, those whose injury attribute value reaches the injury attribute value threshold as target interaction behaviors; correspondingly, the output module is further configured to determine the type of each of the target interaction behaviors and output the number of the target interaction behaviors of each type.
In some embodiments, the output module is further configured to analyze the interaction data recorded in the log data to obtain interaction behaviors performed by the other virtual objects on the target virtual object, and an object state of the target virtual object after the interaction behaviors of the other virtual objects are performed; and establishing a target association relation between the interaction behaviors executed by the other virtual objects and the object states of the target virtual objects, and outputting the target association relation.
In some embodiments, when the log data records the interaction data of the target virtual object interacting with the other virtual objects in at least two interaction matches, the data processing device in the virtual scene further includes a data splitting module; after the log data of the target virtual object in the virtual scene is obtained, the data splitting module is configured to split the log data into segmented log data corresponding to each interaction match, where the segmented log data records the interaction data of the target virtual object interacting with other virtual objects in the corresponding interaction match;
in some embodiments, the analysis module is further configured to analyze, for the segmented log data corresponding to each interaction match, the interaction data recorded therein, to obtain at least one interaction behavior executed by the target virtual object in the corresponding interaction match and the object state of the other virtual objects after each interaction behavior is executed.
In some embodiments, the output module is further configured to obtain at least one statistical parameter of the target virtual object in the virtual scene; wherein the statistical parameter comprises at least one of: the utilization rate of the interaction behavior, the interaction success rate of the interaction behavior and the interaction attribute value accumulation amount of the interaction behavior; outputting the value of the at least one statistical parameter.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions, so that the computer device executes the data processing method in the virtual scene according to the embodiment of the application.
An embodiment of the present application provides a computer-readable storage medium storing executable instructions which, when executed by a processor, cause the processor to perform the data processing method in a virtual scene provided by the embodiment of the present application, for example the data processing method in a virtual scene shown in fig. 3.
In some embodiments, the computer readable storage medium may be an FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM, or may be any device including one of or any combination of the above memories.
In some embodiments, the executable instructions may be in the form of programs, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, the executable instructions may, but need not, correspond to files in a file system, may be stored as part of a file that holds other programs or data, for example, in one or more scripts in a hypertext markup language (HTML, hyper Text Markup Language) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
As an example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices located at one site or, alternatively, distributed across multiple sites and interconnected by a communication network.
In summary, in the embodiment of the present application, after the log data of the target virtual object in the virtual scene is loaded, the log data is analyzed to obtain the interaction behaviors executed by the target virtual object in the virtual scene and the object state information for each interaction behavior, and a visual graph (playback graph) of the virtual scene is drawn based on the execution order of the interaction behaviors. The effect of each interaction behavior in the virtual scene can thus be quickly understood and analyzed through the visual graph, the user conveniently obtains an overall view of the interaction process without watching a long playback video, the analysis data obtained is more accurate and can identify information invisible to the naked eye, and the statistics on usage counts, injury attribute values, and similar information facilitate statistical analysis of the target interaction object's behavior.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (15)

1. A method of data processing in a virtual scene, the method comprising:
acquiring log data of a target virtual object in a virtual scene, wherein the log data is used for recording interaction data of the target virtual object in the virtual scene for interacting with other virtual objects;
analyzing the interaction data recorded in the log data to obtain at least one interaction behavior of the target virtual object executed in the virtual scene and object states of the other virtual objects after the interaction behaviors are executed;
and establishing association relations between the interaction behaviors and the object states of the other virtual objects, and outputting the association relations.
2. The method of claim 1, wherein the outputting the association relationship comprises:
determining a first presentation graphic for identifying each of the interactive behaviors and a second presentation graphic for identifying each of the object states;
And drawing the graph based on the first display graph and the second display graph to obtain a target display graph for indicating the association relationship.
3. The method of claim 2, wherein the performing graphics drawing based on the first presentation graphic and the second presentation graphic to obtain a target presentation graphic for indicating the association relationship comprises:
when the number of the interactive behaviors is at least two, determining the execution sequence of each interactive behavior;
drawing a first time axis, and drawing first display graphics corresponding to each interaction behavior on the first time axis, wherein the positions of the first display graphics on the first time axis correspond to the execution sequence of the corresponding interaction behaviors;
drawing a second time axis, and drawing a second display graph corresponding to each object state on the second time axis;
the position of the second display graph on the second time axis corresponds to the position of the corresponding first display graph on the first time axis.
4. The method of claim 3, wherein the first presentation graphic is a columnar graphic, the method further comprising:
When the interaction behaviors have injury attributes, acquiring injury attribute values of the interaction behaviors;
and drawing a first display graph corresponding to each interaction behavior on the first time axis, wherein the first display graph comprises:
and drawing columnar graphs corresponding to the interaction behaviors on the first time axis based on the injury attribute values of the interaction behaviors, wherein the heights of the columnar graphs are used for indicating the magnitude of the injury attribute values.
5. The method of claim 3, wherein the drawing, on the first time axis, a first presentation graph corresponding to each of the interactions includes:
acquiring execution time points of each interaction behavior;
marking execution time points of the interaction behaviors on the first time axis, and drawing a first display graph of the corresponding interaction behaviors at the execution time points.
6. A method as claimed in claim 3, wherein the method further comprises:
acquiring life values of the other virtual objects at different time points in a target time period;
the target time period takes the execution time point of the first interaction action in the at least two interaction actions as a starting time and the execution time point of the last interaction action as an ending time;
And drawing a line graph corresponding to the life values of the other virtual objects in the target time period on the first time axis.
7. The method of claim 1, wherein the method further comprises:
when the number of the interactive behaviors is at least two, determining the type of each interactive behavior;
and counting the number of the interaction behaviors of each type, and outputting the number of the interaction behaviors of each type.
8. The method of claim 7, wherein the method further comprises:
when the interactive behaviors have injury attributes, acquiring injury attribute values of the interactive behaviors, and counting the sum of the injury attribute values of the interactive behaviors of the types;
and outputting the sum of the injury attribute values of the interaction behaviors of the types.
9. The method of claim 8, wherein said outputting a sum of injury attribute values for each of said types of said interaction behavior comprises:
drawing a horizontal axis for indicating the injury attribute value, and drawing a vertical axis marked with the identification of each interactive operation;
drawing a target columnar graph corresponding to each type of interaction behavior based on the sum of injury attribute values of each type of interaction behavior in a coordinate system formed by the horizontal axis and the vertical axis;
The height of the target columnar graph corresponds to the sum of injury attribute values of the interaction behaviors of the types.
10. The method of claim 7, wherein the method further comprises:
when the interaction behaviors have injury attributes, acquiring injury attribute values of the interaction behaviors;
screening out the interaction behaviors of which the injury attribute values reach the injury attribute value threshold from the at least two interaction behaviors as target interaction behaviors;
the determining the type of each interaction behavior comprises the following steps:
determining the type of each target interaction behavior;
the outputting the number of interactions of each of the types includes:
outputting the number of the target interaction behaviors of each type.
11. The method of claim 1, wherein the method further comprises:
analyzing the interaction data recorded in the log data to obtain the interaction behaviors executed by the other virtual objects aiming at the target virtual object and the object states of the target virtual object after the interaction behaviors of the other virtual objects are executed;
and establishing a target association relation between the interaction behaviors executed by the other virtual objects and the object states of the target virtual objects, and outputting the target association relation.
12. The method of claim 1, wherein after the outputting the association relationship, the method further comprises:
acquiring at least one statistical parameter of the target virtual object in the virtual scene;
wherein the statistical parameter comprises at least one of: the utilization rate of the interaction behavior, the interaction success rate of the interaction behavior and the interaction attribute value accumulation amount of the interaction behavior;
outputting the value of the at least one statistical parameter.
13. A data processing apparatus in a virtual scene, the apparatus comprising:
the system comprises an acquisition module, a storage module and a storage module, wherein the acquisition module is used for acquiring log data of a target virtual object in a virtual scene, and the log data is used for recording interaction data of the target virtual object in the virtual scene for interacting with other virtual objects;
the analysis module is used for analyzing the interaction data recorded in the log data to obtain at least one interaction behavior of the target virtual object executed in the virtual scene and object states of the other virtual objects after the interaction behaviors are executed;
and the output module is used for establishing the association relation between each interaction behavior and the object states of the other virtual objects and outputting the association relation.
14. An electronic device, the electronic device comprising:
a memory for storing executable instructions;
a processor for implementing the data processing method in a virtual scene according to any of claims 1 to 12 when executing executable instructions stored in said memory.
15. A computer readable storage medium storing executable instructions which when executed by a processor implement the method of data processing in a virtual scene according to any one of claims 1 to 12.
CN202210447672.1A 2022-04-26 2022-04-26 Data processing method, device, equipment and storage medium in virtual scene Pending CN116983620A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210447672.1A CN116983620A (en) 2022-04-26 2022-04-26 Data processing method, device, equipment and storage medium in virtual scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210447672.1A CN116983620A (en) 2022-04-26 2022-04-26 Data processing method, device, equipment and storage medium in virtual scene

Publications (1)

Publication Number Publication Date
CN116983620A true CN116983620A (en) 2023-11-03

Family

ID=88532664

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210447672.1A Pending CN116983620A (en) 2022-04-26 2022-04-26 Data processing method, device, equipment and storage medium in virtual scene

Country Status (1)

Country Link
CN (1) CN116983620A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination