WO2023246270A1 - Information processing method and apparatus, storage medium, and electronic device (信息处理方法、装置和存储介质及电子设备) - Google Patents

Information processing method and apparatus, storage medium, and electronic device

Info

Publication number
WO2023246270A1
WO2023246270A1 PCT/CN2023/089654 CN2023089654W
Authority
WO
WIPO (PCT)
Prior art keywords
game
time unit
prediction information
executed
virtual
Prior art date
Application number
PCT/CN2023/089654
Other languages
English (en)
French (fr)
Inventor
蒙朦
衡建宇
黄杰怡
彭云韬
王远琴
叶振斌
邓民文
李思琴
汪文俊
刘林
赖林
覃洪杨
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2023246270A1

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/537 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
    • A63F13/5372 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/56 Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A63F13/57 Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
    • A63F13/573 Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using trajectories of game objects, e.g. of a golf ball according to the point of impact
    • A63F13/58 Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
    • A63F13/85 Providing additional services to players
    • A63F13/86 Watching games played by other players
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/08 Learning methods
    • G06N3/092 Reinforcement learning
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/308 Details of the user interface

Definitions

  • the present application relates to the field of computers, and specifically to information processing technology.
  • Embodiments of the present application provide an information processing method, device, storage medium, and electronic device, to at least solve the technical problem of insufficiently comprehensive information display.
  • an information processing method is provided.
  • The method is executed by an electronic device and includes: displaying a running screen corresponding to a target virtual game in a first time unit, where the target virtual game is a virtual game in which at least one simulation object participates, and the simulation object is a virtual object driven by artificial intelligence and used to simulate and control a virtual character participating in the target virtual game; obtaining game reference data corresponding to the running screen, where the game reference data is the game data fed back by the virtual character while participating in the target virtual game in the first time unit; and displaying, based on the game reference data, execution prediction information corresponding to candidate operations to be executed, where a candidate operation to be executed is an operation to be performed by the virtual character in a second time unit, the execution prediction information is used to provide an auxiliary reference related to the game reference data for a control instruction to be initiated, and the control instruction to be initiated is an instruction to be initiated by the simulation object in the second time unit to control the virtual character to perform the candidate operation.
  • an information processing device is also provided.
  • The device is deployed on an electronic device and includes: a first display unit, configured to display a running screen corresponding to the target virtual game in a first time unit, where the target virtual game is a virtual game in which at least one simulation object participates, and the simulation object is a virtual object driven by artificial intelligence and used to simulate and control a virtual character participating in the target virtual game; an acquisition unit, configured to obtain the game reference data corresponding to the running screen, where the game reference data is the game data fed back when the virtual character participates in the target virtual game in the first time unit; and a second display unit, configured to display, based on the game reference data, the execution prediction information corresponding to the candidate operations to be executed, where a candidate operation to be executed is an operation to be performed by the virtual character in the second time unit, and the execution prediction information is used to provide the auxiliary reference described above.
  • A computer-readable storage medium is also provided; it includes a stored computer program that, when run by an electronic device, performs the above information processing method.
  • a computer program product includes a computer program, and the computer program is stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer program from the computer-readable storage medium, and the processor executes the computer program, so that the computer device performs the above information processing method.
  • An electronic device is also provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the above information processing method through the computer program.
  • In the embodiments of the present application, the running screen corresponding to the target virtual game in a first time unit can be displayed, where the first time unit can be the current time unit, the target virtual game is a virtual game in which at least one simulation object participates, and the simulation object is a virtual object driven by artificial intelligence and used to simulate and control a virtual character participating in the target virtual game. The game reference data corresponding to the running screen is obtained, where the game reference data is the game data fed back by the virtual character when participating in the target virtual game in the first time unit. Based on the game reference data, execution prediction information corresponding to the candidate operations to be executed is displayed, where a candidate operation to be executed is an operation to be performed by the virtual character in a second time unit, the execution prediction information is used to provide an auxiliary reference related to the game reference data for the control instruction to be initiated, the control instruction to be initiated is an instruction to be initiated by the simulation object in the second time unit to control the virtual character to perform the candidate operation, and the second time unit is after the first time unit. That is to say, the execution prediction information can serve as the basis for deciding which candidate operation to execute.
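As a rough illustration (not part of the claimed method, and with all function and variable names hypothetical), the per-time-unit flow described above (display the screen, obtain the game reference data, display the execution prediction information, then let the simulation object pick the operation for the next time unit) could be sketched as:

```python
# Hypothetical sketch of one time unit of the described flow.
# All names are illustrative; the patent does not specify an API.

def process_time_unit(screen, get_reference_data, predict):
    """Return (execution prediction info, chosen candidate operation)."""
    reference_data = get_reference_data(screen)   # game data fed back for this screen
    prediction = predict(reference_data)          # probability per candidate operation
    chosen = max(prediction, key=prediction.get)  # operation for the second time unit
    return prediction, chosen

# Toy stand-ins for the data-collection and prediction steps:
prediction, chosen = process_time_unit(
    screen={"frame": 1},
    get_reference_data=lambda s: {"hp": 80, "resources": 3},
    predict=lambda data: {"operation A": 0.6, "operation B": 0.3, "operation C": 0.1},
)
print(chosen)  # operation A
```

In this sketch the same call would be repeated for each subsequent time unit, using the latest running screen as input.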
  • Figure 1 is a schematic diagram of an application environment of an information processing method according to an embodiment of the present application;
  • Figure 2 is a schematic flowchart of an information processing method according to an embodiment of the present application;
  • Figure 3 is a schematic diagram of an information processing method according to an embodiment of the present application;
  • Figure 4 is a schematic diagram of another information processing method according to an embodiment of the present application;
  • Figure 5 is a schematic diagram of another information processing method according to an embodiment of the present application;
  • Figure 6 is a schematic diagram of another information processing method according to an embodiment of the present application;
  • Figure 7 is a schematic diagram of another information processing method according to an embodiment of the present application;
  • Figure 8 is a schematic diagram of another information processing method according to an embodiment of the present application;
  • Figure 9 is a schematic diagram of another information processing method according to an embodiment of the present application;
  • Figure 10 is a schematic diagram of another information processing method according to an embodiment of the present application;
  • Figure 11 is a schematic diagram of another information processing method according to an embodiment of the present application;
  • Figure 12 is a schematic diagram of another information processing method according to an embodiment of the present application;
  • Figure 13 is a schematic diagram of another information processing method according to an embodiment of the present application;
  • Figure 14 is a schematic diagram of another information processing method according to an embodiment of the present application;
  • Figure 15 is a schematic diagram of another information processing method according to an embodiment of the present application;
  • Figure 16 is a schematic diagram of another information processing method according to an embodiment of the present application;
  • Figure 17 is a schematic diagram of another information processing method according to an embodiment of the present application;
  • Figure 18 is a schematic diagram of another information processing method according to an embodiment of the present application;
  • Figure 19 is a schematic diagram of another information processing method according to an embodiment of the present application;
  • Figure 20 is a schematic diagram of another information processing method according to an embodiment of the present application;
  • Figure 21 is a schematic diagram of an information processing device according to an embodiment of the present application;
  • Figure 22 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • The embodiments of the present application apply Artificial Intelligence (AI) to virtual game scenarios: AI makes decisions while the virtual game runs, determining the operations that the simulation object will control the virtual character to perform in the next time unit.
  • an information processing method is provided.
  • The above information processing method can be, but is not limited to, applied in the environment shown in Figure 1, which may include, but is not limited to, a user device 102 and a server 112.
  • The user device 102 may include, but is not limited to, a display 108, a processor 106, and a memory 104.
  • the server 112 includes a database 114 and a processing engine 116.
  • the user device 102 obtains the running screen 1002 corresponding to the target virtual game in the first time unit;
  • the server 112 obtains the game reference data corresponding to the running screen 1002 from the database 114; furthermore, the server 112 obtains the execution prediction information corresponding to the candidate operation to be executed based on the game reference data through the processing engine 116;
  • In steps S112-S114, the execution prediction information is sent to the user device 102 through the network 110.
  • The user device 102 displays the execution prediction information on the display 108 through the processor 106, and stores the execution prediction information in the memory 104.
  • The above steps can be completed with the assistance of the server; that is, the server performs steps such as obtaining the game reference data and obtaining the execution prediction information, thereby reducing the processing pressure on the user device 102.
  • the user equipment 102 includes but is not limited to handheld devices (such as mobile phones), laptop computers, desktop computers, vehicle-mounted equipment, etc. This application does not limit the specific implementation of the user equipment 102.
  • the information processing method includes:
  • S202: display the running screen corresponding to the target virtual game in the first time unit, where the target virtual game is a virtual game in which at least one simulation object participates, and the simulation object is a virtual object driven by artificial intelligence and used to simulate and control a virtual character participating in the target virtual game;
  • The execution prediction information is used to provide, for the control instruction to be initiated, an auxiliary reference related to the game reference data; the control instruction to be initiated is an instruction to be initiated by the simulation object in the second time unit to control the virtual character to perform the candidate operation, and the second time unit is after the first time unit.
  • The above information processing method can be, but is not limited to, applied in virtual game scenarios involving artificial intelligence, such as AI-versus-AI virtual games (target virtual games) and AI-versus-human virtual games (target virtual games). Taking an AI-versus-human virtual game as an example, during the human-machine battle, in addition to displaying the AI's decision data, key data affecting its decisions, such as the operation process and returns of the AI's neural network, can be displayed in a clear and easy-to-understand manner, helping users participating in or watching the battle to fully learn the AI's decision-making method.
  • The AI in each camp uses computer vision, machine learning, and other technologies to independently perform game tasks in the virtual game, striving for final victory. Before the AI initiates an operation instruction, it uses computer vision to collect information such as the game state, and uses machine learning to decide which operation instruction to execute. The above decision-making information is then presented on the viewing interface in an easy-to-understand manner and combined with the game battle screen, helping watching users fully understand the AI's decision-making method, increasing the watchability of AI battles, and making the AI explainable.
  • The time unit may be a time period within a preset time range, and within this time period the target virtual game may include, but is not limited to, at least one frame of the running screen.
  • the embodiment of the present application does not limit the preset time range.
  • the preset time range may be, for example, 1 second, 1 minute, 1 hour, 5 seconds, 10 seconds, 2 minutes, etc., and can be set according to actual needs. It can be understood that the smaller the preset time range, the shorter the time period represented by the time unit and the closer it is to a moment.
  • the time unit may be a time period including one frame of running picture.
  • The running screen corresponding to the first time unit can be, but is not limited to being, understood as the current frame of the target virtual game's running screen, and the running screen corresponding to the second time unit as the next frame of the target virtual game's running screen.
  • the running screen may be, but is not limited to, understood as a game screen in the virtual scene of the target virtual game.
  • The process of obtaining the game reference data corresponding to the running screen may, but is not limited to, use computer vision to identify, collect, and measure the running screen, and further perform graphics processing. This may involve, but is not limited to, image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, and simultaneous localization and mapping technologies.
  • the simulation object is a virtual object driven by artificial intelligence and used to simulate and manipulate virtual characters to participate in the target virtual game, or can be understood as a digital computer or a machine controlled by a digital computer.
  • the game reference data is the game data fed back when the virtual character participates in the target virtual game for the first time unit, such as game status data, game resource data, etc.
  • The game state data may be, but is not limited to, used to represent the individual state of a virtual character participating in the target virtual game, the local state of multiple virtual characters, and/or the overall state of each camp participating in the target virtual game. The game resource data may be used to represent, but is not limited to, the states of virtual resources, such as virtual resources already obtained by a virtual character (held state), virtual resources not yet obtained by a virtual character (unheld state), and the distribution of virtual resources in the virtual scene of the target virtual game (distribution state).
  • The candidate operations to be performed by the virtual character in the second time unit can be, but are not limited to being, understood as operations that the virtual character has not yet performed in the current time unit (the first time unit) but may perform in the next time unit (the second time unit).
  • For example, the candidate operations that the virtual character 304 controlled by the simulation object 302 may perform in the next time unit (the second time unit) include operation A, operation B, and operation C. After obtaining the execution prediction information 306, the simulation object 302 decides which of operation A, operation B, and operation C to perform in the next time unit (the second time unit). Suppose the simulation object 302 decides on operation A: it initiates a control instruction corresponding to operation A to instruct the virtual character 304 to perform operation A. The execution prediction information 306 can be, but is not limited to, prediction information obtained from the game reference data corresponding to the running screen 308-1, where the running screen 308-1 can be, but is not limited to, the running screen of the target virtual game in the first time unit, and the running screen 308-2 can be, but is not limited to, the running screen of the target virtual game in the second time unit.
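The decision among operation A, operation B, and operation C can be illustrated by normalizing raw network scores into a probability distribution and taking the most probable operation. The softmax below is an assumption about how such neural-network scores might be normalized; the patent does not prescribe any particular normalization:

```python
import math

def softmax(scores):
    """Convert raw scores into a probability distribution over operations."""
    m = max(scores.values())  # subtract the max for numerical stability
    exps = {op: math.exp(s - m) for op, s in scores.items()}
    total = sum(exps.values())
    return {op: e / total for op, e in exps.items()}

scores = {"A": 2.0, "B": 1.0, "C": 0.0}  # hypothetical network outputs
probs = softmax(scores)
decided = max(probs, key=probs.get)      # "A" for these scores
```

The resulting distribution is exactly the kind of per-operation probability that the execution prediction information 306 makes visible to spectators.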
  • the control instruction is an instruction initiated when the simulation object controls the virtual character to perform a candidate operation.
  • The way for the virtual character to participate in the target virtual game may include, but is not limited to, the virtual character performing the target operation.
  • the virtual character's execution of the target operation may be, but is not limited to, in response to a control instruction initiated by the simulation object; or, in other words, the way the simulation object participates in the target virtual game may be, but is not limited to, the simulation object's initiation of a control instruction to control the virtual character to perform the target operation.
  • More diverse game information may also be displayed, but is not limited to: basic information of at least one simulation object participating in the target virtual game, such as the simulation object's name and historical performance; progress information of the target virtual game, such as the virtual resources currently held by the virtual character, the props currently configured for the virtual character, and the virtual character's current battle information; and progress prediction information of the target virtual game, such as prediction of the battle result of the target virtual game and prediction of how its progress will develop.
  • The execution prediction information may be displayed in a manner related to, but not limited to, information such as the candidate operations and the virtual characters. For a movement operation, the execution prediction information may be, but is not limited to, displayed in the form of the selection priority of each direction. For example, the execution prediction information 402 includes the directions in which the movement operation can be executed, and the selection priority of each direction can be, but is not limited to, reflected in the form of length: the length is positively related to the selection priority, that is, the longer a direction's indicator, the higher that direction's selection priority; in other words, the longest direction is the direction in which the movement operation is most likely to be executed.
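One hypothetical way to realize the length-based display is to scale each direction's indicator in proportion to its selection probability, so the most probable direction gets the longest indicator; this rendering rule is an assumption consistent with the positive relationship described above, not something the patent specifies:

```python
def direction_lengths(probabilities, max_length=100):
    """Map each direction's selection probability to an indicator length;
    the most probable direction receives the full max_length."""
    top = max(probabilities.values())
    return {d: round(max_length * p / top) for d, p in probabilities.items()}

# Hypothetical per-direction probabilities for a movement operation:
lengths = direction_lengths({"north": 0.5, "east": 0.3, "south": 0.15, "west": 0.05})
print(lengths["north"])  # 100
```

A spectator reading such a display can recover the ranking of directions directly from the relative indicator lengths.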
  • For an attack operation, the execution prediction information may be, but is not limited to, displayed in the form of the selection priority of each target object. For example, the execution prediction information 502 includes each target object that can be selected for the attack operation, and the selection priority of each target object can be, but is not limited to, reflected in the form of a shadow: the display area of the shadow is positively related to the selection priority, that is, the larger the shadow's display area, the higher the target object's selection priority. Here, target object B has the highest selection priority; in other words, when the attack operation is performed, target object B has the highest probability of being targeted.
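Similarly, a hypothetical realization of the shadow-based display makes each target's shadow area proportional to its selection probability; the radius then scales with the square root of the probability so that the area, not the radius, carries the proportionality. This is an illustrative rendering choice, not a rule stated in the patent:

```python
import math

def shadow_radii(probabilities, max_total_area=400.0):
    """Give each target a circular shadow whose area is proportional
    to its selection probability."""
    return {t: math.sqrt(max_total_area * p / math.pi)
            for t, p in probabilities.items()}

# Hypothetical per-target probabilities for an attack operation:
radii = shadow_radii({"target A": 0.2, "target B": 0.7, "target C": 0.1})
# target B, the most probable target, gets the largest shadow
```

Scaling area rather than radius avoids visually exaggerating the gap between high- and low-priority targets.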
  • In the embodiments of the present application, the running screen of the current time unit is analyzed to obtain the game reference data of the current time unit, and the decision-making process of computing the execution prediction information from the game reference data is then displayed. This visually presents the decision-making process of the artificial intelligence participating in the virtual game and improves the comprehensiveness of information display.
  • The running screen 604 corresponding to the target virtual game in the first time unit is displayed, as shown in (a) of Figure 6, where the target virtual game is a virtual game in which at least one simulation object participates, and the simulation object is a virtual object driven by artificial intelligence and used to simulate and control the virtual character 602 to participate in the target virtual game.
  • The game reference data 606 corresponding to the running screen 604 is obtained, where the game reference data 606 is the game data fed back by the virtual character 602 when participating in the target virtual game in the first time unit. Based on the game reference data 606, the execution prediction information 608 corresponding to the candidate operations to be performed by the virtual character 602 in the second time unit (such as operation A, operation B, and operation C) is displayed, where the execution prediction information 608 is used to provide an auxiliary reference related to the game reference data 606 for the control instruction to be initiated by the simulation object in the second time unit; the control instruction is the instruction initiated when the simulation object controls the virtual character 602 to perform the candidate operation, and the second time unit is after the first time unit. The simulation object then makes a decision based on the execution prediction information 608, for example initiating a control instruction corresponding to operation A in the second time unit to control the virtual character 602 to perform operation A (attacking the enemy character), as shown in (c) of Figure 6. Likewise, the running screen corresponding to the target virtual game in the second time unit can be reused to obtain the latest game reference data, from which the execution prediction information for the time unit after the second time unit is obtained; the principle is similar and is not elaborated here.
  • the execution prediction information corresponding to the candidate operation to be executed is displayed, including:
  • First probability distribution information of at least two candidate operations to be performed is displayed, where the first probability distribution information is used to predict the probability of the virtual character performing each of the at least two candidate operations in the second time unit.
  • the first probability distribution information may be, but is not limited to, displayed in a prediction information list, where the prediction information list may be, but is not limited to, configured with the probability distribution information of the various candidate operations associated with each virtual character.
  • a first number of candidate operations with greater probability are displayed first. For example, if the probability of candidate operation 1 is 70%, the probability of candidate operation 2 is 50%, and the probability of candidate operation 3 is 20%, then candidate operation 1 and candidate operation 2 are displayed first.
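As a hedged sketch of the "display the first number of candidate operations with greater probability" rule above (the function name and data layout are illustrative assumptions, not part of the patent):

```python
# Sketch: pick the first n candidate operations with the greatest predicted
# probability for display, matching the 70% / 50% / 20% example above.
def top_candidates(probabilities, n):
    """Return the n candidate operations with the highest probability."""
    ranked = sorted(probabilities.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:n]]

probs = {"candidate operation 1": 0.70,
         "candidate operation 2": 0.50,
         "candidate operation 3": 0.20}
shown = top_candidates(probs, 2)  # candidate operations 1 and 2 are shown first
```

The same selection would apply to pointing objects, movement directions, and virtual props elsewhere in this section.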
  • the running screen 702 and the prediction information list 704 corresponding to the target virtual game in the first time unit are displayed, and the prediction information list 704 displays the first probability distribution information of at least two candidate operations to be performed by the virtual characters (such as virtual character A, virtual character B, and virtual character C), wherein the first probability distribution information is used to predict the probability that each virtual character performs each of the at least two candidate operations in the second time unit; specifically, the probabilities of the movement operations associated with virtual character A, virtual character B, and virtual character C (such as a movement operation in the first direction, a movement operation in the second direction, etc.) are displayed. Taking virtual character A as an example, the probability of the movement operation in the first direction is "44.7%", the probability of the movement operation in the second direction is "16.5%", etc.;
  • this embodiment can also, based on the scene shown in Figure 7 and as shown in Figure 8, display the probabilities of the skill release operations associated with virtual character A, virtual character B, and virtual character C (such as the release operation of the first skill, the release operation of the second skill, etc.). Taking virtual character A as an example, the probability of the release operation of the A1 skill is "54.7%", the probability of the release operation of the A2 skill is "16.5%", the probability of the release operation of the A3 skill is "24.7%", the probability of the release operation of the A4 skill is "12.57%", etc.;
  • this embodiment can also display the probability of the prop configuration operations associated with the virtual character (such as the configuration operation of prop 1, the configuration operation of prop 2, etc.), where the prop configuration operations may include, but are not limited to, replacing, disassembling, installing, purchasing, selling, depositing into the first virtual container, taking out from the second virtual container, etc.
  • first probability distribution information of at least two candidate operations to be performed is displayed, wherein the first probability distribution information is used to predict the probability that the virtual character performs each of the at least two candidate operations in the second time unit, thereby achieving the purpose of intuitively displaying information using probability distributions and the technical effect of improving the intuitiveness of information display.
  • the execution prediction information corresponding to the candidate operation to be executed is displayed, including:
  • the second probability distribution information may be, but is not limited to, displayed in a prediction information list, where the prediction information list may be, but is not limited to, configured with the probability distribution information of the various pointing objects associated with each virtual character.
  • a second number of pointing objects with higher probability are displayed first. For example, if the probability of pointing object 1 is 70%, the probability of pointing object 2 is 50%, and the probability of pointing object 3 is 20%, then pointing object 1 and pointing object 2 are displayed first.
  • the running screen 702 and the prediction information list 704 corresponding to the target virtual game in the first time unit are displayed, and the prediction information list 704 displays the second probability distribution information of the virtual characters (such as virtual character A, virtual character B, and virtual character C) performing candidate operations on at least two pointing objects, where the second probability distribution information is used to predict the probability that the virtual character performs a candidate operation on each of the at least two pointing objects in the second time unit; specifically, the pointing objects associated with each virtual character are displayed, such as pointing object B (virtual character B) and pointing object C (virtual character C) associated with virtual character A, pointing object A (virtual character A) and pointing object C (virtual character C) associated with virtual character B, and pointing object A (virtual character A) and pointing object B (virtual character B) associated with virtual character C. Taking virtual character A as an example, the probability that its attack operation is executed on each pointing object is displayed.
  • the second probability distribution information of the virtual character performing candidate operations on at least two pointing objects is displayed, wherein the second probability distribution information is used to predict the probability that the virtual character performs a candidate operation on each of the at least two pointing objects in the second time unit, thereby achieving the purpose of intuitively displaying information using probability distributions and the technical effect of improving the intuitiveness of information display.
  • displaying the running screen corresponding to the target virtual game in the first time unit includes: displaying the running screen in the first interface area of the game viewing interface;
  • displaying execution prediction information corresponding to the candidate operation to be executed includes: displaying the execution prediction information in a second interface area in the game viewing interface.
  • the running screen is displayed in the first interface area of the game viewing interface; the execution prediction information is displayed in the second interface area of the game viewing interface.
  • the running screen is displayed in the first interface area of the game viewing interface 1002 (the middle area of the game viewing interface 1002), and the execution prediction information (such as the target probability distribution of camp A, the movement probability distribution of camp A, the target probability distribution of camp B, the movement probability distribution of camp B, etc.) is displayed in the second interface area of the game viewing interface 1002; in addition, the viewing interface 1002 also displays basic game battle data, winning rate prediction, camp A's economic composition, camp A's damage proportion, camp B's economic composition, camp B's damage proportion, a minimap, etc.
  • the running screen is displayed in the first interface area of the viewing interface, and the execution prediction information is displayed in the second interface area of the viewing interface, thereby achieving the purpose of displaying more comprehensive information in the viewing interface and the technical effect of improving the comprehensiveness of information display.
  • displaying the running screen in the first interface area of the game viewing interface includes: displaying, in the first sub-area of the first interface area, the main running screen of the target virtual game corresponding to the first time unit, and displaying, in the second sub-area of the first interface area, the running sub-screen corresponding to the target virtual game in the first time unit, wherein the main running screen is a real-time screen in the virtual scene of the target virtual game, and the running sub-screen is a thumbnail screen of the virtual scene;
  • the execution prediction information can also be displayed in the running sub-screen; displaying the execution prediction information in the second interface area of the viewing interface includes: displaying the execution prediction information in the third sub-area of the second interface area.
  • the running screen and the execution prediction information may be, but are not limited to, displayed in the same or different interface areas; in other words, the running screen may be displayed in the first interface area of the viewing interface and the execution prediction information in the second interface area of the viewing interface, but the display is not limited to placing them in different interface areas, and they can also be displayed in the same interface area.
  • the real-time position of each virtual character can be, but is not limited to, displayed on a small map, with an arrow on each virtual character's avatar representing the direction with the highest probability in that virtual character's movement probability distribution; in addition, but not limited to this, the two directions with the highest probability can be displayed simultaneously to help watching users quickly understand the virtual character's decision-making information on the small map (running sub-screen) and better understand the AI's decision-making ideas.
  • when multiple virtual characters are predicted to attack the same target, the event can be, but is not limited to, judged as concentrated fire, and the event can be displayed on the mini-map, so that users can intuitively understand the intentions of the AI.
  • the main running screen 1104 of the target virtual game in the first time unit is displayed in the viewing interface 1102, together with the running sub-screen 1106 corresponding to the first time unit, wherein the main running screen 1104 is the real-time screen in the virtual scene of the target virtual game, the running sub-screen 1106 is the thumbnail screen of the virtual scene, and the thumbnail screen displays the character position identifiers of the virtual characters; in addition, execution prediction information may also be displayed in, but is not limited to, the running sub-screen 1106 and the viewing interface 1102.
  • the main running screen of the target virtual game corresponding to the first time unit is displayed in the first sub-area of the first interface area, and the running sub-screen corresponding to the first time unit of the target virtual game is displayed in the second sub-area of the first interface area, wherein the main running screen is the real-time screen in the virtual scene of the target virtual game and the running sub-screen is the thumbnail screen of the virtual scene; the execution prediction information is displayed in the third sub-area of the second interface area, thereby achieving the purpose of efficiently displaying information on the viewing interface and the technical effect of improving the display efficiency of information.
  • the execution prediction information is displayed on the running sub-screen, including:
  • the movement direction indicator is displayed at the associated position of the character position indicator in the running sub-screen, where the movement direction indicator is used to provide a direction reference for the movement instruction to be initiated by the simulated object in the second time unit, and the movement instruction is used to instruct the virtual character to move.
  • displaying the movement direction identifier at the associated position of the character position identifier may, but is not limited to, be understood as displaying the execution prediction information in combination with the character position identifier on the running sub-screen within the second sub-area.
  • the execution prediction information is displayed on the running sub-screen 1106; specifically, the movement direction identifier 1204 is displayed at the associated position of the character position identifier 1202, wherein the movement direction identifier 1204 is used to provide a direction reference for the movement instruction to be initiated by the simulated object in the second time unit, and the movement instruction is used to instruct the virtual character to move.
  • the movement direction identifier is displayed at the associated position of the character position identifier in the running sub-screen, thereby achieving the purpose of conveniently utilizing the information of the running sub-screen and displaying the execution prediction information more intuitively, thus achieving the technical effect of improving the intuitiveness of information display.
  • the character position identifier of the virtual character is displayed on the thumbnail screen
  • the execution prediction information includes the operation trajectory identifier
  • the execution prediction information is displayed on the running sub-screen, including:
  • the operation trajectory identifier associated with the target candidate operation is highlighted at the associated position of the target character position identifier in the running sub-screen, where the operation trajectory identifier is used to provide a pointing reference for the manipulation instruction to be initiated by the simulated object in the second time unit, the target candidate operation is the candidate operation pointing to the same object, and the target character position identifier is the character position identifier corresponding to the virtual character for which the manipulation instruction is to be initiated in the second time unit.
  • the operation trajectory identifier associated with the target candidate operation is highlighted at the associated position of the target character position identifier. For example, if the execution prediction information indicates that a number of virtual characters exceeding a preset threshold will all perform attack operations on the same virtual character, the operation trajectory identifier associated with this attack operation is highlighted at the associated position of each corresponding character position identifier.
  • the operation trajectory identifier associated with the target candidate operation is highlighted at the associated position of the target character position identifier, thereby achieving the purpose of highlighting execution prediction information that meets specific conditions and the technical effect of improving the display efficiency of execution prediction information.
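One way to realize the threshold rule above is to count, per pointed object, how many virtual characters are predicted to attack it; every name and the threshold value below are illustrative assumptions:

```python
from collections import Counter

# Sketch: a target is flagged as concentrated fire (and its attack trajectory
# highlighted) when the number of virtual characters predicted to attack it
# reaches a preset threshold.
def focus_fire_targets(predicted_targets, threshold):
    """predicted_targets maps each attacker to its most likely pointed object."""
    counts = Counter(predicted_targets.values())
    return [target for target, n in counts.items() if n >= threshold]

predicted = {"character A": "enemy X",
             "character B": "enemy X",
             "character C": "enemy Y"}
highlighted = focus_fire_targets(predicted, threshold=2)  # ["enemy X"]
```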
  • displaying execution prediction information corresponding to candidate operations to be executed includes at least one of the following:
  • the execution prediction information corresponding to the pointing operation to be executed is displayed, where the execution prediction information corresponding to the pointing operation to be executed is used to provide a pointing reference for the control instruction to be initiated by the simulation object in the second time unit, and the pointing operation is used to determine the pointing object of the control instruction;
  • when the candidate operations to be executed are at least two candidate operations, the execution prediction information corresponding to the at least two candidate operations is displayed, wherein the execution prediction information corresponding to the at least two candidate operations is used to provide a selection reference for at least two manipulation instructions to be initiated by the simulation object in the second time unit, and the manipulation instructions in the at least two manipulation instructions correspond one-to-one to the candidate operations in the at least two candidate operations;
  • the execution prediction information corresponding to the movement operation to be executed is displayed, where the execution prediction information corresponding to the movement operation to be executed is used to provide a direction reference for the movement instruction to be initiated by the simulation object in the second time unit;
  • the execution prediction information corresponding to the attack operation to be executed is displayed, where the execution prediction information corresponding to the attack operation to be executed is used to provide a pointing reference for the attack instruction to be initiated by the simulation object in the second time unit;
  • the execution prediction information corresponding to the configuration operation to be executed is displayed, where the execution prediction information corresponding to the configuration operation to be executed is used to provide a pointing reference for the configuration instruction to be initiated by the simulation object in the second time unit, and the configuration operation is used to determine the pointing prop of the configuration instruction.
  • the list of targets to be attacked and the corresponding attack probabilities can be, but are not limited to being, calculated from the battle data of the current frame of the game; the AI will attack the target with the highest probability. In addition, in order to make the data intuitive and easy to understand, only the two most probable targets in the target list can be, but are not limited to being, displayed, where at least two pointing objects are recorded in the target list.
  • the candidate operations to be executed and the corresponding execution probabilities can be, but are not limited to being, calculated from the battle data of the current frame of the game; the AI will execute the candidate operation with the highest probability. In addition, in order to make the data intuitive and easy to understand, only the two candidate operations with the highest probability can be, but are not limited to being, displayed, where the candidate operations to be performed include at least two candidate operations.
  • the upcoming movement direction and the corresponding probability can be, but are not limited to being, calculated from the battle data of the current frame of the game; the AI will move in the direction with the highest probability. In addition, in order to make the data intuitive and easy to understand, only the two directions with the highest probability in the direction list can be, but are not limited to being, displayed, where the directions to be moved in include at least two movement directions.
  • the virtual props to be configured and their corresponding probabilities can be, but are not limited to being, calculated from the battle data of the current frame of the game; the AI will configure the virtual prop with the highest probability. In addition, in order to make the data intuitive and easy to understand, only the two virtual props with the highest probability can be, but are not limited to being, displayed, where the virtual props to be configured include at least two pointing props.
  • the execution prediction information corresponding to the candidate operation to be executed is displayed, including:
  • in order to improve the display accuracy of the execution prediction information, the viewing user can adjust the screen perspective of the target virtual game by switching the character perspectives of different virtual characters, and the display of the execution prediction information corresponding to different virtual characters can be adjusted accordingly.
  • the method also includes at least one of the following:
  • S1, display the basic game information of at least one simulation object, where the basic game information is the basic information of each simulation object in the at least one simulation object;
  • S2, display the real-time game information corresponding to the target virtual game in the first time unit, where the real-time game information is the real-time information generated when the target virtual game is running in the first time unit;
  • S3, display the game history information corresponding to the target virtual game in the first time unit, where the game history information is the historical information produced before the target virtual game is run in the first time unit;
  • S4, display the game prediction information corresponding to the target virtual game in the first time unit, where the game prediction information is the prediction information of the game result of at least one simulation object participating in the target virtual game;
  • the display of the (event) viewing interface is used as an example; in one possible implementation, the viewing interface consists of a game screen and a data module. After the game starts, the viewing user can switch the perspectives of different virtual characters by clicking on the logo corresponding to the virtual character.
  • the data module can be understood as the data displayed on the game viewing interface with reference to Figure 10, including, for example, basic game battle data, winning rate prediction, economic composition and damage proportion, Target probability distribution, movement probability distribution, mini map, etc.
  • the basic data can be presented with reference to, but is not limited to, the data presented during real-person battles; corresponding basic data is also extracted for presentation during AI battles, including game data, team data, virtual character basic data, etc.
  • the game data may include, but is not limited to, a battle screen (real-time game information) and the battle duration (game history information), and the battle status of multiple AIs in the target virtual game may be, but is not limited to being, displayed in real time through the battle screen.
  • the team data may include, but is not limited to, the name of the team to which the AI model belongs (basic game information), KDA (game history information), the number of tyrants defeated (game history information), economic composition (game history information), and damage proportion to virtual characters (game history information).
  • economic composition and damage proportion to virtual characters are used to help watching users understand the operating ideas of the two AI sides during the battle, and in turn reflect the different priorities of the participating users when training their AIs.
  • the economic composition can be, but is not limited to, used to display the total economy of the camp (excluding natural growth economy).
  • the total economy can also be divided into sources such as defeating virtual characters, defeating NPC characters (such as virtual soldiers, virtual monsters, etc.) and defeating NPC buildings (such as virtual defense towers, virtual crystal buildings, etc.), and the proportion of each source in the total economy is displayed on the battle interface.
  • the damage proportion of the virtual characters can also be displayed, such as showing the total damage caused to the enemy virtual characters and the proportion of the damage caused by each virtual character to the enemy virtual characters.
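The economy shares described above amount to dividing each source by the camp's total; the source names and figures below are invented for illustration:

```python
# Sketch: split a camp's total economy (excluding natural growth) into its
# sources and compute each source's percentage for display.
def economy_proportions(sources):
    total = sum(sources.values())
    return {name: round(100.0 * value / total, 1) for name, value in sources.items()}

sources = {"defeating virtual characters": 4200,
           "defeating NPC characters": 6300,
           "defeating NPC buildings": 1500}
shares = economy_proportions(sources)  # percentages summing to ~100
```

The damage-proportion display works the same way, with per-character damage in place of per-source economy.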
  • the data module may also include, but is not limited to, virtual character basic data, such as the virtual character's avatar, health value, summoner skills and status. Further based on the scene shown in Figure 15 and continuing with the example shown in Figure 16, the virtual character basic data 1602 is displayed, including the avatar of the virtual character, the name of the virtual character, the KDA of the virtual character, etc.; in addition, after clicking the avatar of a virtual character, the game screen can be, but is not limited to being, adjusted to that virtual character's perspective.
  • the data module may also include, but is not limited to, winning rate prediction (game prediction information), such as predicting and displaying the winning rate based on the real-time battle data of the two AI sides; further based on Figure 14, the winning rate is predicted based on the real-time battle data of the two AI sides, and the result information 1702 of the winning rate prediction is displayed, for example, the winning rate of AI 1 is 69% and the winning rate of AI 2 is 31%.
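The patent does not specify how the two winning rates are derived; one common choice, used here purely as an assumption, is a logistic mapping of a scalar advantage value so that the two displayed rates sum to 100%:

```python
import math

# Hypothetical sketch: map a scalar advantage value for AI 1 into a pair of
# winning rates that sum to 100%, as in the "69% vs 31%" display above.
def win_rates(advantage, scale=1.0):
    p = 1.0 / (1.0 + math.exp(-advantage * scale))
    return round(100 * p), round(100 * (1 - p))

ai1, ai2 = win_rates(0.8)  # an advantage of 0.8 yields roughly 69% vs 31%
```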
  • the basic game information of at least one simulation object is displayed, where the basic game information is the basic information of each simulation object in the at least one simulation object; the game real-time information corresponding to the target virtual game in the first time unit is displayed, where the game real-time information is the real-time information generated when the target virtual game is running in the first time unit; the game history information corresponding to the target virtual game in the first time unit is displayed, where the game history information is the historical information produced before the target virtual game is run in the first time unit; and the game prediction information corresponding to the target virtual game in the first time unit is displayed, where the game prediction information is the prediction information of the game result of at least one simulation object participating in the target virtual game, thereby achieving the purpose of displaying more comprehensive information and the technical effect of improving the comprehensiveness of information display.
  • the game prediction information corresponding to the target virtual game in the first time unit is displayed, including:
  • S2, use the game screen to obtain local game status information and overall game status information, where the local game status information is used to represent the game status, within the target virtual game, of each virtual character participating in the target virtual game, and the overall game status information is used to represent the game status of the target virtual game in the first time unit;
  • a supervised learning model that takes the current game state (the game screen) as input and the evaluation function value as output can be, but is not limited to being, used to process the game screen of the target virtual game during operation, so as to obtain the evaluation function value from which the game prediction information is derived;
  • the game prediction model can be, but is not limited to being, divided into two sub-structures as shown in Figure 18. Specifically, the input of the individual (Ind) part is the status of each individual in the current state (such as individual feature 1, individual feature 2, and individual feature 3 in the individual part 1802), which is processed using fully connected layers, and the output is each individual's contribution to the game situation (such as individual contribution 1, individual contribution 2, and individual contribution 3 in the contribution set 1806); the input of the global (Glo) part is the global state in the current state (such as the overall feature in the overall part 1804), which is processed using fully connected layers, and the output is the global state's contribution to the game situation (such as the overall contribution in the contribution set 1806); finally, the evaluation function value 1808 is predicted by integrating the outputs of the two sub-structures.
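The two-substructure model above can be sketched as follows; the weights, feature sizes, and function names are illustrative assumptions, with each fully connected branch reduced to a single linear unit:

```python
# Sketch of the two-branch value model: the individual (Ind) branch maps each
# individual's features to a scalar contribution, the global (Glo) branch maps
# the overall feature to a scalar contribution, and the evaluation function
# value integrates the two by summation.
def linear(x, weights, bias):
    """One fully connected unit producing a single scalar output."""
    return sum(w * v for w, v in zip(weights, x)) + bias

def evaluate(individual_features, overall_feature, ind_w, ind_b, glo_w, glo_b):
    # Individual branch: one contribution per individual (weights shared).
    individual_contributions = [linear(f, ind_w, ind_b) for f in individual_features]
    # Global branch: one contribution for the overall state.
    overall_contribution = linear(overall_feature, glo_w, glo_b)
    # Integrate the outputs of the two sub-structures.
    return sum(individual_contributions) + overall_contribution

ind_feats = [[1.0, 0.5], [0.2, 0.8], [0.6, 0.4]]  # individual features 1-3
glo_feat = [0.3, 0.7, 0.1]                        # overall feature
value = evaluate(ind_feats, glo_feat,
                 ind_w=[0.5, -0.2], ind_b=0.0,
                 glo_w=[0.1, 0.4, 0.2], glo_b=0.1)
```

In practice each branch would be a trained multi-layer network rather than one linear unit.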
  • game prediction information is obtained and displayed based on the evaluation function value, including:
  • the target virtual game is a virtual game in which simulation objects from at least two opposing camps participate, use the evaluation function value to determine the predicted remaining time for each of the at least two opposing camps to participate in the target virtual game;
  • the evaluation function value can be, but is not limited to being, used to evaluate the relative advantage between game teams. Assuming that the target virtual game is a game between team A and team B, the evaluation function value can be, but is not limited to being, used to reflect the advantage of team A over team B, or the advantage of team B over team A. For details, refer to the following formula (1) and formula (2):
  • DE = r^t × R    (1)
  • t = log(|DE|) / log(r)    (2)
  • where DE represents the discount evaluation value, whose absolute value is inversely related to the remaining time of the game, that is, the greater the advantage, the greater the possibility of winning, and the shorter the time until the game ends; R represents the reward (which can be understood as the game result, for example, victory is recorded as 1 and failure as -1); t represents the remaining time of the game; and r represents the discount factor describing the importance difference between future rewards and current rewards.
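Consistent with a discounted terminal reward, and assuming DE = r^t × R with |R| = 1 (an assumption; the discount factor and evaluation values below are illustrative), the predicted remaining time follows from solving for t:

```python
import math

# Sketch: recover the predicted remaining game time t from a discount
# evaluation value DE, assuming DE = r**t * R with win reward |R| = 1 and
# discount factor 0 < r < 1. A larger |DE| (bigger advantage) gives smaller t.
def remaining_time(de, r):
    return math.log(abs(de)) / math.log(r)

r = 0.99                           # assumed discount factor
t_strong = remaining_time(0.8, r)  # strong advantage: short remaining time
t_weak = remaining_time(0.2, r)    # weak advantage: long remaining time
```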
  • obtaining the game reference data corresponding to the running screen includes: based on the running screen, obtaining the image features of the running screen through the first network structure in the image recognition model, where the game reference data includes the image Features, the image recognition model is a neural network model trained using sample data and used to recognize images.
  • the running screen can be input to the first network structure in the image recognition model, and the first network structure is used to extract image features, thereby obtaining the image features of the running screen.
  • the first network structure may, but is not limited to, include an input layer, a convolution layer, a pooling layer, a fully connected layer, etc., wherein the convolution layer may, but is not limited to, consist of several convolution units, and the parameters of each convolution unit are optimized through the back-propagation algorithm.
  • the purpose of the convolution operation is to extract different features of the input.
  • the first convolution layer may only be able to extract some low-level features such as edges, lines and corners.
  • the pooling layer can be, but is not limited to being, placed after the convolution layer, and is also composed of multiple feature surfaces, each of which corresponds to a feature surface in the upper layer, without changing the number of feature surfaces;
  • in the fully connected layer, each node can, but is not limited to, be connected to all nodes in the previous layer, and the layer is used to summarize the features extracted earlier; due to its fully connected nature, the fully connected layer generally has the most parameters.
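The layer types listed above can be sketched with a minimal, dependency-free convolution and pooling pass; the kernel and image values below are illustrative, not taken from the model:

```python
def conv2d(image, kernel):
    """Valid 2-D convolution of a single-channel image with one kernel."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

def max_pool2x2(fmap):
    """2x2 max pooling; reduces resolution without changing the number of feature surfaces."""
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

# A vertical-edge kernel: as noted above, the first convolution layer
# typically extracts low-level features such as edges, lines and corners.
edge_kernel = [[1, -1], [1, -1]]
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
features = max_pool2x2(conv2d(image, edge_kernel))
```

The convolution responds strongly at the vertical edge in the middle of the image (the `-2` entries) and the pooling layer then downsamples the response map.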
  • the execution prediction information includes recognition results.
  • the image features of the running screen can be input to the second network structure of the image recognition model.
  • the second network structure is used to classify based on the image features extracted by the first network structure, thereby obtaining the recognition result.
  • the second network structure may, but is not limited to, include an output layer, wherein the activation function used by the output layer may, but is not limited to, include a Sigmoid function, a tanh function, etc., and is applied on top of the basic structure of a single neuron, which consists of a linear unit and a non-linear unit.
  • the linear unit consists of two parts: a weighted summation of the inputs and a bias term.
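A minimal sketch of this single-neuron structure, a linear unit followed by a Sigmoid or tanh non-linearity; the weights and bias are illustrative values:

```python
import math

def linear_unit(inputs, weights, bias):
    """Linear part of a neuron: weighted sum of the inputs plus a bias term."""
    return sum(x * w for x, w in zip(inputs, weights)) + bias

def sigmoid(z):
    """Sigmoid activation: squashes the linear output into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def tanh(z):
    """tanh activation: squashes the linear output into (-1, 1)."""
    return math.tanh(z)

z = linear_unit([0.5, -1.0], weights=[2.0, 1.0], bias=0.5)  # 1.0 - 1.0 + 0.5 = 0.5
```

Feeding `z` through either activation keeps the output in the bounded range the output layer expects.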
  • the image features of the running screen are obtained through the first network structure in the image recognition model, including:
  • S1 use the convolution layer in the first network structure to perform image recognition on the running picture, and obtain at least two picture features corresponding to the running picture;
  • S2 use the fully connected layer in the first network structure to perform feature concatenation on at least two picture features to obtain image features.
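Steps S1 and S2 above, picture features from the convolution layer followed by fully connected concatenation, can be sketched as follows; the feature values and FC weights are illustrative, not the trained parameters:

```python
def concat_features(feature_groups):
    """S2, part 1: flatten and concatenate several picture-feature maps into one vector."""
    merged = []
    for group in feature_groups:
        for row in group:
            merged.extend(row)
    return merged

def fully_connected(vector, weights, bias):
    """S2, part 2: a fully connected layer; every output node connects to all inputs."""
    return [sum(x * w for x, w in zip(vector, w_row)) + b
            for w_row, b in zip(weights, bias)]

picture_features = [[[1, 2], [3, 4]], [[5, 6]]]   # two feature maps, as from S1
flat = concat_features(picture_features)
state_code = fully_connected(flat,
                             weights=[[1, 0, 0, 0, 0, 0],
                                      [0, 0, 0, 0, 0, 1]],
                             bias=[0, 0])
```

The FC layer here simply projects the concatenated vector down to a fixed-size code, mirroring how the state code is produced from all feature encodings.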
  • the network uses convolution to perform feature encoding on image features, vector features and game status information, and then uses a fully connected layer (Full Connection, FC) to concatenate all feature codes to obtain the status coding.
  • the recognition result is obtained through the second network structure in the image recognition model, including:
  • the second network structure in the embodiment of the present application includes an attention mechanism layer and an output layer.
  • Image features can be mapped to the attention mechanism layer in the second network structure to obtain a mapping result; the mapping result is input to The output layer in the second network structure is used to obtain the recognition result.
  • the action control dependencies in the target virtual game are modeled through the actor-critic network. First, the network uses convolution to perform feature encoding on image features, vector features and game status information, and then uses a fully connected (FC) layer to concatenate all feature codes to obtain the state code. The state code is then mapped from the LSTM recurrent unit to the hLSTM (attention mechanism layer), and the hLSTM output is fed into an FC layer to predict the final action output, including movement operations, attack operations, skill release operations, pointed objects, etc. In addition, to assist the AI in making more correct choices in the battles of the target virtual game, a target attention mechanism is introduced into the network structure.
  • This mechanism uses the FC output of the hLSTM as the query and the stack of encodings of all units as the key to calculate the target attention, that is, the AI's attention to each target in the current game state. By visualizing the AI's attention to each target, the AI's decision-making in the current state can be understood more intuitively.
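The query/key computation of the target attention described above can be sketched as follows; the vector values are illustrative, while in the real model the query comes from the hLSTM's FC output and the keys from the stacked unit encodings:

```python
import math

def target_attention(query, keys):
    """Attention over candidate targets: softmax of the query.key scores.

    query: one vector (here, the FC output of the hLSTM state)
    keys:  stacked encodings of all candidate units
    """
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Three candidate targets; the second one aligns best with the query,
# so the AI's attention concentrates on it.
attn = target_attention(query=[1.0, 0.0],
                        keys=[[0.2, 0.9], [1.5, 0.1], [0.1, 0.3]])
```

Visualizing `attn` per target is exactly the kind of display the mechanism above enables for watching users.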
  • actor-critic is a deep reinforcement learning algorithm. It defines two networks, a policy network (Actor) and an evaluation network (Critic), which together form the actor-critic network. The Actor is mainly used to train the policy and find the optimal action, while the Critic is used to score actions so as to guide them toward the best action.
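As a hedged, minimal illustration of the actor-critic idea (not the network used in this application), the following sketch trains a two-action policy (Actor) with a scalar value estimate (Critic) on a toy one-state task; all hyperparameters are illustrative:

```python
import math
import random

def softmax(prefs):
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    s = sum(exps)
    return [e / s for e in exps]

random.seed(0)
prefs = [0.0, 0.0]      # Actor: action preferences (policy parameters)
value = 0.0             # Critic: estimated value of the single state
rewards = [0.0, 1.0]    # action 1 is the better action
alpha, beta = 0.1, 0.1  # Actor / Critic learning rates

for _ in range(2000):
    probs = softmax(prefs)
    action = random.choices([0, 1], weights=probs)[0]
    reward = rewards[action]
    td_error = reward - value              # Critic's score for the taken action
    value += beta * td_error               # Critic update
    for a in range(2):                     # Actor update (policy gradient)
        grad = (1.0 if a == action else 0.0) - probs[a]
        prefs[a] += alpha * td_error * grad
```

After training, the Actor prefers the higher-reward action (`prefs[1] > prefs[0]`): the Critic's score (the TD error) steers the policy, which is the division of labor described above.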
  • LSTM is a long short-term memory network (Long Short-Term Memory)
  • hLSTM is a heterogeneous long short-term memory network (heterogeneous Long Short-Term Memory).
  • this embodiment assumes that the above information processing method is applied in a battle scenario of a multiplayer online battle arena (MOBA) game involving AI. The overall process is shown in Figure 20:
  • the front-end loads the game screen and renders the corresponding data in the corresponding module
  • data extraction in real-time AI battles includes extracting economic, damage and other data during AI battles, displaying them visually, and predicting the winning rate based on the relevant data; it also includes extracting decision-making data (movement and target) during AI battles and then presenting it to watching users in a structured, clear and easy-to-understand manner.
  • the winning rate prediction can help watching users understand the game screen. Through changes in the winning rate, even watching users who are not familiar with the game can roughly judge which side is moving toward victory. At the same time, the dynamically changing predicted winning rate can also increase the dramatic tension of the game.
  • the current state represents the game situation of a specific time slice, including individual states and global states.
  • the individual status includes the level, economy, survival status, etc. of the team's virtual characters.
  • the global status includes troop lines, defense tower status, etc. The information used to represent the current game state can be found in game records, such as replay files.
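The individual and global state fields named above can be sketched as a simple data structure; all field names are illustrative, not the actual replay-file schema:

```python
from dataclasses import dataclass, field

@dataclass
class IndividualState:
    """Per-character state at one time slice (level, economy, survival)."""
    level: int
    economy: int
    alive: bool

@dataclass
class GlobalState:
    """Global state at one time slice (troop lines, defense towers)."""
    lane_progress: dict = field(default_factory=dict)
    tower_hp: dict = field(default_factory=dict)

@dataclass
class GameState:
    """The 'current state' of one time slice, as read from a game record."""
    time_slice: int
    individuals: list          # one IndividualState per virtual character
    overall: GlobalState

state = GameState(
    time_slice=120,
    individuals=[IndividualState(level=7, economy=4300, alive=True)],
    overall=GlobalState(lane_progress={"mid": 0.4}, tower_hp={"t1_mid": 850}),
)
```

A sequence of such states, one per time slice, is what the prediction model consumes.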
  • the content of the data is not limited to basic data, the target probability distribution, the movement probability distribution and the mini-map; data in more dimensions can be displayed according to the game type, and the data content is displayed in a visual form, such as line charts, heat maps, etc.
  • the terminal devices used for presentation are not limited to PCs; they can also be mobile devices, large-screen devices, etc.
  • the method of operation interaction is not limited to mouse and keyboard, but can also be gestures, voice control, etc.
  • the AI game battle is a competition between reinforcement learning models. Unlike a real-person game battle, it focuses more on the training ideas and algorithm optimization of the AI models and does not involve human factors such as emotions, moods and reactions.
  • By presenting the decision-making process and data of the AI model and displaying them in real time to watching users alongside the game screen, this application innovatively proposes a unique real-time presentation method for AI battles in MOBA games, making the AI interpretable and effectively improving the viewing experience of AI game battles.
  • an information processing device for implementing the above information processing method is also provided. As shown in Figure 21, the device includes:
  • the first display unit 2102 is used to display the running screen corresponding to the target virtual game in the first time unit, wherein the target virtual game is a virtual game in which at least one simulation object participates, and the simulation object is a virtual object driven by artificial intelligence and used to simulate controlling a virtual character participating in the target virtual game;
  • the acquisition unit 2104 is used to obtain the game reference data corresponding to the running screen, where the game reference data is the game data fed back when the virtual character participates in the target virtual game in the first time unit;
  • the second display unit 2106 is used to display, based on the game reference data, the execution prediction information corresponding to the candidate operation to be executed, wherein the candidate operation to be executed is an operation to be executed by the virtual character in the second time unit; the execution prediction information is used to provide, for the control instruction to be initiated, an auxiliary reference related to the game reference data; the control instruction to be initiated is an instruction to be initiated by the simulation object in the second time unit and used to control the virtual character to execute the candidate operation; and the second time unit is after the first time unit.
  • the running screen corresponding to the target virtual game in the first time unit can be displayed, wherein the first time unit can be the current time unit, and the target virtual game is a virtual game in which at least one simulation object participates.
  • the above-mentioned simulated objects are virtual objects driven by artificial intelligence and used to simulate and control virtual characters participating in the above-mentioned target virtual game.
  • execution prediction information corresponding to the candidate operation to be executed is displayed, where the candidate operation to be executed is an operation to be executed by the virtual character in the second time unit, and the execution prediction information is used to provide, for the control instruction to be initiated, an auxiliary reference related to the above game reference data; the control instruction to be initiated is an instruction to be initiated by the above simulation object in the above second time unit and used to control the above virtual character to execute the above candidate operation, and the above second time unit is after the above first time unit. That is to say, the execution prediction information can serve as a basis for deciding which candidate operation to perform.
  • the second display unit 2106 includes:
  • a first display module, configured to display first probability distribution information of at least two candidate operations to be performed, wherein the first probability distribution information is used to predict the probability that the virtual character performs each of the at least two candidate operations in the second time unit.
  • the second display unit 2106 includes:
  • the second display module is used to display second probability distribution information of the virtual character performing candidate operations on at least two pointed objects, wherein the second probability distribution information is used to predict the probability that the virtual character performs the candidate operation on each of the at least two pointed objects in the second time unit.
  • the first display unit 2102 includes: a third display module configured to display the running screen in the first interface area of the game viewing interface;
  • the second display unit 2106 includes: a fourth display module, configured to display execution prediction information in the second interface area of the game viewing interface.
  • the third display module includes: a first display sub-module, configured to display the main running screen corresponding to the target virtual game in the first time unit in the first sub-area of the first interface area, and to display the running sub-screen corresponding to the target virtual game in the first time unit in the second sub-area of the first interface area, wherein the main running screen is a real-time screen of the virtual scene of the target virtual game, and the running sub-screen is a thumbnail screen of the virtual scene;
  • the fourth display module includes: a second display sub-module, configured to display execution prediction information in the third sub-area in the second interface area;
  • the second display sub-module is also used to display execution prediction information on the running sub-screen.
  • a character position identifier of the virtual character is displayed on the thumbnail screen
  • the execution prediction information includes a movement direction identifier
  • the second display submodule includes:
  • the first display subunit is used to display the movement direction identifier at the position associated with the character position identifier in the running sub-screen, where the movement direction identifier is used to provide a direction reference for the movement instruction to be initiated by the simulation object in the second time unit, and the movement instruction is used to instruct the virtual character to move.
  • the character position identifier of the virtual character is displayed on the thumbnail screen, and the execution prediction information includes the operation trajectory identifier.
  • the device includes:
  • the second display subunit is used to, when the number of target candidate operations indicated by the operation trajectory identifiers reaches a preset threshold, highlight the operation trajectory identifiers associated with the target candidate operations at the position associated with the target character position identifier in the running sub-screen, where the operation trajectory identifier is used to provide a pointing reference for the manipulation instruction to be initiated by the simulation object in the second time unit, the target candidate operations are candidate operations that point to the same object, and the target character position identifier is the character position identifier corresponding to the virtual character that is to execute the candidate operation in the second time unit.
  • the second display unit 2106 includes at least one of the following:
  • the fifth display module is used to display, if the candidate operation to be executed is a pointing operation to be executed, the execution prediction information corresponding to the pointing operation, wherein this execution prediction information is used to provide a pointing reference for the control instruction to be initiated by the simulation object in the second time unit, and the pointing operation is used to determine the pointed object of the control instruction;
  • the sixth display module is configured to display, if the candidate operations to be executed are at least two candidate operations to be executed, the execution prediction information corresponding to the at least two candidate operations to be executed, wherein this execution prediction information is used to provide a selection reference for at least two manipulation instructions to be initiated by the simulation object in the second time unit, and the manipulation instructions among the at least two manipulation instructions correspond one-to-one to the candidate operations among the at least two candidate operations to be executed;
  • the seventh display module is used to display, if the candidate operation to be performed is a movement operation to be performed, the execution prediction information corresponding to the movement operation, wherein this execution prediction information is used to provide a direction reference for the movement instruction to be initiated by the simulation object in the second time unit;
  • the eighth display module is used to display, if the candidate operation to be executed is an attack operation to be executed, the execution prediction information corresponding to the attack operation, wherein this execution prediction information is used to provide a pointing reference for the attack instruction to be launched by the simulation object in the second time unit;
  • the ninth display module is used to display, if the candidate operation to be executed is a configuration operation to be executed, the execution prediction information corresponding to the configuration operation, wherein this execution prediction information is used to provide a pointing reference for the configuration instruction to be initiated by the simulation object in the second time unit, and the configuration operation is used to determine the pointed prop of the configuration instruction.
  • the second display unit 2106 includes:
  • a tenth display module configured to display execution prediction information corresponding to the first virtual character when the screen perspective of the target virtual game is the character perspective of the first virtual character;
  • the eleventh display module is configured to respond to an instruction to switch the screen perspective of the target virtual game, switch the screen perspective of the target virtual game to the character perspective of the second virtual character, and display execution prediction information corresponding to the second virtual character.
  • the device further includes at least one of the following:
  • the third display unit is configured to display the basic game information of the at least one simulation object in the process of displaying the running screen corresponding to the target virtual game in the first time unit, wherein the basic game information is the basic information of each simulation object in the at least one simulation object;
  • the fourth display unit is used to display the real-time game information of the target virtual game in the first time unit during the process of displaying the running screen corresponding to the target virtual game in the first time unit, wherein the real-time game information is the real-time information of the target virtual game in the first time unit;
  • the fifth display unit is used to display game history information corresponding to the target virtual game in the first time unit during the process of displaying the running screen corresponding to the target virtual game in the first time unit, wherein the game history information is the historical information produced by the target virtual game before the first time unit;
  • the sixth display unit is configured to display game prediction information corresponding to the target virtual game in the first time unit during the process of displaying the running screen corresponding to the target virtual game in the first time unit, wherein the game prediction information is prediction information of the outcome of the game in which the at least one simulation object participates in the target virtual game.
  • the sixth display unit includes:
  • the first acquisition module is used to acquire the game screen of the target virtual game during operation, where the game screen includes the running screen;
  • the second acquisition module is used to obtain local game status information and overall game status information using the game screen, where the local game status information is used to represent the game status, within the target virtual game, of each virtual character participating in the target virtual game, and the overall game status information is used to represent the game status of the target virtual game in the first time unit;
  • the first input module is used to obtain the first recognition result through the game prediction model based on the local game state information, and to obtain the second recognition result through the game prediction model based on the overall game state information, where the first recognition result is used to represent the contribution of each virtual character participating in the target virtual game to the game result, and the second recognition result is used to represent the contribution of the game state of the target virtual game in the first time unit to the game result;
  • a fitting module, used to fit the first recognition result and the second recognition result to obtain an evaluation function value, wherein the evaluation function value is used to evaluate, from the overall and local performance of the objects, the game progress of the at least one simulation object in the target virtual game;
  • the twelfth display module is used to obtain and display game prediction information based on the evaluation function value.
  • the twelfth display module includes:
  • the determination submodule is used to determine the predicted remaining time for each of the at least two opposing camps to participate in the target virtual game by using the evaluation function value if the target virtual game is a virtual game in which simulation objects from at least two opposing camps participate;
  • the acquisition submodule is used to obtain the predicted winning rate of each opposing camp for the target virtual game based on the predicted remaining time, where the predicted winning rate is inversely proportional to the predicted remaining time;
  • the third display submodule is used to display the predicted winning rate as game prediction information.
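A minimal sketch of the submodules above, assuming each camp's predicted winning rate is inversely proportional to its predicted remaining time and normalized across the opposing camps (the exact mapping is not specified in this text):

```python
def predicted_win_rates(remaining_times):
    """Winning rate per camp, assumed inversely proportional to its
    predicted remaining time: the camp expected to finish sooner is ahead."""
    inv = [1.0 / t for t in remaining_times]
    total = sum(inv)
    return [x / total for x in inv]

# Camp A is predicted to end the game in 60 s, camp B in 180 s,
# so camp A's displayed winning rate is higher.
rates = predicted_win_rates([60.0, 180.0])
```

The resulting `rates` sum to 1 and can be displayed directly as the game prediction information.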
  • the acquisition unit 2104 includes: a second input module, configured to obtain, based on the running screen, the image features of the running screen through the first network structure in the image recognition model, where the game reference data includes the image features, the image recognition model is a neural network model trained on sample data and used to recognize images, and the first network structure is used to extract image features;
  • the second display unit 2106 includes: a third input module, configured to obtain the recognition result through the second network structure in the image recognition model based on the image features of the running screen, where the execution prediction information includes the recognition result.
  • the second input module includes:
  • the recognition submodule is used to use the convolution layer in the first network structure to perform image recognition on the running picture and obtain at least two picture features corresponding to the running picture;
  • the concatenation submodule is used to use the fully connected layer in the first network structure to perform feature concatenation on at least two picture features to obtain image features.
  • concatenating submodules includes:
  • the mapping subunit is used to map image features to the attention mechanism layer in the second network structure to obtain the mapping result
  • the input subunit is used to obtain the recognition result through the output layer in the second network structure based on the mapping result.
  • an electronic device for implementing the above information processing method includes a memory 2202 and a processor 2204.
  • the memory 2202 stores a computer program;
  • the processor 2204 is configured to execute the steps in any of the above method embodiments through the computer program.
  • the above-mentioned electronic device may be located in at least one network device among multiple network devices of the computer network.
  • the above-mentioned processor can be configured to perform the following steps through a computer program:
  • S1 display the running screen corresponding to the target virtual game in the first time unit, wherein the target virtual game is a virtual game in which at least one simulation object participates, and the simulation object is a virtual object driven by artificial intelligence and used to simulate controlling a virtual character participating in the target virtual game;
  • the execution prediction information is used to provide, for the control instructions to be initiated, auxiliary references related to the game reference data; the control instructions to be initiated are instructions to be initiated by the simulation object in the second time unit and used to control the virtual character to perform the candidate operations; and the second time unit is after the first time unit.
  • the structure shown in Figure 22 is only illustrative, and the electronic device can also be a terminal device such as a smartphone (e.g. an Android phone, an iOS phone, etc.), a tablet computer, a handheld computer, a mobile Internet device (MID), a PAD, etc.
  • Figure 22 does not limit the structure of the above-mentioned electronic device.
  • the electronic device may also include more or fewer components (such as network interfaces, etc.) than shown in FIG. 22, or have a different configuration than shown in FIG. 22.
  • the memory 2202 can be used to store software programs and modules, such as program instructions/modules corresponding to the information processing method and device in the embodiment of the present application.
  • by running the software programs and modules stored in the memory 2202, the processor 2204 executes various functional applications and data processing, that is, implements the above information processing method.
  • Memory 2202 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • the memory 2202 may further include memory located remotely relative to the processor 2204, and these remote memories may be connected to the terminal through a network.
  • the above-mentioned networks include but are not limited to the Internet, intranets, local area networks, mobile communication networks and combinations thereof.
  • the memory 2202 may be specifically, but is not limited to, used to store information such as running pictures, game reference data, and execution prediction information.
  • the memory 2202 may include, but is not limited to, the first display unit 2102 , the acquisition unit 2104 and the second display unit 2106 in the information processing device.
  • it may also include but is not limited to other module units in the above information processing device, which will not be described again in this example.
  • the above-mentioned transmission device 2206 is used to receive or send data via a network.
  • Specific examples of the above-mentioned network may include wired networks and wireless networks.
  • the transmission device 2206 includes a network adapter (Network Interface Controller, NIC), which can be connected to other network devices and routers through network cables to communicate with the Internet or a local area network.
  • the transmission device 2206 is a radio frequency (Radio Frequency, RF) module, which is used to communicate with the Internet wirelessly.
  • the above-mentioned electronic device also includes: a display 2208 for displaying the above-mentioned running screen, game reference data, execution prediction information and other information; and a connection bus 2210 for connecting various module components in the above-mentioned electronic device.
  • the above-mentioned terminal device or server may be a node in a distributed system, wherein the distributed system may be a blockchain system, and the blockchain system may be a distributed system formed by multiple nodes connected through network communication.
  • nodes can form a peer-to-peer (Peer To Peer, referred to as P2P) network, and any form of computing equipment, such as servers, terminals and other electronic devices, can become a node in the blockchain system by joining the peer-to-peer network.
  • a computer program product includes a computer program containing program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network via the communications component, and/or installed from removable media.
  • when the computer program is executed, various functions provided by the embodiments of the present application are performed.
  • the computer system includes a central processing unit (Central Processing Unit, CPU), which can perform various appropriate actions and processes according to a program stored in a read-only memory (Read-Only Memory, ROM) or a program loaded from a storage part into a random access memory (Random Access Memory, RAM). The random access memory also stores various programs and data required for system operation.
  • the central processing unit, the read-only memory and the random access memory are connected to each other through a bus.
  • an input/output interface (I/O interface) is also connected to the bus.
  • the following components are connected to the input/output interface: an input part including a keyboard, a mouse, etc.; an output part including a cathode ray tube (Cathode Ray Tube, CRT), a liquid crystal display (LCD), etc., and a speaker, etc.; a storage part including a hard disk, etc.; and a communication part including a network interface card such as a LAN card, a modem, etc.
  • the communication section performs communication processing via a network such as the Internet.
  • Drivers are also connected to input/output interfaces as required.
  • Removable media such as magnetic disks, optical disks, magneto-optical disks, semiconductor memories, etc., are installed on the drive as needed so that a computer program read therefrom is installed into the storage section as needed.
  • the processes described in the respective method flow charts may be implemented as computer software programs.
  • embodiments of the present application include a computer program product including a computer program carried on a computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from the network via the communications component, and/or installed from removable media.
  • when the computer program is executed by the central processing unit, various functions defined in the system of the present application are performed.
  • a computer-readable storage medium is provided.
  • a processor of a computer device reads the computer program from the computer-readable storage medium, and the processor executes the computer program, causing the computer device to execute the methods provided in the above various optional implementations.
  • the program can be stored in a computer-readable storage medium.
  • the storage medium can include: flash disk, read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disk or optical disk, etc.
  • if the integrated units in the above embodiments are implemented in the form of software functional units and sold or used as independent products, they can be stored in the above computer-readable storage medium.
  • based on this understanding, the technical solution of the present application, in essence or the part contributing to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to cause one or more computer devices (which can be personal computers, servers, network devices, etc.) to execute all or part of the steps of the methods described in the various embodiments of this application.
  • the disclosed client can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical functional division; in actual implementation, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the coupling or direct coupling or communication connection between each other shown or discussed may be through some interfaces, and the indirect coupling or communication connection of the units or modules may be in electrical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or they may be distributed to multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application can be integrated into one processing unit, each unit can exist physically alone, or two or more units can be integrated into one unit.
  • the above integrated units can be implemented in the form of hardware or software functional units.


Abstract

一种信息处理方法,包括:显示目标虚拟游戏在第一时间单位对应的运行画面,其中,目标虚拟游戏为至少一个模拟对象参与的虚拟游戏,模拟对象为人工智能驱动的、用于模拟操控虚拟角色参与目标虚拟游戏的虚拟对象;获取运行画面对应的对局参考数据,其中,对局参考数据为虚拟角色在第一时间单位参与目标虚拟游戏时反馈出的对局数据;基于对局参考数据,显示虚拟角色在第二时间单位待执行的候选操作对应的执行预测信息。还提供了一种信息处理装置、计算机可读的存储介质、计算机程序产品及电子设备。解决了信息的显示不够全面的技术问题。

Description

信息处理方法、装置和存储介质及电子设备
本申请要求于2022年6月23日提交中国专利局、申请号202210719475.0、申请名称为“信息显示方法、装置和存储介质及电子设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及计算机领域,具体而言,涉及信息处理技术。
背景技术
随着人工智能的迅猛发展,将人工智能应用在虚拟游戏的场景中也成了一种趋势,在这种趋势下,可以在虚拟游戏运行过程中通过人工智能进行决策。然而,在人工智能参与的虚拟游戏运行过程中,存在信息的显示不够全面的问题,进而影响虚拟游戏的观看体验和人工智能的可解释性。
发明内容
本申请实施例提供了一种信息处理方法、装置和存储介质及电子设备,以至少解决信息的显示不够全面的技术问题。
根据本申请实施例的一个方面,提供了一种信息处理方法,该方法由电子设备执行,包括:显示目标虚拟游戏在第一时间单位对应的运行画面,其中,上述目标虚拟游戏为至少一个模拟对象参与的虚拟游戏,上述模拟对象为人工智能驱动的、用于模拟操控虚拟角色参与上述目标虚拟游戏的虚拟对象;获取上述运行画面对应的对局参考数据,其中,上述对局参考数据为上述虚拟角色在上述第一时间单位参与上述目标虚拟游戏时反馈出的对局数据;基于上述对局参考数据,显示待执行的候选操作对应的执行预测信息,其中,待执行的候选操作是有待于被上述虚拟角色在第二时间单位执行的操作,上述执行预测信息用于为待发起的操控指令提供与上述对局参考数据相关的辅助参考,上述待发起的操控指令为有待于被上述模拟对象在上述第二时间单位发起的,且用于操控上述虚拟角色执行上述候选操作的指令,上述第二时间单位在上述第一时间单位之后。
根据本申请实施例的另一方面,还提供了一种信息处理装置,上述装置部署在电子设备上,包括:第一显示单元,用于显示目标虚拟游戏在第一时间单位对应的运行画面,其中,上述目标虚拟游戏为至少一个模拟对象参与的虚拟游戏,上述模拟对象为人工智能驱动的、用于模拟操控虚拟角色参与上述目标虚拟游戏的虚拟对象;获取单元,用于获取上述运行画面对应的对局参考数据,其中,上述对局参考数据为上述虚拟角色在上述第一时间单位参与上述目标虚拟游戏时反馈出的对局数据;第二显示单元,用于基于上述对局参考数据,显示待执行的候选操作对应的执行预测信息,其中,待执行的候选操作是有待于被上述虚拟角色在第二时间单位执行的操作,上述执行预测信息用于为待发起的操控指令提供与上述对局参考数据相关的辅助参考,上述待发起的操控指令为有待于被上述模拟对象在上述第二时间单位发起的,且用于操控上述虚拟角色执行上述候选操作的指令,上述第二时间单位在上述第一时间单位之后。
根据本申请实施例的又一个方面,提供一种计算机可读存储介质,计算机可读的存储介质包括存储的计算机程序,其中,该计算机程序可被电子设备运行时执行如以上信息处理方法。
根据本申请实施例的又一个方面,提供一种计算机程序产品,该计算机程序产品包括计算机程序,该计算机程序存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机程序,处理器执行该计算机程序,使得该计算机设备执行如以上信息处理方法。
根据本申请实施例的又一方面,还提供了一种电子设备,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,其中,上述处理器通过计算机程序执行上述的信息处理方法。
在本申请实施例中,在目标虚拟游戏的运行过程中,可以显示目标虚拟游戏在第一时间单位对应的运行画面,其中,第一时间单位可以是当前时间单位,上述目标虚拟游戏为至少一个模拟对象参与的虚拟游戏,上述模拟对象为人工智能驱动的、用于模拟操控虚拟角色参与上述目标虚拟游戏的虚拟对象。通过对上述运行画面的检测,以获取上述运行画面对应的对局参考数据,其中,上述对局参考数据为上述虚拟角色在上述第一时间单位参与上述目标虚拟游戏时反馈出的对局数据。接着,基于上述对局参考数据,显示待执行的候选操作对应的执行预测信息,其中,待执行的候选操作是有待于被上述虚拟角色在第二时间单位执行的操作,上述执行预测信息用于为待发起的操控指令提供与上述对局参考数据相关的辅助参考,上述待发起的操控指令为有待于被上述模拟对象在上述第二时间单位发起的,且用于操控上述虚拟角色执行上述候选操作的指令,上述第二时间单位在上述第一时间单位之后。也就是说,执行预测信息可以作为决定执行哪个候选操作的依据,通过对执行预测信息的显示,便于用户理解通过人工智能决策出对应候选操作的原因,帮助观众快速了解人工智能的决策思路,达到了将人工智能参与虚拟游戏的决策过程进行直观显示的目的,从而实现了提高信息的显示全面度的技术效果,进而解决了信息的显示不够全面的技术问题。相应的,提高了虚拟游戏的观看体验和人工智能的可解释性。
附图说明
此处所说明的附图用来提供对本申请的进一步理解,构成本申请的一部分,本申请的示意性实施例及其说明用于解释本申请,并不构成对本申请的不当限定。在附图中:
图1是根据本申请实施例的一种信息处理方法的应用环境的示意图;
图2是根据本申请实施例的一种信息处理方法的流程的示意图;
图3是根据本申请实施例的一种信息处理方法的示意图;
图4是根据本申请实施例的另一种信息处理方法的示意图;
图5是根据本申请实施例的另一种信息处理方法的示意图;
图6是根据本申请实施例的另一种信息处理方法的示意图;
图7是根据本申请实施例的另一种信息处理方法的示意图;
图8是根据本申请实施例的另一种信息处理方法的示意图;
图9是根据本申请实施例的另一种信息处理方法的示意图;
图10是根据本申请实施例的另一种信息处理方法的示意图;
图11是根据本申请实施例的另一种信息处理方法的示意图;
图12是根据本申请实施例的另一种信息处理方法的示意图;
图13是根据本申请实施例的另一种信息处理方法的示意图;
图14是根据本申请实施例的另一种信息处理方法的示意图;
图15是根据本申请实施例的另一种信息处理方法的示意图;
图16是根据本申请实施例的另一种信息处理方法的示意图;
图17是根据本申请实施例的另一种信息处理方法的示意图;
图18是根据本申请实施例的另一种信息处理方法的示意图;
图19是根据本申请实施例的另一种信息处理方法的示意图;
图20是根据本申请实施例的另一种信息处理方法的示意图;
图21是根据本申请实施例的一种信息处理装置的示意图;
图22是根据本申请实施例的一种电子设备的结构示意图。
具体实施方式
为了使本技术领域的人员更好地理解本申请方案,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分的实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都应当属于本申请保护的范围。
需要说明的是,本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本申请的实施例能够以除了在这里图示或描述的那些以外的顺序实施。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。
本申请实施例将人工智能(Artificial Intelligence,AI)应用在虚拟游戏的场景,在虚拟游戏运行过程中利用人工智能进行决策,决策出模拟对象在下一时间单位将要操控虚拟角色执行的操作。
本申请实施例提供的方案涉及人工智能的计算机视觉技术、机器学习等技术,具体通过如下实施例进行说明:
根据本申请实施例的一个方面，提供了一种信息处理方法，在一种可能的实现方式中，上述信息处理方法可以但不限于应用于如图1所示的环境中。其中，可以但不限于包括用户设备102以及服务器112，该用户设备102上可以但不限于包括显示器108、处理器106及存储器104，该服务器112包括数据库114以及处理引擎116。
具体过程可如下步骤:
S102,用户设备102获取目标虚拟游戏在第一时间单位对应的运行画面1002;
S104-S106,通过网络110将运行画面1002对应的画面数据发送至服务器112;
S108-S110，服务器112从数据库114中获取运行画面1002对应的对局参考数据；再者，服务器112通过处理引擎116基于对局参考数据获取待执行的候选操作对应的执行预测信息；
S112-S114，通过网络110将执行预测信息发送至用户设备102，用户设备102通过处理器106将执行预测信息显示在显示器108，并将上述执行预测信息存储在存储器104。
除图1示出的示例之外，上述步骤也可以由用户设备102独立完成，即由用户设备102执行对局参考数据的获取、执行预测信息的获取等步骤，从而减轻服务器的处理压力。该用户设备102包括但不限于手持设备(如手机)、笔记本电脑、台式电脑、车载设备等，本申请并不限制用户设备102的具体实现方式。在一种可能的实现方式中，如图2所示，信息处理方法包括：
S202,显示目标虚拟游戏在第一时间单位对应的运行画面,其中,目标虚拟游戏为至少一个模拟对象参与的虚拟游戏,模拟对象为人工智能驱动的、用于模拟操控虚拟角色参与目标虚拟游戏的虚拟对象;
S204,获取运行画面对应的对局参考数据,其中,对局参考数据为虚拟角色在第一时间单位参与目标虚拟游戏时反馈出的对局数据;
S206,基于对局参考数据,显示待执行的候选操作对应的执行预测信息,其中,待执行的候选操作是有待于被虚拟角色在第二时间单位执行的操作,执行预测信息用于为待发起的操控指令提供与对局参考数据相关的辅助参考,待发起的操控指令为有待于被模拟对象在第二时间单位发起的,且用于操控虚拟角色执行候选操作的指令,第二时间单位在第一时间单位之后。
在一种可能的实现方式中,在本实施例中,上述信息处理方法可以但不限于应用在人工智能参与的虚拟游戏场景中,如AI对战AI的虚拟游戏(目标虚拟游戏)、AI对战真人的虚拟游戏(目标虚拟游戏)等;进一步以AI对战真人的虚拟游戏为例说明,人机对战过程中,除了显示AI在对战时的决策数据外,还可以清晰易懂的方式显示一些影响决策的关键数据,如AI的神经网络的运行过程和回报等,可以更好地帮助参与对战的用户或观战的用户充分学习到AI的决策方式。
再者,以AI对战AI的虚拟游戏为例说明,假设虚拟游戏被分为两个对立阵营,则每个阵营的AI都将利用计算机视觉、机器学习等技术自主执行虚拟游戏中的游戏任务,争以取得虚拟游戏的最终胜利;而AI在发起操作指令之前,需利用计算机视觉采集虚拟游戏中的游戏状态等信息,并利用机器学习对执行何种操作指令进行决策,进一步将上述决策的过程信息用通俗易懂的方式呈现在观战界面,结合游戏对战画面帮助观战用户充分了解到AI的决策方式,增加AI对战的可观赏性,并使AI具有可解释性。
在一种可能的实现方式中，在本实施例中，时间单位可以为预设时长范围的时间段，目标虚拟游戏在该时间段内，可以但不限于包括至少一帧的运行画面。本申请实施例对预设时长范围不做限定，预设时长范围例如可以是1秒、1分钟、1小时、5秒、10秒、2分钟等等，可以根据实际需求进行设置。可以理解的是，预设时长范围越小，时间单位所表示的时间段越短，越接近一个时刻。在目标虚拟游戏中，为了尽可能实时对每一帧运行画面采用本申请实施例提供的方法进行信息处理，时间单位可以是包括一帧运行画面的时间段。进一步假设目标虚拟游戏在一个时间单位内包括一帧运行画面，则第一时间单位对应的运行画面可以但不限于理解为目标虚拟游戏的当前帧运行画面，第二时间单位对应的运行画面可以但不限于理解为目标虚拟游戏的下一帧运行画面。
在一种可能的实现方式中,在本实施例中,运行画面可以但不限于理解为目标虚拟游戏的虚拟场景中的游戏画面。此外,为提高对局参考数据的获取效率,可以但不限于先获取目标虚拟游戏的虚拟场景中的全部游戏画面,再从上述全部游戏画面中筛选出与模拟对象操控虚拟角色关联的部分游戏画面,并将该部分游戏画面确定为上述运行画面,如此再对已筛选过的游戏画面进行高效的图像识别,节省了获取对局参考数据的时长,提高了对局参考数据的获取效率。
在一种可能的实现方式中,在本实施例中,获取运行画面对应的对局参考数据的过程,可以但不限于利用计算机视觉对运行画面进行识别、采集和测量等机器视觉,并进一步做图形处理,可以但不限于涉及图像处理、图像识别、图像语义理解、图像检索、OCR、视频处理、视频语义理解、视频内容/行为识别、三维物体重建、3D技术、虚拟现实、增强现实、同步定位与地图构建等技术。
在一种可能的实现方式中,在本实施例中,模拟对象为人工智能驱动的、用于模拟操控虚拟角色参与目标虚拟游戏的虚拟对象,或可理解为利用数字计算机或者数字计算机控制的机器模拟、延伸和扩展人的智能,感知环境、获取知识并使用知识指示参与目标虚拟游戏的虚拟角色执行最理想操作的虚拟对象。
在一种可能的实现方式中,在本实施例中,对局参考数据为虚拟角色在第一时间单位参与目标虚拟游戏时反馈出的对局数据,如游戏状态数据、游戏资源数据等。其中,游戏状态数据可以但不限用于表示参与目标虚拟游戏的虚拟角色的个体状态、和/或参与目标虚拟游戏的多个虚拟角色的局部状态、和/或表示参与目标虚拟游戏的各个阵营的整体状态;游戏资源数据可以但不限用于表示目标虚拟游戏的虚拟资源的被持有状态、未持有状态、分布状态等,如已被虚拟角色获得的虚拟资源所处的状态(虚拟资源的被持有状态)、未被虚拟角色获得的虚拟资源所处的状态(未持有状态)、虚拟资源在目标虚拟游戏的虚拟场景中的分布情况(分布状态)等。
在一种可能的实现方式中，在本实施例中，虚拟角色在第二时间单位待执行的候选操作可以但不限于理解为虚拟角色在当前时间单位(第一时间单位)还未执行、但有可能在下一时间单位(第二时间单位)执行的待选操作，如图3中的(a)所示，模拟对象302操控的虚拟角色304有可能在下一时间单位(第二时间单位)待执行的候选操作包括操作A、操作B以及操作C，进而在获取到执行预测信息306的情况下，模拟对象302将在操作A、操作B以及操作C中决策出下一时间单位(第二时间单位)执行的操作。如图3中的(b)所示，模拟对象302决策操作A，并发起操作A对应的操控指令，以指示虚拟角色304执行操作A；其中，执行预测信息306可以但不限于为基于运行画面308-1对应的对局参考数据获取到的预测信息，运行画面308-1可以但不限于为目标虚拟游戏在第一时间单位对应的运行画面，运行画面308-2可以但不限于为目标虚拟游戏在第二时间单位对应的运行画面。
在一种可能的实现方式中,在本实施例中,操控指令为模拟对象操控虚拟角色执行候选操作时所发起的指令,虚拟角色参与目标虚拟游戏的方式可以但不限于包括虚拟角色执行目标操作,而虚拟角色执行目标操作可以但不限于是响应于模拟对象发起的操控指令;或者说,模拟对象参与目标虚拟游戏的方式可以但不限于模拟对象发起操控指令以操控虚拟角色执行目标操作。
在一种可能的实现方式中,在本实施例中,在显示目标虚拟游戏在第一时间单位对应的运行画面的过程中,可以但不限于显示更多样的游戏信息,如参与目标虚拟游戏的至少一个模拟对象的基本信息(如模拟对象的名称、模拟对象的历史战绩等)、目标虚拟游戏的进程实况信息(如虚拟角色当前所持的虚拟资源、虚拟角色当前所配置的道具、虚拟角色当前的对战信息等)、目标虚拟游戏的进程预测信息(如目标虚拟游戏的对战结果的预测信息、目标虚拟游戏的进程发展的预测信息等)。
在一种可能的实现方式中,在本实施例中,执行预测信息的显示方式可以但不限于与候选操作、虚拟角色等信息相关,如候选操作为移动操作时,执行预测信息可以但不限于以各个方向的选中优先级的方式进行显示,如图4所示,执行预测信息402包括移动操作可执行的各个方向,且每个方向的选中优先级可以但不限于以长度的方式进行体现,如长度的长短与选中优先级呈正向关系,即越长的方向表示该方向的选中优先级越高,或可理解为最长的方向是移动操作最有可能移动的方向。
再如候选操作为攻击操作时,执行预测信息可以但不限于以各个目标对象的选中优先级的方式进行显示,进一步如图5所示,执行预测信息502包括攻击操作可选中执行的各个目标对象,且每个目标对象的选中优先级可以但不限于以阴影的方式进行体现,如阴影的显示面积与选中优先级呈正向关系,即阴影的显示面积越大表示该目标对象的选中优先级越高;具体的,如目标对象B的阴影显示面积大于目标对象A的阴影显示面积、以及目标对象C的阴影显示面积,进而目标对象B的选中优先级最高,或理解为执行攻击操作时对目标对象B执行的概率最大。
需要说明的是,在目标虚拟游戏的运行过程中,通过对当前时间单位的运行画面的检测,以获取当前时间单位的对局参考数据,再将对局参考数据计算执行预测信息的决策过程进行显示,进而将人工智能参与虚拟游戏的决策过程进行直观显示,提高了信息的显示全面度。
进一步举例说明,在一种可能的实现方式中例如图6所示,显示目标虚拟游戏在第一时间单位对应的运行画面604,如图6中的(a)所示,其中,目标虚拟游戏为至少一个模拟对象参与的虚拟游戏,模拟对象为人工智能驱动的、用于模拟操控虚拟角色602参与目标虚拟游戏的虚拟对象;
再如图6中的(b)所示，获取运行画面604对应的对局参考数据606，其中，对局参考数据606为虚拟角色602在第一时间单位参与目标虚拟游戏时反馈出的对局数据；基于对局参考数据606，显示虚拟角色602在第二时间单位待执行的候选操作(如操作A、操作B、以及操作C)对应的执行预测信息608，其中，执行预测信息608用于为模拟对象在第二时间单位待发起的操控指令提供与对局参考数据606相关的辅助参考，操控指令为模拟对象操控虚拟角色602执行候选操作时所发起的指令，第二时间单位在第一时间单位之后；
此外,模拟对象根据执行预测信息608进行决策,如在第二时间单位发起操作A对应的操控指令,以操控虚拟角色602执行操作A(对敌方角色进行攻击),具体如图6中的(c)所示;以及,可以但不限于再利用目标虚拟游戏在第二时间单位对应的运行画面,获取最新的对局参考数据,并基于该最新的对局参考数据获取第二时间单位下一时间单位的执行预测信息,原理相似,在此不做冗余阐述。
通过本申请提供的实施例,在目标虚拟游戏的运行过程中,可以显示目标虚拟游戏在第一时间单位对应的运行画面,其中,第一时间单位可以是当前时间单位,上述目标虚拟游戏为至少一个模拟对象参与的虚拟游戏,上述模拟对象为人工智能驱动的、用于模拟操控虚拟角色参与上述目标虚拟游戏的虚拟对象。通过对上述运行画面的检测,以获取上述运行画面对应的对局参考数据,其中,上述对局参考数据为上述虚拟角色在上述第一时间单位参与上述目标虚拟游戏时反馈出的对局数据。接着,基于上述对局参考数据,显示待执行的候选操作对应的执行预测信息,其中,待执行的候选操作是有待于被上述虚拟角色在第二时间单位执行的操作,上述执行预测信息用于为待发起的操控指令提供与上述对局参考数据相关的辅助参考,上述待发起的操控指令为有待于被上述模拟对象在上述第二时间单位发起的,且用于操控上述虚拟角色执行上述候选操作的指令,上述第二时间单位在上述第一时间单位之后。也就是说,执行预测信息可以作为决定执行哪个候选操作的依据,通过对执行预测信息的显示,便于用户理解通过人工智能决策出对应候选操作的原因,帮助观众快速了解人工智能的决策思路,达到了将人工智能参与虚拟游戏的决策过程进行直观显示的目的,从而实现了提高信息的显示全面度的技术效果,进而解决了信息的显示不够全面的技术问题。相应的,提高了虚拟游戏的观看体验和人工智能的可解释性。
在一种可能的实现方式中,显示待执行的候选操作对应的执行预测信息,包括:
显示待执行的至少两个候选操作的第一概率分布信息,其中,第一概率分布信息用于预测虚拟角色在第二时间单位执行至少两个候选操作中各个候选操作的概率。
在一种可能的实现方式中,在本实施例中,第一概率分布信息可以但不限于显示在预测信息列表中,其中,预测信息列表可以但不限于配置有各个虚拟角色所关联的各类候选操作的概率分布信息。
在一种可能的实现方式中,在本实施例中,为提高显示效率,在待显示的候选操作的数量大于第一数量的情况下,优先显示概率较大的、第一数量的候选操作,如候选操作1的概率70%、候选操作2的概率50%、候选操作3的概率20%,则优先显示候选操作1以及候选操作2。
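上述“优先显示概率较大的、第一数量的候选操作”的筛选逻辑，可以用如下示意性Python片段表达（其中的函数名、候选操作名称与概率数值均为本文之外的假设示例，并非本申请的限定实现）：

```python
def top_k_operations(op_probs, k):
    """按概率从高到低排序，返回优先显示的前 k 个候选操作。

    op_probs: 候选操作名称 -> 执行概率 的字典（假设数据）。
    """
    return sorted(op_probs.items(), key=lambda kv: kv[1], reverse=True)[:k]

# 假设数据：候选操作1概率70%、候选操作2概率50%、候选操作3概率20%
op_probs = {"候选操作1": 0.70, "候选操作2": 0.50, "候选操作3": 0.20}
print(top_k_operations(op_probs, 2))  # 优先显示候选操作1以及候选操作2
```

该片段仅示意按概率排序后截取前若干项的显示筛选思路，实际显示数量(第一数量)可按界面空间配置。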
进一步举例说明,在一种可能的实现方式中例如图7所示,显示目标虚拟游戏在第一时间单位对应的运行画面702以及预测信息列表704,并在预测信息列表704上显示虚拟角色(如虚拟角色A、虚拟角色B、以及虚拟角色C)待执行的至少两个候选操作的第一 概率分布信息,其中,第一概率分布信息用于预测虚拟角色在第二时间单位执行至少两个候选操作中的各个候选操作的概率;具体的,显示虚拟角色A、虚拟角色B、以及虚拟角色C关联的移动操作(如第一方向的移动操作、第二方向的移动操作等)的概率,以虚拟角色A为例说明,第一方向的移动操作的概率为“44.7%”、第二方向的移动操作的概率为“16.5%”等;
再者，本实施例还可基于图7所示场景，再如图8所示，显示虚拟角色A、虚拟角色B、以及虚拟角色C关联的技能释放操作(如第一技能的释放操作、第二技能的释放操作等)的概率，以虚拟角色A为例说明，A1技能的释放操作的概率为“54.7%”、A2技能的释放操作的概率为“16.5%”、A3技能的释放操作的概率为“24.7%”、A4技能的释放操作的概率为“12.57%”等；
以及,本实施例还可显示虚拟角色关联的道具配置操作(如道具1的配置操作、道具2的配置操作等)的概率,其中,道具配置操作可以但不限于包括替换、拆卸、安装、购买、售卖、存入第一虚拟容器、从第二虚拟容器中取出等。
通过本申请提供的实施例,显示待执行的至少两个候选操作的第一概率分布信息,其中,第一概率分布信息用于预测虚拟角色在第二时间单位执行至少两个候选操作中的各个候选操作的概率,进而达到了利用概率分布直观显示信息的目的,从而实现了提高信息的显示直观度的技术效果。
在一种可能的实现方式中,显示待执行的候选操作对应的执行预测信息,包括:
显示虚拟角色对至少两个指向对象执行候选操作的第二概率分布信息,其中,第二概率分布信息用于预测虚拟角色在第二时间单位对至少两个指向对象中各个指向对象执行候选操作的概率。
在一种可能的实现方式中,在本实施例中,第二概率分布信息可以但不限于显示在预测信息列表中,其中,预测信息列表可以但不限于配置有各个虚拟角色所关联的各个指向对象的概率分布信息。
在一种可能的实现方式中，在本实施例中，为提高显示效率，在待显示的指向对象的数量大于第二数量的情况下，优先显示概率较大的、第二数量的指向对象，如指向对象1的概率70%、指向对象2的概率50%、指向对象3的概率20%，则优先显示指向对象1以及指向对象2。进一步举例说明，在一种可能的实现方式中基于图7所示场景，继续例如图9所示，显示目标虚拟游戏在第一时间单位对应的运行画面702以及预测信息列表704，并在预测信息列表704上显示虚拟角色(如虚拟角色A、虚拟角色B、以及虚拟角色C)对至少两个指向对象执行候选操作的第二概率分布信息，其中，第二概率分布信息用于预测虚拟角色在第二时间单位对至少两个指向对象中的各个指向对象执行候选操作的概率；具体的，显示虚拟角色A、虚拟角色B、以及虚拟角色C关联的指向对象的概率，如虚拟角色A关联的指向对象B(虚拟角色B)、指向对象C(虚拟角色C)、虚拟角色B关联的指向对象A(虚拟角色A)、指向对象C(虚拟角色C)、虚拟角色C关联的指向对象A(虚拟角色A)、指向对象B(虚拟角色B)，以虚拟角色A为例说明，攻击操作对指向对象B(虚拟角色B)的执行概率为“54.7%”、攻击操作对指向对象C(虚拟角色C)的执行概率为“16.5%”。
通过本申请提供的实施例,显示虚拟角色对至少两个指向对象执行候选操作的第二概率分布信息,其中,第二概率分布信息用于预测虚拟角色在第二时间单位对至少两个指向对象中的各个指向对象执行候选操作的概率,进而达到了利用概率分布直观显示信息的目的,从而实现了提高信息的显示直观度的技术效果。
在一种可能的实现方式中,显示目标虚拟游戏在第一时间单位对应的运行画面,包括:在观战界面中的第一界面区域内显示运行画面;
在一种可能的实现方式中,显示待执行的候选操作对应的执行预测信息,包括:在观战界面中的第二界面区域内显示执行预测信息。
需要说明的是,在观战界面中的第一界面区域内显示运行画面;在观战界面中的第二界面区域内显示执行预测信息。
进一步举例说明,在一种可能的实现方式中例如图10所示,在观战界面1002中的第一界面区域(观战界面1002的中部区域)内显示运行画面,以及在观战界面1002中的第二界面区域内显示执行预测信息(如阵营A的目标概率分布、阵营A的走位概率分布、阵营B的目标概率分布、阵营B的走位概率分布等);此外,观战界面1002中还显示有基础游戏对战数据、胜率预测、阵营A的经济组成、阵营A的伤害占比、阵营B的经济组成、阵营B的伤害占比、小地图等内容。
通过本申请提供的实施例,在观战界面中的第一界面区域内显示运行画面;在观战界面中的第二界面区域内显示执行预测信息,进而达到了在观战界面中显示更加全面的信息的目的,从而实现了提高信息的显示全面度的技术效果。
在一种可能的实现方式中,在观战界面中的第一界面区域内显示运行画面,包括:在第一界面区域中的第一子区域内显示目标虚拟游戏在第一时间单位对应的运行主画面、以及在第一界面区域中的第二子区域内显示目标虚拟游戏在第一时间单位对应的运行子画面,其中,运行主画面为目标虚拟游戏的虚拟场景内的实时画面,运行子画面为虚拟场景的缩略画面;
在一种可能的实现方式中,还可以在运行子画面显示执行预测信息;在观战界面中的第二界面区域内显示执行预测信息,包括:在第二界面区域中的第三子区域内显示执行预测信息。
在一种可能的实现方式中，在本实施例中，运行画面与执行预测信息可以但不限于显示在相同或不同的界面区域内，或者说，在观战界面中的第一界面区域内显示运行画面、以及在观战界面中的第二界面区域内显示执行预测信息，但并不局限于运行画面与执行预测信息只能显示在不同的界面区域内，还可以但不限于在同一界面区域内进行显示。
在一种可能的实现方式中,在本实施例中,在AI对战时,可以但不限于在小地图上展示每个虚拟角色的实时位置,在每个虚拟角色的头像上,通过箭头来表示虚拟角色走位概率分布中概率最高的方向;此外,还可以但不限于支持同时展示概率最高的前两个方向,来帮助观战用户在小地图(运行子画面)上快速了解到虚拟角色的决策信息,更好的理解 AI的决策思路。再者,当某一方有两名及以上虚拟角色的第一概率目标为敌方同一虚拟角色时,可以但不限于将该事件判定为集火,并将该事件在小地图上进行展示,使得用户可以直观了解到AI的意图。
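上述“当某一方有两名及以上虚拟角色的第一概率目标为敌方同一虚拟角色时判定为集火”的规则，可以用如下示意性Python片段表达（函数名、阈值与角色数据均为假设，仅示意判定思路）：

```python
from collections import Counter

def detect_focus_fire(top_targets, threshold=2):
    """集火判定示意。

    top_targets: 己方每个虚拟角色 -> 其第一概率(概率最高)攻击目标。
    返回被至少 threshold 名虚拟角色同时选为第一概率目标的敌方角色列表。
    """
    counts = Counter(top_targets.values())
    return [target for target, n in counts.items() if n >= threshold]

# 假设数据：角色A与角色B的第一概率目标均为敌方角色X，可判定为对X集火
print(detect_focus_fire({"角色A": "敌方X", "角色B": "敌方X", "角色C": "敌方Y"}))
```

判定出的集火目标可进一步在小地图(运行子画面)上以操作轨迹标识等方式突出展示。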
进一步举例说明,在一种可能的实现方式中例如图11所示,在观战界面1102中显示目标虚拟游戏在第一时间单位对应的运行主画面1104、以及在观战界面1102中显示目标虚拟游戏在第一时间单位对应的运行子画面1106,其中,运行主画面1104为目标虚拟游戏的虚拟场景内的实时画面,运行子画面1106为虚拟场景的缩略画面,缩略画面上显示有虚拟角色的角色位置标识;此外,还可以但不限于在运行子画面1106、以及观战界面1102中显示执行预测信息。
通过本申请提供的实施例,在第一界面区域中的第一子区域内显示目标虚拟游戏在第一时间单位对应的运行主画面、以及在第一界面区域中的第二子区域内显示目标虚拟游戏在第一时间单位对应的运行子画面,其中,运行主画面为目标虚拟游戏的虚拟场景内的实时画面,运行子画面为虚拟场景的缩略画面;在运行子画面显示执行预测信息、以及第二界面区域中的第三子区域内显示执行预测信息,进而达到了将信息高效地显示在观战界面的目的,从而实现了提高信息的显示效率的技术效果。
在一种可能的实现方式中,当缩略画面上显示有虚拟角色的角色位置标识,执行预测信息包括移动方向标识时,在运行子画面显示执行预测信息,包括:
在运行子画面中角色位置标识的关联位置处显示移动方向标识,其中,移动方向标识用于为模拟对象在第二时间单位待发起的移动指令提供方向参考,移动指令用于指示操控虚拟角色进行移动。
在一种可能的实现方式中,在本实施例中,在角色位置标识的关联位置处显示移动方向标识可以但不限于理解为将执行预测信息以结合角色位置标识的方式显示在运行子画面所在的第二子区域内。
进一步举例说明,在一种可能的实现方式中基于图11所示,继续例如图12所示,在运行子画面1106上显示执行预测信息,具体的在角色位置标识1202的关联位置处显示移动方向标识1204,其中,移动方向标识1204用于为模拟对象在第二时间单位待发起的移动指令提供方向参考,移动指令用于指示操控虚拟角色进行移动。
通过本申请提供的实施例,在运行子画面中角色位置标识的关联位置处显示移动方向标识,进而达到了利用运行子画面的信息便捷,更直观地显示执行预测信息的目的,从而实现了提高信息的显示直观性的技术效果。
在一种可能的实现方式中,缩略画面上显示有虚拟角色的角色位置标识,执行预测信息包括操作轨迹标识,在运行子画面显示执行预测信息,包括:
在操作轨迹标识指示的目标候选操作的数量达到预设阈值的情况下,在运行子画面中目标角色位置标识的关联位置处突出显示目标候选操作关联的操作轨迹标识,其中,操作轨迹标识用于为模拟对象在第二时间单位待发起的操控指令提供指向参考,目标候选操作为指向对象相同的候选操作,目标角色位置标识为在第二时间单位待发起目标候选操作的虚拟角色对应的角色位置标识。
在一种可能的实现方式中,在本实施例中,为提高执行预测信息的显示效率,在执行预测信息指示的目标候选操作的数量达到预设阈值的情况下,在目标角色位置标识的关联位置处突出显示目标候选操作关联的操作轨迹标识,如执行预测信息指示超过预设阈值数量的虚拟角色都将对同一虚拟角色执行攻击操作时,将在对应的角色位置标识的关联位置处突出显示该攻击操作关联的操作轨迹标识。
进一步举例说明,在一种可能的实现方式中基于图11所示,继续例如图13所示,在执行预测信息指示的目标候选操作的数量达到预设阈值的情况下,在运行子画面1106上突出显示满足特定条件的执行预测信息(例如目标候选操作关联的操作轨迹标识),具体的在目标角色位置标识的关联位置处突出显示目标候选操作关联的操作轨迹标识1302,其中,操作轨迹标识1302用于为模拟对象在第二时间单位待发起的操控指令提供指向参考,目标候选操作为指向对象相同的候选操作,目标角色位置标识为在第二时间单位待发起目标候选操作的虚拟角色对应的角色位置标识。
通过本申请提供的实施例,在操作轨迹标识指示的目标候选操作的数量达到预设阈值的情况下,在目标角色位置标识的关联位置处突出显示目标候选操作关联的操作轨迹标识,进而达到了突出显示满足特定条件的执行预测信息的目的,从而实现了提高执行预测信息的显示效率的技术效果。
在一种可能的实现方式中,显示待执行的候选操作对应的执行预测信息,包括以下至少之一:
S1,若待执行的候选操作为待执行的指向操作,显示待执行的指向操作对应的执行预测信息,其中,待执行的指向操作对应的执行预测信息用于为模拟对象在第二时间单位待发起的操控指令提供指向参考,指向操作用于确定操控指令的指向对象;
S2,若待执行的候选操作为待执行的至少两个候选操作,显示待执行的至少两个候选操作对应的执行预测信息,其中,待执行的至少两个候选操作对应的执行预测信息用于为模拟对象在第二时间单位待发起的至少两个操控指令提供选定参考,至少两个操控指令中的操控指令与至少两个候选操作中的候选操作一一对应;
S3,若待执行的候选操作为待执行的移动操作,显示待执行的移动操作对应的执行预测信息,其中,待执行的移动操作对应的执行预测信息用于为模拟对象在第二时间单位待发起的移动指令提供方向参考;
S4,若待执行的候选操作为待执行的攻击操作,显示待执行的攻击操作对应的执行预测信息,其中,待执行的攻击操作对应的执行预测信息用于为模拟对象在第二时间单位待发起的攻击指令提供指向参考;
S5,若待执行的候选操作为待执行的配置操作,显示待执行的配置操作对应的执行预测信息,其中,待执行的配置操作对应的执行预测信息用于为模拟对象在第二时间单位待发起的配置指令提供指向参考,配置操作用于确定配置指令的指向道具。
在一种可能的实现方式中，在本实施例中，在AI对战时，可以但不限于通过游戏当前帧的对战数据，计算出即将要攻击的目标列表及对应的攻击概率，最终AI会攻击概率最高的目标；此外，为了让数据直观易懂，可以但不限于仅展示了目标列表中概率最高的前两位，其中，目标列表中记录了至少两个指向对象。
在一种可能的实现方式中，在本实施例中，在AI对战时，可以但不限于通过游戏当前帧的对战数据，计算出即将要执行的候选操作及对应的执行概率，最终AI会执行概率最高的候选操作；此外，为了让数据直观易懂，可以但不限于仅展示了候选操作中概率最高的前两位，其中，将要执行的候选操作包括至少两个候选操作。
在一种可能的实现方式中,在本实施例中,在AI对战时,可以但不限于通过游戏当前帧的对战数据,计算出即将要前进的方向及对应的概率,最终AI会向概率最高的方向前进;此外,为了让数据直观易懂,可以但不限于仅展示了方向列表中概率最高的前两个,其中,即将要前进的方向包括至少两个移动方向。
在一种可能的实现方式中,在本实施例中,在AI对战时,可以但不限于通过游戏当前帧的对战数据,计算出即将配置的虚拟道具及对应的概率,最终AI会对概率最高的虚拟道具进行配置;此外,为了让数据直观易懂,可以但不限于仅展示了虚拟道具中概率最高的前两个,其中,即将配置的虚拟道具包括至少两个指向道具。
在一种可能的实现方式中,显示待执行的候选操作对应的执行预测信息,包括:
S1,在目标虚拟游戏的画面视角为第一虚拟角色的角色视角的情况下,显示第一虚拟角色对应的执行预测信息;
S2,响应于对目标虚拟游戏的画面视角的切换指令,将目标虚拟游戏的画面视角切换为第二虚拟角色的角色视角,并显示第二虚拟角色对应的执行预测信息。
在一种可能的实现方式中,在本实施例中,为提高执行预测信息的显示准确性,观战用户可通过切换不同虚拟角色的角色视角,以调整目标虚拟游戏的画面视角,还可对应调整不同虚拟角色对应的执行预测信息的显示。
在一种可能的实现方式中,在显示目标虚拟游戏在第一时间单位对应的运行画面的过程中,方法还包括以下至少之一:
S1,显示至少一个模拟对象的对局基础信息,其中,对局基础信息为至少一个模拟对象中的各个模拟对象的基础信息;
S2,显示目标虚拟游戏在第一时间单位对应的对局即时信息,其中,对局即时信息为目标虚拟游戏在第一时间单位运行时产出的即时信息;
S3,显示目标虚拟游戏在第一时间单位对应的对局历史信息,其中,对局历史信息为目标虚拟游戏在第一时间单位运行之前产出的历史信息;
S4,显示目标虚拟游戏在第一时间单位对应的对局预测信息,其中,对局预测信息为至少一个模拟对象参与目标虚拟游戏的对局结果的预测信息。进一步举例说明,在一种可能的实现方式中以(赛事)观战界面的显示为例说明;在一种可能的实现方式中,观战界面由游戏画面和数据模块组成,游戏开始后,观战用户可以通过点击虚拟角色对应的标识以切换不同虚拟角色的视角。
在一种可能的实现方式中,在本实施例中,数据模块可参考图10所示理解为显示在观战界面上的数据,例如包括基础游戏对战数据、胜率预测、经济组成和伤害占比、目标概率分布、走位概率分布、小地图等。
在一种可能的实现方式中,在本实施例中,基础数据可以但不限于参考真人对战时的数据呈现,AI对战时也提取到相应的基础数据进行呈现,包括游戏数据,团队数据和虚拟角色基础数据等。
在一种可能的实现方式中,在本实施例中,游戏数据可以但不限于包括对战画面(对局即时信息),对战时长(对局历史信息),可以但不限于通过对战画面实时展示多个AI在目标虚拟游戏中的对战状况。
在一种可能的实现方式中，在本实施例中，团队数据可以但不限于包括AI模型所属的团队名称(对局基础信息)，KDA(对局历史信息)，击败暴君数量(对局历史信息)，经济组成(对局历史信息)，对虚拟角色伤害占比(对局历史信息)，其中，经济组成和对虚拟角色伤害占比用于帮助观战用户理解双方AI模型在对战时的运营思路，进而体现参战用户在训练AI时不同的侧重点，经济组成可以但不限用于展示阵营总经济(不包含自然增长经济)。
在此基础上,还可以如图14所示,将总经济划分为击败虚拟角色、击败NPC角色(如虚拟小兵、虚拟野怪等)和NPC建筑(如虚拟防御塔、虚拟水晶建筑等)多个来源,并在对战界面上展示多个来源各自与总经济的占比。
此外,还可以在图14的基础上,如图15所示,对虚拟角色伤害占比进行展示,如展示对敌方虚拟角色造成的总伤害,及其中每一名虚拟角色对敌方虚拟角色造成伤害的占比。
在一种可能的实现方式中,在本实施例中,数据模块还可以但不限于包括虚拟角色基础数据,如虚拟角色头像、生命值、召唤师技能及状态,进一步基于图15所示场景,继续例如图16所示,显示虚拟角色基础数据1602,包括虚拟角色头像、虚拟角色名称、虚拟角色的KDA等;此外,在点击虚拟角色头像后,游戏画面可以但不限于对应调整为该虚拟角色的视角。
在一种可能的实现方式中,在本实施例中,数据模块还可以但不限于包括胜率预测(对局预测信息),如根据AI双方的实时对战数据进行胜率预测并展示,进一步基于图14所示场景,继续例如图17所示,根据AI双方的实时对战数据进行胜率预测,并展示胜率预测的结果信息1702,如AI 1的胜率为69%,AI 2的胜率为31%。
通过本申请提供的实施例，显示至少一个模拟对象的对局基础信息，其中，对局基础信息为至少一个模拟对象中的各个模拟对象的基础信息；显示目标虚拟游戏在第一时间单位对应的对局即时信息，其中，对局即时信息为目标虚拟游戏在第一时间单位运行时产出的即时信息；显示目标虚拟游戏在第一时间单位对应的对局历史信息，其中，对局历史信息为目标虚拟游戏在第一时间单位运行之前产出的历史信息；显示目标虚拟游戏在第一时间单位对应的对局预测信息，其中，对局预测信息为至少一个模拟对象参与目标虚拟游戏的对局结果的预测信息，进而达到了全面显示对局信息的目的，从而实现了提高信息的显示全面度的技术效果。
在一种可能的实现方式中,显示目标虚拟游戏在第一时间单位对应的对局预测信息,包括:
S1,获取目标虚拟游戏在运行过程中的对局画面,其中,对局画面包括运行画面;
S2,利用对局画面获取局部对局状态信息和整体对局状态信息,其中,局部对局状态信息用于表示参与目标虚拟游戏的每个虚拟角色在目标虚拟游戏中的对局状态,整体对局状态信息用于表示目标虚拟游戏在第一时间单位的对局状态;
S3,基于局部对局状态信息,通过对局预测模型得到第一识别结果,以及基于整体对局状态信息,通过对局预测模型得到第二识别结果,其中,第一识别结果用于表示参与目标虚拟游戏的每个虚拟角色对对局结果的贡献,第二识别结果用于表示目标虚拟游戏在第一时间单位的对局状态对对局结果的贡献;
S4,对第一识别结果和第二识别结果进行拟合,得到评估函数值,其中,评估函数值用于从对象整体以及局部性能评估至少一个模拟对象在目标虚拟游戏中的对局进度;
S5,基于评估函数值获取并显示对局预测信息。
在一种可能的实现方式中,在本实施例中,可以但不限于利用一个以当前游戏状态(对局画面)为输入,评估函数值为输出的监督学习模型(对局预测模型),对目标虚拟游戏在运行过程中的对局画面进行处理,以得到用于获取对局预测信息的评估函数值;
在一种可能的实现方式中，在本实施例中，对局预测模型可以但不限于如图18所示，被分为两部分子结构：个体(Ind)部分的输入为当前状态中每个个体的状态(如个体部分1802中的个体特征1、个体特征2、个体特征3)，利用全连接层进行处理，输出为每个个体对游戏局面的贡献(如贡献集合1806中的个体贡献1、个体贡献2、个体贡献3)；全局(Glo)部分的输入为当前状态中的全局状态(如整体部分1804中的整体特征)，利用全连接层进行处理，输出为全局状态对游戏局面的贡献(如贡献集合1806中的整体贡献)；最后，通过整合两个子结构的输出来预测上述评估函数值1808。
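上述两部分子结构的前向计算可以用如下示意性numpy片段表达（网络规模、权重与特征维度均为随机假设，仅示意“个体贡献与整体贡献整合得到评估函数值”的结构，并非本申请的限定实现）：

```python
import numpy as np

rng = np.random.default_rng(0)

def fc(x, w, b):
    """全连接层示意：线性变换后接 ReLU 非线性。"""
    return np.maximum(w @ x + b, 0.0)

# 假设每个个体特征为 8 维、共 3 个个体；全局特征为 16 维
w_ind, b_ind = rng.normal(size=(1, 8)), np.zeros(1)
w_glo, b_glo = rng.normal(size=(1, 16)), np.zeros(1)

ind_feats = [rng.normal(size=8) for _ in range(3)]  # 个体特征1/2/3
glo_feat = rng.normal(size=16)                      # 整体特征

ind_contrib = [fc(x, w_ind, b_ind)[0] for x in ind_feats]  # 每个个体对局面的贡献
glo_contrib = fc(glo_feat, w_glo, b_glo)[0]                # 全局状态对局面的贡献
value = sum(ind_contrib) + glo_contrib                     # 整合两部分得到评估函数值
print(ind_contrib, glo_contrib, value)
```

实际模型中两个子结构的权重需以监督学习方式从对局记录中训练得到，此处的相加整合方式亦为假设。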
在一种可能的实现方式中,基于评估函数值获取并显示对局预测信息,包括:
S1,若目标虚拟游戏为至少两个对立阵营的模拟对象参与的虚拟游戏,利用评估函数值确定至少两个对立阵营中各个对立阵营参与目标虚拟游戏的预测剩余时长;
S2,基于预测剩余时长获取各个对立阵营对目标虚拟游戏的预测胜率,其中,预测胜率与预测剩余时长呈反比;
S3,将预测胜率作为对局预测信息进行显示。
在一种可能的实现方式中,在本实施例中,评估函数值可以但不限用于评价游戏团队间的相对优势,假设目标虚拟游戏为A队与B队之间对战的游戏,则评估函数值可以但不限用于反应了A队相对于B队的优势,或B队相对于A队的优势,具体可参考下述公式(1)以及公式(2):
DE=R·r^t    (1)
t=log_r(DE/R)    (2)
其中，DE代表折扣评估值，其绝对值与游戏的剩余时间成反比，即优势越大表示获胜的可能性越大，这将导致结束游戏的时间更短，R代表奖励(或可理解为游戏结果回报，如胜利记为1、失败记为-1)，t代表游戏剩余时长，r表示未来奖励和当前奖励之间的重要性差异。
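按照上述变量含义，折扣评估值可按“奖励乘以折扣因子的剩余时长次幂”的形式做一个示意性计算（该表达式与以下数值均为依据文中描述做出的假设，并非本申请的限定公式）：

```python
def discounted_evaluation(reward, discount, remaining_time):
    """折扣评估值示意：剩余时长 t 越短，|DE| 越大，表示优势越明显。

    reward: 游戏结果回报，如胜利记为 1、失败记为 -1。
    discount: 折扣因子 r (0 < r < 1)，刻画未来奖励相对当前奖励的重要性。
    remaining_time: 游戏剩余时长 t。
    """
    return reward * (discount ** remaining_time)

# 假设 r = 0.9：剩余 10 个时间单位与剩余 2 个时间单位的对比
print(discounted_evaluation(1, 0.9, 10))  # 约 0.349，优势尚不明显
print(discounted_evaluation(1, 0.9, 2))   # 0.81，临近胜利，|DE| 更大
```

该示意与“预测胜率与预测剩余时长呈反比”的描述一致：剩余时长越短，折扣评估值的绝对值越大。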
在一种可能的实现方式中,获取运行画面对应的对局参考数据,包括:基于运行画面,通过图像识别模型中的第一网络结构得到运行画面的图像特征,其中,对局参考数据包括图像特征,图像识别模型为利用样本数据进行训练的、用于识别图像的神经网络模型。
在一种可能的实现方式中,可以将运行画面输入至图像识别模型中的第一网络结构,第一网络结构用于提取图像特征,从而得到运行画面的图像特征。为了实现图像特征的提取,第一网络结构可以但不限于包括输入层、卷积层、池化层、全连接层等,其中,卷积层可以但不限于由若干卷积单元组成,每个卷积单元的参数都是通过反向传播算法最佳化得到的,卷积运算的目的是提取输入的不同特征,第一层卷积层可能只能提取一些低级的特征如边缘、线条和角等层级,更多层的网路能从低级特征中迭代提取更复杂的特征;池化层可以但不限于在卷积层之后,同样由多个特征面组成,它的每一个特征面对应于其上一层的一个特征面,不会改变特征面的个数;全连接层可以但不限于是每一个结点都与上一层的所有结点相连,用来把前边提取到的特征综合起来,由于其全相连的特性,一般全连接层的参数也是最多的。
在一种可能的实现方式中,基于对局参考数据,显示待执行的候选操作对应的执行预测信息,包括:基于运行画面的图像特征,通过图像识别模型中的第二网络结构得到识别结果,其中,执行预测信息包括识别结果。
在一种可能的实现方式中，可以将运行画面的图像特征输入至图像识别模型的第二网络结构，第二网络结构用于基于第一网络结构提取到的图像特征进行分类，从而得到识别结果。第二网络结构可以但不限于包括输出层，其中，输出层使用的激活函数可以但不限于包括Sigmoid函数、tanh函数等；单个神经元的基本结构可以但不限于由线性单元和非线性单元两部分组成。
在一种可能的实现方式中,基于运行画面,通过图像识别模型中的第一网络结构得到运行画面的图像特征,包括:
S1,利用第一网络结构中的卷积层,对运行画面进行图像识别,得到运行画面对应的至少两个画面特征;
S2,利用第一网络结构中的全连接层,对至少两个画面特征进行特征串联,得到图像特征。
在一种可能的实现方式中,在本实施例中,网络使用卷积对图像特征,矢量特征及游戏状态信息做特征编码,再利用全连接层(Full Connection,FC)串联所有特征编码得到状态编码。
在一种可能的实现方式中,基于运行画面的图像特征,通过图像识别模型中的第二网络结构得到识别结果,包括:
S1,将图像特征映射至第二网络结构中的注意力机制层,得到映射结果;
S2,基于映射结果,通过第二网络结构中的输出层得到识别结果。
需要说明的是,本申请实施例中的第二网络结构包括注意力机制层和输出层,可以将图像特征映射至第二网络结构中的注意力机制层,得到映射结果;将映射结果输入至第二网络结构中的输出层,从而得到识别结果。
进一步举例说明,在一种可能的实现方式中例如图19所示,基于深度强化学习框架训练,通过actor-critic网络来对目标虚拟游戏中的动作控制依赖关系进行建模;首先,网络使用卷积对图像特征,矢量特征及游戏状态信息做特征编码,再利用全连接层(FC)串联所有特征编码得到状态编码,然后,状态编码由LSTM循环单元映射到hLSTM(注意力机制层),将hLSTM输入到FC层以预测最终的动作输出,包括移动操作、攻击操作、技能释放操作、指向对象等;此外,为了辅助AI在目标虚拟游戏的对战中做出更正确的选择,网络结构设计中引入了目标注意力机制。该机制将hLSTM的FC输出作为query,将所有单元编码的堆栈作为key,计算目标注意力,即AI对当前游戏状态中各个目标的关注度。再通过可视化AI对各个目标的关注度,可以更直观的理解AI在当前状态下的决策。
可以理解的是,actor-critic是深度强化学习的一种算法,该算法定义两个网络,分别为策略网络(Actor)和评价网络(Critic),形成actor-critic网络。其中Actor主要是用来训练策略,找出最优的动作,而Critic用来给动作进行打分,从而引导出最好的动作。LSTM是长短期记忆网络(Long Short-Term Memory),hLSTM是异构长短期记忆网络(heterogeneous Long Short-Term Memory)。
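上述目标注意力机制(以hLSTM的FC输出作为query、以所有单元编码的堆栈作为key计算各目标的关注度)可以用如下示意性numpy片段表达（维度与数值均为随机假设，点积打分加softmax归一化亦为常见注意力计算方式的一种假设实现）：

```python
import numpy as np

def target_attention(query, keys):
    """目标注意力示意：对 query 与各目标 key 做点积打分，再 softmax 归一化。

    返回的权重向量即 AI 对当前游戏状态中各个目标的关注度，总和为 1。
    """
    scores = keys @ query          # 每个目标单元编码的注意力打分
    scores = scores - scores.max() # 减去最大值以保证数值稳定
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights

rng = np.random.default_rng(1)
q = rng.normal(size=4)           # 假设 hLSTM 的 FC 输出为 4 维 query
keys = rng.normal(size=(3, 4))   # 假设 3 个目标单元编码堆栈作为 key
att = target_attention(q, keys)  # AI 对 3 个目标各自的关注度
print(att, att.sum())
```

将此关注度可视化，即可如文中所述更直观地理解AI在当前状态下的决策。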
在一种可能的实现方式中,为方便理解,本实施例假设将上述信息处理方法应用在AI参与的、多人在线战术竞技游戏(Multiplayer Online Battle Arena,简称MOBA)类游戏的对战场景中,整体流程如图20所示:
S2002,AI模型在游戏环境中进行对战;
S2004,在对战的同时提取对战数据;
S2006,生成相应的对战文件;
S2008,然后由前端加载对局画面,并在对应的模块渲染对应的数据;
其中,在AI模型实时对战进行数据提取包括提取AI对战时的经济、伤害等数据,进行可视化展示,并基于相关数据进行胜率预测;提取AI对战时的决策数据(move和target),结构化后以清晰易懂的方式呈现给观战用户。
在一种可能的实现方式中,在本实施例中,胜率预测可以辅助观战用户理解游戏画面,通过胜率变化,即使不熟悉游戏的观战用户也可以大致猜测出游戏的胜利目标,同时,动态变化的胜率预期,也可以增加游戏的戏剧性张力。
在一种可能的实现方式中,在本实施例中,在有得分机制的游戏中,通常可以很容易地从得分中判断出哪个球员或团队有优势。但是像MOBA类游戏的设计是非常复杂的,整个游戏过程中有很多变量在变化。因此,在如此庞大的知识领域中很难评估实时游戏情况。传统上,相关技术通过直觉、或游戏经验等模糊的方法来评估相对优势,但无法提出统一的标准来衡量游戏团队间的相对优势;
在一种可能的实现方式中，在本实施例中，在MOBA游戏中，当前状态代表特定时间片的游戏局面，包含个体状态和全局状态。个体状态包括团队虚拟角色的等级、经济及存活状态等，全局状态包括兵线、防御塔状态等。这些用于表示当前游戏状态的信息都可以在游戏记录中找到，例如回放文件。
在一种可能的实现方式中,在本实施例中,数据的内容不仅限于基础数据、目标概率分布、走位概率分布和小地图,还可根据游戏类型增加更多维度的数据展示,且数据的内容显示方式为可视化的形式,如折线图、热度图等。呈现的终端设备不仅限于PC端,还可以是移动设备、大屏设备等。操作互动的方式不仅限于鼠标键盘,还可以是手势、声控等。
通过本申请提供的实施例，AI游戏对战是强化学习模型的比拼，与真人游戏对战不同，更侧重AI模型的训练思路和算法的优化，不掺杂情感、情绪、反应等人为因素，本申请通过对AI模型决策的过程和数据进行呈现，并结合比赛画面实时展示给观战用户，创新地提出独特的AI在MOBA类游戏中对战的实时呈现方式，让AI具有可解释性的同时，有效地提升了AI游戏对战的观赏性。
可以理解的是,在本申请的具体实施方式中,涉及到用户信息等相关的数据,当本申请以上实施例运用到具体产品或技术中时,需要获得用户单独许可或者单独同意,且相关数据的收集、使用和处理需要遵守相关国家和地区的相关法律法规和标准。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本申请并不受所描述的动作顺序的限制,因为依据本申请,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本申请所必须的。
根据本申请实施例的另一个方面,还提供了一种用于实施上述信息处理方法的信息处理装置。如图21所示,该装置包括:
第一显示单元2102,用于显示目标虚拟游戏在第一时间单位对应的运行画面,其中,目标虚拟游戏为至少一个模拟对象参与的虚拟游戏,模拟对象为人工智能驱动的、用于模拟操控虚拟角色参与目标虚拟游戏的虚拟对象;
获取单元2104,用于获取运行画面对应的对局参考数据,其中,对局参考数据为虚拟角色在第一时间单位参与目标虚拟游戏时反馈出的对局数据;
第二显示单元2106,用于基于对局参考数据,显示待执行的候选操作对应的执行预测信息,其中,待执行的候选操作是有待于被虚拟角色在第二时间单位执行的操作,执行预测信息用于为待发起的操控指令提供与对局参考数据相关的辅助参考,待发起的操控指令为有待于被模拟对象在第二时间单位发起的,且用于操控虚拟角色执行候选操作的指令,第二时间单位在第一时间单位之后。
具体实施例可以参考上述信息处理方法中所示示例,本示例中在此不再赘述。
通过本申请提供的实施例，在目标虚拟游戏的运行过程中，可以显示目标虚拟游戏在第一时间单位对应的运行画面，其中，第一时间单位可以是当前时间单位，上述目标虚拟游戏为至少一个模拟对象参与的虚拟游戏，上述模拟对象为人工智能驱动的、用于模拟操控虚拟角色参与上述目标虚拟游戏的虚拟对象。通过对上述运行画面的检测，以获取上述运行画面对应的对局参考数据，其中，上述对局参考数据为上述虚拟角色在上述第一时间单位参与上述目标虚拟游戏时反馈出的对局数据。接着，基于上述对局参考数据，显示待执行的候选操作对应的执行预测信息，其中，待执行的候选操作是有待于被上述虚拟角色在第二时间单位执行的操作，上述执行预测信息用于为待发起的操控指令提供与上述对局参考数据相关的辅助参考，上述待发起的操控指令为有待于被上述模拟对象在上述第二时间单位发起的，且用于操控上述虚拟角色执行上述候选操作的指令，上述第二时间单位在上述第一时间单位之后。也就是说，执行预测信息可以作为决定执行哪个候选操作的依据，通过对执行预测信息的显示，便于用户理解通过人工智能决策出对应候选操作的原因，帮助观众快速了解人工智能的决策思路，达到了将人工智能参与虚拟游戏的决策过程进行直观显示的目的，从而实现了提高信息的显示全面度的技术效果，进而解决了信息的显示不够全面的技术问题。相应的，提高了虚拟游戏的观看体验和人工智能的可解释性。
在一种可能的实现方式中,第二显示单元2106,包括:
第一显示模块,用于显示待执行的至少两个候选操作的第一概率分布信息,其中,第一概率分布信息用于预测虚拟角色在第二时间单位执行至少两个候选操作中各个候选操作的概率。
具体实施例可以参考上述信息处理方法中所示示例,本示例中在此不再赘述。
在一种可能的实现方式中,第二显示单元2106,包括:
第二显示模块,用于显示虚拟角色对至少两个指向对象执行候选操作的第二概率分布信息,其中,第二概率分布信息用于预测虚拟角色在第二时间单位对至少两个指向对象中各个指向对象执行候选操作的概率。
具体实施例可以参考上述信息处理方法中所示示例,本示例中在此不再赘述。
在一种可能的实现方式中,第一显示单元2102,包括:第三显示模块,用于在观战界面中的第一界面区域内显示运行画面;
第二显示单元2106,包括:第四显示模块,用于在观战界面中的第二界面区域内显示执行预测信息。
具体实施例可以参考上述信息处理方法中所示示例,本示例中在此不再赘述。
在一种可能的实现方式中,第三显示模块,包括:第一显示子模块,用于在第一界面区域中的第一子区域内显示目标虚拟游戏在第一时间单位对应的运行主画面、以及在第一界面区域中的第二子区域内显示目标虚拟游戏在第一时间单位对应的运行子画面,其中,运行主画面为目标虚拟游戏的虚拟场景内的实时画面,运行子画面为虚拟场景的缩略画面;
第四显示模块,包括:第二显示子模块,用于在第二界面区域中的第三子区域内显示执行预测信息;
第二显示子模块,还用于在运行子画面显示执行预测信息。
具体实施例可以参考上述信息处理方法中所示示例,本示例中在此不再赘述。
在一种可能的实现方式中,缩略画面上显示有虚拟角色的角色位置标识,执行预测信息包括移动方向标识,第二显示子模块,包括:
第一显示子单元,用于在运行子画面中角色位置标识的关联位置处显示移动方向标识,其中,移动方向标识用于为模拟对象在第二时间单位待发起的移动指令提供方向参考,移动指令用于指示操控虚拟角色进行移动。
具体实施例可以参考上述信息处理方法中所示示例,本示例中在此不再赘述。
在一种可能的实现方式中,缩略画面上显示有虚拟角色的角色位置标识,执行预测信息包括操作轨迹标识,装置包括:
第二显示子单元，用于在操作轨迹标识指示的目标候选操作的数量达到预设阈值的情况下，在运行子画面中目标角色位置标识的关联位置处突出显示目标候选操作关联的操作轨迹标识，其中，操作轨迹标识用于为模拟对象在第二时间单位待发起的操控指令提供指向参考，目标候选操作为指向对象相同的候选操作，目标角色位置标识为待在第二时间单位发起目标候选操作的虚拟角色对应的角色位置标识。
具体实施例可以参考上述信息处理方法中所示示例,本示例中在此不再赘述。
在一种可能的实现方式中,第二显示单元2106,包括以下至少之一:
第五显示模块,用于若待执行的候选操作为待执行的指向操作,显示待执行的指向操作对应的执行预测信息,其中,待执行的指向操作对应的执行预测信息用于为模拟对象在第二时间单位待发起的操控指令提供指向参考,指向操作用于确定操控指令的指向对象;
第六显示模块,用于若待执行的候选操作为待执行的至少两个候选操作,显示待执行的至少两个候选操作对应的执行预测信息,其中,待执行的至少两个候选操作对应的执行预测信息用于为模拟对象在第二时间单位待发起的至少两个操控指令提供选定参考,至少两个操控指令中的操控指令与待执行的至少两个候选操作中的候选操作一一对应;
第七显示模块,用于若待执行的候选操作为待执行的移动操作,显示待执行的移动操作对应的执行预测信息,其中,待执行的移动操作对应的执行预测信息用于为模拟对象在第二时间单位待发起的移动指令提供方向参考;
第八显示模块,用于若待执行的候选操作为待执行的攻击操作,显示待执行的攻击操作对应的执行预测信息,其中,待执行的攻击操作对应的执行预测信息用于为模拟对象在第二时间单位待发起的攻击指令提供指向参考;
第九显示模块,用于若待执行的候选操作为待执行的配置操作,显示待执行的配置操作对应的执行预测信息,其中,待执行的配置操作对应的执行预测信息用于为模拟对象在第二时间单位待发起的配置指令提供指向参考,配置操作用于确定配置指令的指向道具。
具体实施例可以参考上述信息处理方法中所示示例,本示例中在此不再赘述。
在一种可能的实现方式中,第二显示单元2106,包括:
第十显示模块,用于在目标虚拟游戏的画面视角为第一虚拟角色的角色视角的情况下,显示第一虚拟角色对应的执行预测信息;
第十一显示模块,用于响应于对目标虚拟游戏的画面视角的切换指令,将目标虚拟游戏的画面视角切换为第二虚拟角色的角色视角,并显示第二虚拟角色对应的执行预测信息。
具体实施例可以参考上述信息处理方法中所示示例,本示例中在此不再赘述。
在一种可能的实现方式中,装置还包括以下至少之一:
第三显示单元,用于在显示目标虚拟游戏在第一时间单位对应的运行画面的过程中,显示至少一个模拟对象的对局基础信息,其中,对局基础信息为至少一个模拟对象中的各个模拟对象的基础信息;
第四显示单元,用于在显示目标虚拟游戏在第一时间单位对应的运行画面的过程中,显示目标虚拟游戏在第一时间单位对应的对局即时信息,其中,对局即时信息为目标虚拟游戏在第一时间单位运行时产出的即时信息;
第五显示单元,用于在显示目标虚拟游戏在第一时间单位对应的运行画面的过程中,显示目标虚拟游戏在第一时间单位对应的对局历史信息,其中,对局历史信息为目标虚拟游戏在第一时间单位运行之前产出的历史信息;
第六显示单元,用于在显示目标虚拟游戏在第一时间单位对应的运行画面的过程中,显示目标虚拟游戏在第一时间单位对应的对局预测信息,其中,对局预测信息为至少一个模拟对象参与目标虚拟游戏的对局结果的预测信息。
具体实施例可以参考上述信息处理方法中所示示例,本示例中在此不再赘述。
在一种可能的实现方式中,第六显示单元,包括:
第一获取模块,用于获取目标虚拟游戏在运行过程中的对局画面,其中,对局画面包括运行画面;
第二获取模块,用于利用对局画面获取局部对局状态信息和整体对局状态信息,其中,局部对局状态信息用于表示参与目标虚拟游戏的每个虚拟角色在目标虚拟游戏中的对局状态,整体对局状态信息用于表示目标虚拟游戏在第一时间单位的对局状态;
第一输入模块,用于基于局部对局状态信息,通过对局预测模型得到第一识别结果,以及基于整体对局状态信息,通过对局预测模型得到第二识别结果,其中,第一识别结果用于表示参与目标虚拟游戏的每个虚拟角色对对局结果的贡献,第二识别结果用于表示目标虚拟游戏在第一时间单位的对局状态对对局结果的贡献;
拟合模块,用于对第一识别结果和第二识别结果进行拟合,得到评估函数值,其中,评估函数值用于从对象整体以及局部性能评估至少一个模拟对象在目标虚拟游戏中的对局进度;
第十二显示模块,用于基于评估函数值获取并显示对局预测信息。
具体实施例可以参考上述信息处理方法中所示示例,本示例中在此不再赘述。
在一种可能的实现方式中,第十二显示模块,包括:
确定子模块,用于若目标虚拟游戏为至少两个对立阵营的模拟对象参与的虚拟游戏,利用评估函数值确定至少两个对立阵营中各个对立阵营参与目标虚拟游戏的预测剩余时长;
获取子模块,用于基于预测剩余时长获取各个对立阵营对目标虚拟游戏的预测胜率,其中,预测胜率与预测剩余时长呈反比;
第三显示子模块,用于将预测胜率作为对局预测信息进行显示。
具体实施例可以参考上述信息处理方法中所示示例,本示例中在此不再赘述。
在一种可能的实现方式中，获取单元2104，包括：第二输入模块，用于基于运行画面，通过图像识别模型中的第一网络结构得到运行画面的图像特征，其中，对局参考数据包括图像特征，图像识别模型为利用样本数据进行训练的、用于识别图像的神经网络模型，第一网络结构用于提取图像特征；
第二显示单元2106,包括:第三输入模块,用于基于运行画面的图像特征,通过图像识别模型中的第二网络结构得到识别结果,其中,执行预测信息包括识别结果。
具体实施例可以参考上述信息处理方法中所示示例,本示例中在此不再赘述。
在一种可能的实现方式中,第二输入模块,包括:
识别子模块,用于利用第一网络结构中的卷积层,对运行画面进行图像识别,得到运行画面对应的至少两个画面特征;
串联子模块,用于利用第一网络结构中的全连接层,对至少两个画面特征进行特征串联,得到图像特征。
具体实施例可以参考上述信息处理方法中所示示例,本示例中在此不再赘述。
在一种可能的实现方式中,串联子模块,包括:
映射子单元,用于将图像特征映射至第二网络结构中的注意力机制层,得到映射结果;
输入子单元,用于基于映射结果,通过第二网络结构中的输出层得到识别结果。
具体实施例可以参考上述信息处理方法中所示示例,本示例中在此不再赘述。
根据本申请实施例的又一个方面,还提供了一种用于实施上述信息处理方法的电子设备,如图22所示,该电子设备包括存储器2202和处理器2204,该存储器2202中存储有计算机程序,该处理器2204被设置为通过计算机程序执行上述任一项方法实施例中的步骤。
在一种可能的实现方式中,在本实施例中,上述电子设备可以位于计算机网络的多个网络设备中的至少一个网络设备。
在一种可能的实现方式中,在本实施例中,上述处理器可以被设置为通过计算机程序执行以下步骤:
S1,显示目标虚拟游戏在第一时间单位对应的运行画面,其中,目标虚拟游戏为至少一个模拟对象参与的虚拟游戏,模拟对象为人工智能驱动的、用于模拟操控虚拟角色参与目标虚拟游戏的虚拟对象;
S2,获取运行画面对应的对局参考数据,其中,对局参考数据为虚拟角色在第一时间单位参与目标虚拟游戏时反馈出的对局数据;
S3,基于对局参考数据,显示待执行的候选操作对应的执行预测信息,其中,待执行的候选操作是有待于被虚拟角色在第二时间单位执行的操作,执行预测信息用于为待发起的操控指令提供与对局参考数据相关的辅助参考,待发起的操控指令为有待于被模拟对象在第二时间单位发起的,且用于操控虚拟角色执行候选操作的指令,第二时间单位在第一时间单位之后。
在一种可能的实现方式中，本领域普通技术人员可以理解，图22所示的结构仅为示意，电子设备也可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices，MID)、PAD等终端设备。图22并不对上述电子设备的结构造成限定。例如，电子设备还可包括比图22中所示更多或者更少的组件(如网络接口等)，或者具有与图22所示不同的配置。
其中,存储器2202可用于存储软件程序以及模块,如本申请实施例中的信息处理方法和装置对应的程序指令/模块,处理器2204通过运行存储在存储器2202内的软件程序以及模块,从而执行各种功能应用以及数据处理,即实现上述的信息处理方法。存储器2202可包括高速随机存储器,还可以包括非易失性存储器,如一个或者多个磁性存储装置、闪存、或者其他非易失性固态存储器。在一些实例中,存储器2202可进一步包括相对于处理器2204远程设置的存储器,这些远程存储器可以通过网络连接至终端。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。其中,存储器2202具体可以但不限于用于存储运行画面、对局参考数据以及执行预测信息等信息。作为一种示例,如图22所示,上述存储器2202中可以但不限于包括上述信息处理装置中的第一显示单元2102、获取单元2104及第二显示单元2106。此外,还可以包括但不限于上述信息处理装置中的其他模块单元,本示例中不再赘述。
在一种可能的实现方式中,上述的传输装置2206用于经由一个网络接收或者发送数据。上述的网络具体实例可包括有线网络及无线网络。在一个实例中,传输装置2206包括一个网络适配器(Network Interface Controller,NIC),其可通过网线与其他网络设备与路由器相连从而可与互联网或局域网进行通讯。在一个实例中,传输装置2206为射频(Radio Frequency,RF)模块,其用于通过无线方式与互联网进行通讯。
此外,上述电子设备还包括:显示器2208,用于显示上述运行画面、对局参考数据以及执行预测信息等信息;和连接总线2210,用于连接上述电子设备中的各个模块部件。
在其他实施例中,上述终端设备或者服务器可以是一个分布式系统中的一个节点,其中,该分布式系统可以为区块链系统,该区块链系统可以是由该多个节点通过网络通信的形式连接形成的分布式系统。其中,节点之间可以组成点对点(Peer To Peer,简称P2P)网络,任意形式的计算设备,比如服务器、终端等电子设备都可以通过加入该点对点网络而成为该区块链系统中的一个节点。
根据本申请的一个方面,提供了一种计算机程序产品,该计算机程序产品包括计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信部分从网络上被下载和安装,和/或从可拆卸介质被安装。在该计算机程序被中央处理器执行时,执行本申请实施例提供的各种功能。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
需要说明的是,电子设备的计算机系统仅是一个示例,不应对本申请实施例的功能和使用范围带来任何限制。
计算机系统包括中央处理器(Central Processing Unit，CPU)，其可以根据存储在只读存储器(Read-Only Memory，ROM)中的程序或者从存储部分加载到随机访问存储器(Random Access Memory，RAM)中的程序而执行各种适当的动作和处理。在随机访问存储器中，还存储有系统操作所需的各种程序和数据。中央处理器、只读存储器以及随机访问存储器通过总线彼此相连。输入/输出接口(Input/Output接口，即I/O接口)也连接至总线。
以下部件连接至输入/输出接口:包括键盘、鼠标等的输入部分;包括诸如阴极射线管(Cathode Ray Tube,CRT)、液晶显示器(Liquid Crystal Display,LCD)等以及扬声器等的输出部分;包括硬盘等的存储部分;以及包括诸如局域网卡、调制解调器等的网络接口卡的通信部分。通信部分经由诸如因特网的网络执行通信处理。驱动器也根据需要连接至输入/输出接口。可拆卸介质,诸如磁盘、光盘、磁光盘、半导体存储器等等,根据需要安装在驱动器上,以便于从其上读出的计算机程序根据需要被安装入存储部分。
特别地,根据本申请的实施例,各个方法流程图中所描述的过程可以被实现为计算机软件程序。例如,本申请的实施例包括一种计算机程序产品,其包括承载在计算机可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信部分从网络上被下载和安装,和/或从可拆卸介质被安装。在该计算机程序被中央处理器执行时,执行本申请的系统中限定的各种功能。
根据本申请的一个方面,提供了一种计算机可读存储介质,计算机设备的处理器从计算机可读存储介质读取该计算机程序,处理器执行该计算机程序,使得该计算机设备执行上述各种可选实现方式中提供的方法。
在一种可能的实现方式中,在本实施例中,本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令终端设备相关的硬件来完成,该程序可以存储于一计算机可读存储介质中,存储介质可以包括:闪存盘、只读存储器(Read-Only Memory,ROM)、随机存取器(Random Access Memory,RAM)、磁盘或光盘等。
上述实施例中的集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在上述计算机可读取的存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在存储介质中,包括若干指令用以使得一台或多台计算机设备(可为个人计算机、服务器或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。
在本申请的上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
在本申请所提供的几个实施例中,应该理解到,所揭露的客户端,可通过其它的方式实现。其中,以上所描述的装置实施例仅仅是示意性的,例如所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,单元或模块的间接耦合或通信连接,可以是电性或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
以上所述仅是本申请的优选实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本申请原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也应视为本申请的保护范围。

Claims (19)

  1. 一种信息处理方法,所述方法由电子设备执行,包括:
    显示目标虚拟游戏在第一时间单位对应的运行画面,其中,所述目标虚拟游戏为至少一个模拟对象参与的虚拟游戏,所述模拟对象为人工智能驱动的、用于模拟操控虚拟角色参与所述目标虚拟游戏的虚拟对象;
    获取所述运行画面对应的对局参考数据,其中,所述对局参考数据为所述虚拟角色在所述第一时间单位参与所述目标虚拟游戏时反馈出的对局数据;
    基于所述对局参考数据,显示待执行的候选操作对应的执行预测信息,其中,所述待执行的候选操作是有待于被所述虚拟角色在第二时间单位执行的操作,所述执行预测信息用于为待发起的操控指令提供与所述对局参考数据相关的辅助参考,所述待发起的操控指令为有待于被所述模拟对象在所述第二时间单位发起的,且用于操控所述虚拟角色执行所述候选操作的指令,所述第二时间单位在所述第一时间单位之后。
  2. 根据权利要求1所述的方法,所述执行预测信息为第一概率分布信息,所述显示待执行的候选操作对应的执行预测信息,包括:
    显示待执行的至少两个候选操作的第一概率分布信息,其中,所述第一概率分布信息用于预测所述虚拟角色在所述第二时间单位执行所述至少两个候选操作中各个候选操作的概率。
  3. 根据权利要求1所述的方法,所述执行预测信息为第二概率分布信息,所述显示待执行的候选操作对应的执行预测信息,包括:
    显示所述虚拟角色对至少两个指向对象执行所述候选操作的第二概率分布信息,其中,所述第二概率分布信息用于预测所述虚拟角色在所述第二时间单位对所述至少两个指向对象中各个指向对象执行所述候选操作的概率。
  4. 根据权利要求1-3任一项所述的方法,所述显示目标虚拟游戏在第一时间单位对应的运行画面,包括:
    在观战界面中的第一界面区域内显示所述运行画面;
    所述显示待执行的候选操作对应的执行预测信息,包括:
    在所述观战界面中的第二界面区域内显示所述执行预测信息。
  5. 根据权利要求4所述的方法,所述运行画面包括运行主画面和运行子画面,所述在观战界面中的第一界面区域内显示所述运行画面,包括:
    在所述第一界面区域中的第一子区域内显示所述目标虚拟游戏在所述第一时间单位对应的运行主画面、以及在所述第一界面区域中的第二子区域内显示所述目标虚拟游戏在所述第一时间单位对应的运行子画面,其中,所述运行主画面为所述目标虚拟游戏的虚拟场景内的实时画面,所述运行子画面为所述虚拟场景的缩略画面;
    所述方法还包括:
    在所述运行子画面显示所述执行预测信息;
    所述在所述观战界面中的第二界面区域内显示所述执行预测信息,包括:
    在所述第二界面区域中的第三子区域内显示所述执行预测信息。
  6. 根据权利要求5所述的方法,所述缩略画面上显示有所述虚拟角色的角色位置标识,所述执行预测信息包括移动方向标识,所述在所述运行子画面显示所述执行预测信息,包括:
    在所述运行子画面中所述角色位置标识的关联位置处显示所述移动方向标识,其中,所述移动方向标识用于为所述模拟对象在所述第二时间单位待发起的移动指令提供方向参考,所述移动指令用于指示操控所述虚拟角色进行移动。
  7. 根据权利要求5所述的方法,所述缩略画面上显示有所述虚拟角色的角色位置标识,所述执行预测信息包括操作轨迹标识,所述在所述运行子画面显示所述执行预测信息,包括:
    在所述操作轨迹标识指示的目标候选操作的数量达到预设阈值的情况下,在所述运行子画面中目标角色位置标识的关联位置处突出显示所述目标候选操作关联的操作轨迹标识,其中,所述操作轨迹标识用于为所述模拟对象在所述第二时间单位待发起的操控指令提供指向参考,所述目标候选操作为指向对象相同的候选操作,所述目标角色位置标识为待在所述第二时间单位发起所述目标候选操作的虚拟角色对应的角色位置标识。
  8. The method according to claim 1, wherein displaying the execution prediction information corresponding to the candidate operation to be executed comprises at least one of the following:
    if the candidate operation to be executed is a targeting operation to be executed, displaying execution prediction information corresponding to the targeting operation to be executed, wherein the execution prediction information corresponding to the targeting operation to be executed is used to provide a targeting reference for a control instruction to be initiated by the simulated object in the second time unit, and the targeting operation is used to determine a target object of the control instruction;
    if the candidate operation to be executed comprises at least two candidate operations to be executed, displaying execution prediction information corresponding to the at least two candidate operations to be executed, wherein the execution prediction information corresponding to the at least two candidate operations to be executed is used to provide a selection reference for at least two control instructions to be initiated by the simulated object in the second time unit, and the control instructions in the at least two control instructions correspond one-to-one to the candidate operations in the at least two candidate operations to be executed;
    if the candidate operation to be executed is a movement operation to be executed, displaying execution prediction information corresponding to the movement operation to be executed, wherein the execution prediction information corresponding to the movement operation to be executed is used to provide a direction reference for a movement instruction to be initiated by the simulated object in the second time unit;
    if the candidate operation to be executed is an attack operation to be executed, displaying execution prediction information corresponding to the attack operation to be executed, wherein the execution prediction information corresponding to the attack operation to be executed is used to provide a targeting reference for an attack instruction to be initiated by the simulated object in the second time unit;
    if the candidate operation to be executed is a configuration operation to be executed, displaying execution prediction information corresponding to the configuration operation to be executed, wherein the execution prediction information corresponding to the configuration operation to be executed is used to provide a targeting reference for a configuration instruction to be initiated by the simulated object in the second time unit, and the configuration operation is used to determine a target item of the configuration instruction.
  9. The method according to any one of claims 1-8, wherein displaying the execution prediction information corresponding to the candidate operation to be executed comprises:
    in a case that a viewing angle of the target virtual game is a character viewing angle of a first virtual character, displaying execution prediction information corresponding to the first virtual character;
    in response to a switching instruction for the viewing angle of the target virtual game, switching the viewing angle of the target virtual game to a character viewing angle of a second virtual character, and displaying execution prediction information corresponding to the second virtual character.
  10. The method according to any one of claims 1-9, wherein, during displaying the running screen corresponding to the target virtual game in the first time unit, the method further comprises at least one of the following:
    displaying basic match information of the at least one simulated object, wherein the basic match information is basic information of each simulated object in the at least one simulated object;
    displaying real-time match information corresponding to the target virtual game in the first time unit, wherein the real-time match information is real-time information produced while the target virtual game runs in the first time unit;
    displaying historical match information corresponding to the target virtual game in the first time unit, wherein the historical match information is historical information produced before the target virtual game runs in the first time unit;
    displaying match prediction information corresponding to the target virtual game in the first time unit, wherein the match prediction information is prediction information of a match result of the at least one simulated object participating in the target virtual game.
  11. The method according to claim 10, wherein displaying the match prediction information corresponding to the target virtual game in the first time unit comprises:
    obtaining match screens of the target virtual game during running, wherein the match screens comprise the running screen;
    obtaining local match state information and overall match state information by using the match screens, wherein the local match state information is used to represent a match state, in the target virtual game, of each virtual character participating in the target virtual game, and the overall match state information is used to represent a match state of the target virtual game in the first time unit;
    obtaining a first recognition result through a match prediction model based on the local match state information, and obtaining a second recognition result through the match prediction model based on the overall match state information, wherein the first recognition result is used to represent a contribution of each virtual character participating in the target virtual game to the match result, and the second recognition result is used to represent a contribution of the match state of the target virtual game in the first time unit to the match result;
    fitting the first recognition result and the second recognition result to obtain an evaluation function value, wherein the evaluation function value is used to evaluate, in terms of overall and local performance of the objects, the match progress of the at least one simulated object in the target virtual game;
    obtaining and displaying the match prediction information based on the evaluation function value.
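Claim 11 fits the first recognition result (per-character contributions) and the second recognition result (overall-state contribution) into a single evaluation function value, without fixing the fitting method. The weighted sum below is only an illustrative stand-in, with hypothetical weights:

```python
def evaluation_value(per_character, overall, w_local=0.5, w_global=0.5):
    """Fit per-character contributions (first recognition result) and the
    overall-state contribution (second recognition result) into one
    evaluation function value. The averaging and the weights are
    illustrative assumptions, not taken from the disclosure."""
    local = sum(per_character) / len(per_character)
    return w_local * local + w_global * overall

# Three characters' contributions plus an overall-state contribution.
v = evaluation_value([0.2, 0.4, 0.6], overall=0.5)
```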
  12. The method according to claim 11, wherein obtaining and displaying the match prediction information based on the evaluation function value comprises:
    if the target virtual game is a virtual game in which simulated objects of at least two opposing camps participate, determining, by using the evaluation function value, a predicted remaining duration for which each of the at least two opposing camps will participate in the target virtual game;
    obtaining a predicted win rate of each opposing camp for the target virtual game based on the predicted remaining duration, wherein the predicted win rate is inversely proportional to the predicted remaining duration;
    displaying the predicted win rate as the match prediction information.
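Claim 12 makes the predicted win rate inversely proportional to each camp's predicted remaining duration. One way to realise that relationship, sketched here under the assumption that the inverse durations are normalised across camps so the rates sum to one (the example durations are illustrative):

```python
def predicted_win_rates(remaining):
    """Map each opposing camp's predicted remaining duration to a win
    rate inversely proportional to that duration, normalised so the
    rates sum to one across camps."""
    inverse = {camp: 1.0 / t for camp, t in remaining.items()}
    total = sum(inverse.values())
    return {camp: v / total for camp, v in inverse.items()}

# Illustrative durations in seconds: the camp expected to finish sooner
# receives the higher predicted win rate.
rates = predicted_win_rates({"blue": 300.0, "red": 600.0})
```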
  13. The method according to any one of claims 1-12, wherein obtaining the match reference data corresponding to the running screen comprises:
    obtaining an image feature of the running screen through a first network structure in an image recognition model based on the running screen, wherein the match reference data comprises the image feature, and the image recognition model is a neural network model trained with sample data and used for recognizing images;
    and displaying, based on the match reference data, the execution prediction information corresponding to the candidate operation to be executed comprises:
    obtaining a recognition result through a second network structure in the image recognition model based on the image feature of the running screen, wherein the execution prediction information comprises the recognition result.
  14. The method according to claim 13, wherein obtaining the image feature of the running screen through the first network structure in the image recognition model based on the running screen comprises:
    performing image recognition on the running screen by using a convolution layer in the first network structure to obtain at least two picture features corresponding to the running screen;
    concatenating the at least two picture features by using a fully connected layer in the first network structure to obtain the image feature.
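Claim 14's first network structure produces at least two picture features with a convolution layer and fuses them through a fully connected layer. A self-contained numerical sketch of that flow, with arbitrary random weights and input sizes standing in for a trained model (none of the sizes come from the disclosure):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(image, kernel):
    """Minimal 'valid' 2-D convolution producing one feature map."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = rng.standard_normal((8, 8))                 # stand-in for a running screen
kernels = [rng.standard_normal((3, 3)) for _ in range(2)]

# Convolution layer: at least two picture features from the running screen.
features = [conv2d_valid(image, k) for k in kernels]

# Fully connected layer: concatenate the picture features, then project
# the concatenation into a single image feature vector.
flat = np.concatenate([f.ravel() for f in features])
W = rng.standard_normal((16, flat.size))
image_feature = W @ flat
```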
  15. The method according to claim 13 or 14, wherein obtaining the recognition result through the second network structure in the image recognition model based on the image feature of the running screen comprises:
    mapping the image feature to an attention mechanism layer in the second network structure to obtain a mapping result;
    obtaining the recognition result through an output layer in the second network structure based on the mapping result.
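Claim 15's second network structure maps the image feature through an attention mechanism layer and then an output layer. A minimal sketch using scaled dot-product attention — one common attention formulation; the disclosure does not specify which — with random weights in place of trained parameters and an arbitrary three-class output:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    """Scaled dot-product attention over a set of feature vectors."""
    scores = q @ k.T / np.sqrt(k.shape[-1])
    return softmax(scores, axis=-1) @ v

tokens = rng.standard_normal((5, 8))    # image features fed to the attention layer
Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
mapped = attention(tokens @ Wq, tokens @ Wk, tokens @ Wv)   # mapping result

# Output layer: pool the mapping result and project it to a distribution
# over possible recognition outcomes (three classes, chosen arbitrarily).
Wo = rng.standard_normal((8, 3))
recognition = softmax(mapped.mean(axis=0) @ Wo)
```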
  16. An information processing apparatus, deployed on an electronic device, comprising:
    a first display unit, configured to display a running screen corresponding to a target virtual game in a first time unit, wherein the target virtual game is a virtual game in which at least one simulated object participates, and the simulated object is an artificial-intelligence-driven virtual object used to simulate controlling a virtual character to participate in the target virtual game;
    an acquisition unit, configured to acquire match reference data corresponding to the running screen, wherein the match reference data is match data fed back when the virtual character participates in the target virtual game in the first time unit;
    a second display unit, configured to display, based on the match reference data, execution prediction information corresponding to a candidate operation to be executed, wherein the candidate operation to be executed is an operation to be executed by the virtual character in a second time unit, the execution prediction information is used to provide, for a control instruction to be initiated, an auxiliary reference related to the match reference data, the control instruction to be initiated is an instruction to be initiated by the simulated object in the second time unit and used to control the virtual character to execute the candidate operation, and the second time unit is after the first time unit.
  17. A computer-readable storage medium, comprising a stored computer program, wherein the computer program, when run by an electronic device, performs the method according to any one of claims 1 to 15.
  18. A computer program product, comprising a computer program, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 15.
  19. An electronic device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to perform, through the computer program, the method according to any one of claims 1 to 15.
PCT/CN2023/089654 2022-06-23 2023-04-21 Information processing method and apparatus, storage medium, and electronic device WO2023246270A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210719475.0A CN116999823A (zh) 2022-06-23 2022-06-23 Information display method and apparatus, storage medium, and electronic device
CN202210719475.0 2022-06-23

Publications (1)

Publication Number Publication Date
WO2023246270A1 (zh) 2023-12-28

Family

ID=88567812

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/089654 WO2023246270A1 (zh) 2022-06-23 2023-04-21 信息处理方法、装置和存储介质及电子设备

Country Status (2)

Country Link
CN (1) CN116999823A (zh)
WO (1) WO2023246270A1 (zh)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180083703A * 2017-01-13 2018-07-23 NCSOFT Corporation Decision-making method for a competitive game character based on an artificial neural network, and computer program therefor
CN111228813A * 2020-01-22 2020-06-05 Tencent Technology (Shenzhen) Co., Ltd. Virtual object control method, apparatus, device, and storage medium
CN111450531A * 2020-03-30 2020-07-28 Tencent Technology (Shenzhen) Co., Ltd. Virtual character control method and apparatus, electronic device, and storage medium
CN113633968A * 2021-08-11 2021-11-12 NetEase (Hangzhou) Network Co., Ltd. In-game information display method and apparatus, electronic device, and storage medium
CN113941149A * 2021-09-26 2022-01-18 NetEase (Hangzhou) Network Co., Ltd. Game behavior data processing method, non-volatile storage medium, and electronic apparatus
CN113952723A * 2021-10-29 2022-01-21 Beijing SenseTime Technology Development Co., Ltd. In-game interaction method and apparatus, computer device, and storage medium
CN113975824A * 2021-10-19 2022-01-28 Tencent Technology (Shenzhen) Co., Ltd. Game spectating reminder method and related device
CN113996063A * 2021-10-29 2022-02-01 Beijing SenseTime Technology Development Co., Ltd. Method and apparatus for controlling a virtual character in a game, and computer device
CN114618157A * 2022-03-08 2022-06-14 NetEase (Hangzhou) Network Co., Ltd. In-game data compensation method and apparatus, electronic device, and storage medium


Also Published As

Publication number Publication date
CN116999823A (zh) 2023-11-07

Similar Documents

Publication Publication Date Title
CN108463273B Game system that performs pathfinding for non-player characters based on the player's movement history
CN109999496B Virtual object control method and apparatus, and electronic apparatus
TWI796844B Method, apparatus, device, storage medium, and program product for displaying voting results
TWI818343B Adaptive display method and apparatus for virtual scenes, electronic device, storage medium, and computer program product
CN111773682B Shooting direction prompting method and apparatus, electronic device, and storage medium
CN111450534B Label prediction model training method, and label prediction method and apparatus
US20210402301A1 Server-Based Mechanics Help Determination from Aggregated User Data
CN113350779A Game virtual character action control method and apparatus, storage medium, and electronic device
CN110325965B Object processing method, device, and storage medium in a virtual scene
WO2023024762A1 Artificial intelligence object control method, apparatus, device, and storage medium
CN114728205A Server-based personal game time estimation for game activities
US11673051B2 Server-based generation of a help map in a video game
WO2023246270A1 Information processing method and apparatus, storage medium, and electronic device
CN116688526A Virtual character interaction method and apparatus, terminal device, and storage medium
CN113730910A In-game virtual equipment processing method and apparatus, and electronic device
WO2024060924A1 Interaction processing method and apparatus for virtual scenes, electronic device, and storage medium
US11992762B2 Server-based generation of a help map in a video game
US20230041552A1 Relevancy-based video help in a video game
CN115089968A In-game operation guidance method and apparatus, electronic device, and storage medium
CN115944916A Sound effect determination method and apparatus, electronic device, and storage medium
CN116617665A Virtual character control method and apparatus, electronic device, and storage medium
CN116983639A Virtual object control method, apparatus, device, and storage medium
CN117224947A AR interaction method and apparatus, electronic device, and computer-readable storage medium
CN116943204A Virtual object control method and apparatus, storage medium, and electronic device
CN116747519A Game skill processing method and apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23825923

Country of ref document: EP

Kind code of ref document: A1