CN113975812A - Game image processing method, device, equipment and storage medium

Game image processing method, device, equipment and storage medium

Info

Publication number
CN113975812A
CN113975812A (application CN202111224842.1A)
Authority
CN
China
Prior art keywords
virtual character
game
game scene
skill
scene image
Prior art date
Legal status
Pending
Application number
CN202111224842.1A
Other languages
Chinese (zh)
Inventor
徐侃
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202111224842.1A priority Critical patent/CN113975812A/en
Publication of CN113975812A publication Critical patent/CN113975812A/en
Pending legal-status Critical Current

Classifications

    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80Special adaptations for executing a specific game genre or game mode
    • A63F13/822Strategy games; Role-playing games
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55Controlling game characters or game objects based on the game progress
    • A63F13/57Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/67Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a game image processing method, apparatus, device and storage medium. The method includes: acquiring, from a game scene image, feature information of a second virtual character that is located within the field of view of a first virtual character and is closest to the center line of that field of view; acquiring a skill drop point position of the first virtual character in the game scene image; and training a neural network model according to the game scene image, the feature information of the second virtual character and the skill drop point position, where the feature information of the second virtual character and the skill drop point position serve as label data of the game scene image. This solves the technical problem of how to introduce artificial-intelligence players into asymmetric competitive games.

Description

Game image processing method, device, equipment and storage medium
Technical Field
The present application relates to the field of games, and in particular, to a method, an apparatus, a device, and a storage medium for processing game images.
Background
An asymmetric competitive game (ABA game) refers to a game in which the two opposing sides are unequal in the number of players, resources, information, rules and the like during play, such as a hide-and-seek style game.
Those skilled in the art of online gaming have proposed introducing Artificial Intelligence (AI) players into ABA games, so that novice players can become familiar with the game mechanics of ABA games by playing against AI players. However, the prior art does not further indicate how to introduce an AI player into an ABA game, that is, it does not indicate how to implement the game behavior of such an AI player.
Therefore, how to introduce an AI player into an ABA game has become a technical problem that urgently needs to be solved.
Disclosure of Invention
In the prior art, techniques for introducing AI players into ABA-mode online games are scarce and difficult to implement, whereas techniques for introducing AI players into symmetric competitive games are mature. In a symmetric competitive game, AI players are usually implemented by training them with an expert-rule system such as a finite state machine or a behavior tree. If an AI player were introduced into an ABA-mode online game using the techniques developed for symmetric competitive games, the limitations of the expert-rule system would prevent the trained AI player from behaving in diverse ways in the ABA game. Such an AI player is easily recognized by real players, which degrades the real players' game experience; in other words, the degree of personification of the trained AI player is low. Therefore, the prior art faces the problem of how to introduce an AI player into an ABA game while ensuring that the introduced AI player has a high degree of personification.
Embodiments of the present application provide a game image processing method, apparatus, device and storage medium, which are used to solve the problem of how to introduce an AI player into an ABA game while ensuring that the introduced AI player has a high degree of personification.
In a first aspect, an embodiment of the present application provides a game image processing method, the method including: acquiring, from a game scene image, feature information of a second virtual character that is located within the field of view of a first virtual character and is closest to the center line of the field of view; acquiring a skill drop point position of the first virtual character in the game scene image; and training a neural network model according to the game scene image, the feature information of the second virtual character and the skill drop point position, where the feature information of the second virtual character and the skill drop point position serve as label data of the game scene image.
The technical solution provided by this embodiment can have the following beneficial effects: by acquiring the feature information of the second virtual character and the skill drop point position of the first virtual character and using them as ground-truth labels to train the neural network model, a supervised learning model can be obtained, so that the AI player obtained through supervised learning has a high degree of personification, thereby achieving the goals of introducing an AI player into an ABA game and ensuring that the AI player has a high degree of personification.
In one possible implementation, acquiring the feature information of the second virtual character that is located within the field of view of the first virtual character and is closest to the center line of the field of view in the game scene image includes: determining target virtual characters in the game scene image; acquiring a plurality of target points on each target virtual character; performing ray detection between the first virtual character and the plurality of target points, and determining whether a collider exists between the first virtual character and each target point; if no collider exists between the first virtual character and at least one target point, determining that the target virtual character is within the field of view; acquiring the distance between each target virtual character within the field of view and the center line of the field of view; determining the target virtual character with the smallest distance from the center line of the field of view as the second virtual character; and acquiring the feature information of the second virtual character.
The technical solution provided by this embodiment can have the following beneficial effects: the second virtual character to be selected by the first virtual character must be within the field of view of the first virtual character, so all targets within the field of view of the first virtual character are determined first; as a result, the resulting AI player will not attack targets outside its field of view and will not exhibit inhuman behavior. Moreover, the first virtual character does not simultaneously attack several second virtual characters at widely separated positions, so a single second virtual character must be determined within the field of view. For a real player, the target to be selected is usually the one located in the middle of the player's field of view; making the first virtual character select its second virtual character in the same way improves the degree of personification of the AI player and prevents the trained AI player from exhibiting inhuman behavior.
In one possible embodiment, the method further includes: if a collider exists between the first virtual character and every one of the target points, determining that the target virtual character is not within the field of view.
The technical solution provided by this embodiment can have the following beneficial effects: a target virtual character that is not within the field of view of the first virtual character can be ignored, which prevents the AI player from attacking targets outside its field of view and exhibiting inhuman behavior, and thus improves the degree of personification of the AI player.
In one possible implementation, acquiring the skill drop point position of the first virtual character in the game scene image includes: dividing the game scene map to which the game scene image belongs into a plurality of regions of identical shape and size; determining, among the plurality of regions, the region in which the game skill of the first virtual character lands; and taking that landing region as the skill drop point position.
The technical solution provided by this embodiment can have the following beneficial effects: if the exact coordinates at which the game skill of the first virtual character lands in the game scene map were used directly as the skill drop point position, the number of possible positions would be unbounded and training the neural network model would become a regression problem, which increases training difficulty. By dividing the game scene map into a finite number of regions and using the landing region as the skill drop point position, training the neural network model becomes a classification problem, which reduces training difficulty, shortens the training time of the AI player and speeds up game updates. A minimal sketch of this discretization follows.
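As an illustration only, the following Python sketch shows one way the discretization could be done; the class name, the map dimensions and the 16x16 grid are assumptions, not values taken from the patent.

# Hypothetical sketch (not from the patent): discretize a skill landing
# coordinate into a region index so the drop point can be predicted as a class.
from dataclasses import dataclass

@dataclass
class MapGrid:
    width: float      # map width in world units (assumed)
    height: float     # map height in world units (assumed)
    cols: int         # number of equally sized columns
    rows: int         # number of equally sized rows

    def region_of(self, x: float, y: float) -> int:
        """Map a landing coordinate (x, y) to a region id in [0, cols*rows)."""
        col = min(int(x / self.width * self.cols), self.cols - 1)
        row = min(int(y / self.height * self.rows), self.rows - 1)
        return row * self.cols + col

grid = MapGrid(width=200.0, height=200.0, cols=16, rows=16)   # 256 classes
skill_label = grid.region_of(37.5, 121.0)                     # -> one class id

With a fixed grid, every recorded skill landing coordinate maps to one of a finite set of region labels, which is what turns the prediction into a classification problem.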
In one possible implementation, training the neural network model according to the game scene image, the feature information of the second virtual character and the skill drop point position includes: training the neural network model according to digital features and image features, where the digital features include the feature information of the second virtual character and the skill drop point position, and the image features include the game scene image and the game scene map to which the game scene image belongs.
The technical solution provided by this embodiment can have the following beneficial effects: the digital features include the feature information of the second virtual character, the skill drop point position and other game features, such as the feature information of the first virtual character and environment features, so that the trained AI player fits the behavior of a real player more closely, improving the degree of personification of the AI player.
In one possible embodiment, training the neural network model based on the digital features and the image features includes: flattening the image features to obtain a plurality of one-dimensional arrays; fusing the plurality of one-dimensional arrays with the digital features to obtain a target array; and inputting the target array into the neural network model for classification, obtaining the feature information of the second virtual character and the skill drop point position.
The technical solution provided by this embodiment can have the following beneficial effects: training the neural network model in this way improves the degree of personification of the trained AI player; moreover, for the same level of play, this solution reduces training time compared with reinforcement-learning approaches and therefore speeds up game updates.
In one possible implementation, before the image features are flattened, the method further includes: processing the image features with a convolutional neural network.
The technical solution provided by this embodiment can have the following beneficial effects: processing the image features with a Convolutional Neural Network (CNN) allows the model to focus on local features in the image and reduces the computation required for image processing, which shortens the training time of the AI player and further speeds up game updates. A sketch of such a model is given below.
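As an illustration only, the following PyTorch sketch shows one way such a model could be organized; the class name GameImitationNet, the layer sizes, the number of candidate targets and the number of drop regions are assumptions, not values taken from the patent.

# Hypothetical PyTorch sketch (not the patent's actual architecture): a CNN
# branch encodes the image features, the result is flattened, concatenated
# with the digital features, and fed to two classification heads -- one for
# the target (second virtual character) and one for the skill drop region.
import torch
import torch.nn as nn

class GameImitationNet(nn.Module):
    def __init__(self, num_digital: int, num_targets: int, num_regions: int):
        super().__init__()
        self.cnn = nn.Sequential(                      # image branch
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),                              # -> one-dimensional array
        )
        self.fuse = nn.Sequential(                     # fused target array
            nn.Linear(32 * 4 * 4 + num_digital, 256), nn.ReLU(),
        )
        self.target_head = nn.Linear(256, num_targets)   # which character to select
        self.region_head = nn.Linear(256, num_regions)   # skill drop point region

    def forward(self, image: torch.Tensor, digital: torch.Tensor):
        img_feat = self.cnn(image)                     # (B, 512)
        fused = self.fuse(torch.cat([img_feat, digital], dim=1))
        return self.target_head(fused), self.region_head(fused)

Flattening the CNN output and concatenating it with the digital features corresponds to the one-dimensional processing and fusion into a target array described above.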
In one possible embodiment, the digital features further include: the feature information of the first virtual character, the environment features of the game scene image, and the map features of the game scene map.
The technical solution provided by this embodiment can have the following beneficial effects: the digital features include the feature information of the second virtual character, the skill drop point position and other game features, such as the feature information of the first virtual character, the environment features of the game scene image and the map features of the game scene map, so that the trained AI player fits the behavior of a real player more closely, improving the degree of personification of the AI player.
In a second aspect, an embodiment of the present application provides a game image processing method in which a terminal device provides a graphical user interface, the graphical user interface includes a game scene image, and the game scene image includes a plurality of virtual characters. The method includes: inputting the game scene image into the trained neural network model for processing, to obtain the feature information of a second virtual character among the plurality of virtual characters and the skill drop point position of a first virtual character among the plurality of virtual characters, where the second virtual character is located within the field of view of the first virtual character and is closest to the center line of the field of view.
The technical solution provided by this embodiment can have the following beneficial effects: once the neural network model has been trained, an AI player with a high degree of personification is obtained simply by feeding game scene images into the trained model. The AI player then plays in the same way a real player selects a second virtual character and releases game skills: it selects the second virtual character intelligently and releases its skill at a reasonable drop point position. As a result, when a novice player plays against the AI player, the AI player is not easily recognized, which improves both the degree of personification of the AI player and the novice player's game experience. A minimal inference sketch follows.
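A minimal sketch of this inference step, reusing the hypothetical GameImitationNet defined in the earlier sketch, could look as follows; the checkpoint file name and tensor shapes are placeholders, not details from the patent.

# Hypothetical usage sketch: running the trained model on one game scene
# image plus its digital features to obtain the target and the drop region.
import torch

model = GameImitationNet(num_digital=32, num_targets=4, num_regions=256)
model.load_state_dict(torch.load("ai_player.pt"))    # assumed checkpoint file
model.eval()

with torch.no_grad():
    image = torch.zeros(1, 3, 128, 128)               # placeholder scene image
    digital = torch.zeros(1, 32)                      # placeholder digital features
    target_logits, region_logits = model(image, digital)
    second_character = target_logits.argmax(dim=1)    # which character to select
    skill_region = region_logits.argmax(dim=1)        # where to place the skill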
In one possible embodiment, before inputting the game scene image into the trained neural network model for processing, the method further includes: training the neural network model according to the game scene image, the feature information of the second virtual character and the skill drop point position to obtain the trained neural network model, where the feature information of the second virtual character and the skill drop point position serve as label data of the game scene image.
The technical solution provided by this embodiment can have the following beneficial effects: by acquiring the feature information of the second virtual character and the skill drop point position of the first virtual character and using them as ground-truth labels to train the neural network model, a supervised learning model can be obtained, so that the AI player obtained through supervised learning has a high degree of personification, thereby achieving the goals of introducing an AI player into an ABA game and ensuring that the AI player has a high degree of personification.
In a third aspect, an embodiment of the present application provides a game image processing apparatus, including: a first acquisition module, configured to acquire, from a game scene image, the feature information of a second virtual character that is located within the field of view of a first virtual character and is closest to the center line of the field of view; a second acquisition module, configured to acquire the skill drop point position of the first virtual character in the game scene image; and a training module, configured to train a neural network model according to the game scene image, the feature information of the second virtual character and the skill drop point position, where the feature information of the second virtual character and the skill drop point position serve as label data of the game scene image.
The technical solution provided by this embodiment can have the following beneficial effects: by acquiring the feature information of the second virtual character and the skill drop point position of the first virtual character and using them as ground-truth labels to train the neural network model, a supervised learning model can be obtained, so that the AI player obtained through supervised learning has a high degree of personification, thereby achieving the goals of introducing an AI player into an ABA game and ensuring that the AI player has a high degree of personification.
In a fourth aspect, an embodiment of the present application provides a game image processing apparatus, including a display module and a processing module. The display module is configured to display a graphical user interface, where the graphical user interface includes a game scene image and the game scene image includes a plurality of virtual characters. The processing module is configured to input the game scene image into the trained neural network model for processing, to obtain the feature information of a second virtual character among the plurality of virtual characters and the skill drop point position of a first virtual character among the plurality of virtual characters, where the second virtual character is located within the field of view of the first virtual character and is closest to the center line of the field of view.
The technical solution provided by this embodiment can have the following beneficial effects: once the neural network model has been trained, an AI player with a high degree of personification is obtained simply by feeding game scene images into the trained model. The AI player then plays in the same way a real player selects a second virtual character and releases game skills: it selects the second virtual character intelligently and releases its skill at a reasonable drop point position. As a result, when a novice player plays against the AI player, the AI player is not easily recognized, which improves both the degree of personification of the AI player and the novice player's game experience.
In a fifth aspect, an embodiment of the present application provides a training apparatus, including: a processor, a memory, a display; the memory is used for storing programs and data, and the processor calls the programs stored in the memory to execute the game image processing method of the first aspect.
The technical solution provided by this embodiment can have the following beneficial effects: the feature information of the second virtual character and the skill drop point position of the first virtual character are acquired and used as ground-truth labels to train the neural network model, yielding a supervised learning model, so that the AI player obtained through supervised learning has a high degree of personification; this achieves the goals of introducing an AI player into an ABA game and ensuring that the AI player has a high degree of personification. Furthermore, once the neural network model has been trained, an AI player with a high degree of personification is obtained simply by feeding game scene images into the trained model. The AI player then plays in the same way a real player selects a second virtual character and releases game skills: it selects the second virtual character intelligently and releases its skill at a reasonable drop point position. As a result, when a novice player plays against the AI player, the AI player is not easily recognized, which improves both the degree of personification of the AI player and the novice player's game experience.
In a sixth aspect, an embodiment of the present application provides a terminal device, including: a processor, a memory, a display; the memory is used for storing programs and data, and the processor calls the programs stored in the memory to execute the game image processing method of the second aspect.
The technical solution provided by this embodiment can have the following beneficial effects: the feature information of the second virtual character and the skill drop point position of the first virtual character are acquired and used as ground-truth labels to train the neural network model, yielding a supervised learning model, so that the AI player obtained through supervised learning has a high degree of personification; this achieves the goals of introducing an AI player into an ABA game and ensuring that the AI player has a high degree of personification. Furthermore, once the neural network model has been trained, an AI player with a high degree of personification is obtained simply by feeding game scene images into the trained model. The AI player then plays in the same way a real player selects a second virtual character and releases game skills: it selects the second virtual character intelligently and releases its skill at a reasonable drop point position. As a result, when a novice player plays against the AI player, the AI player is not easily recognized, which improves both the degree of personification of the AI player and the novice player's game experience.
In a seventh aspect, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the method for processing a game image of the first aspect or the second aspect.
The technical solution provided by this embodiment can have the following beneficial effects: the feature information of the second virtual character and the skill drop point position of the first virtual character are acquired and used as ground-truth labels to train the neural network model, yielding a supervised learning model, so that the AI player obtained through supervised learning has a high degree of personification; this achieves the goals of introducing an AI player into an ABA game and ensuring that the AI player has a high degree of personification. Furthermore, once the neural network model has been trained, an AI player with a high degree of personification is obtained simply by feeding game scene images into the trained model. The AI player then plays in the same way a real player selects a second virtual character and releases game skills: it selects the second virtual character intelligently and releases its skill at a reasonable drop point position. As a result, when a novice player plays against the AI player, the AI player is not easily recognized, which improves both the degree of personification of the AI player and the novice player's game experience.
In an eighth aspect, the present application provides a computer program product, which includes a computer program that, when executed by a processor, implements the method for processing a game image of the first aspect or the second aspect.
The technical solution provided by this embodiment can have the following beneficial effects: the feature information of the second virtual character and the skill drop point position of the first virtual character are acquired and used as ground-truth labels to train the neural network model, yielding a supervised learning model, so that the AI player obtained through supervised learning has a high degree of personification; this achieves the goals of introducing an AI player into an ABA game and ensuring that the AI player has a high degree of personification. Furthermore, once the neural network model has been trained, an AI player with a high degree of personification is obtained simply by feeding game scene images into the trained model. The AI player then plays in the same way a real player selects a second virtual character and releases game skills: it selects the second virtual character intelligently and releases its skill at a reasonable drop point position. As a result, when a novice player plays against the AI player, the AI player is not easily recognized, which improves both the degree of personification of the AI player and the novice player's game experience.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic structural diagram of a game image processing system according to an embodiment of the present disclosure;
FIG. 2 is a flowchart of a first embodiment of a method for processing a game image according to an embodiment of the present application;
FIG. 3 is a flowchart of a second embodiment of a method for processing a game image according to the present application;
FIG. 4 is a schematic diagram of a supervised learning model provided by an embodiment of the present application;
FIG. 5 is a schematic view of a second virtual character in the field of view of a first virtual character according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a second virtual character closest to the center line position of the field of view of the first virtual character according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a skill drop point location provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a trained neural network model provided in an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a first embodiment of a game image processing apparatus according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of a second embodiment of a game image processing apparatus according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of a terminal device according to an embodiment of the present application;
FIG. 12 is a schematic structural diagram of a training apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments that can be made by one skilled in the art based on the embodiments in the present application in light of the present disclosure are within the scope of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the above-described drawings (if any) are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the prior art provided in the background art, at least the following technical problems exist:
Those skilled in the art of online gaming have proposed introducing AI players into ABA games, so that novice players can become familiar with the game mechanics of ABA games by playing against AI players. However, the prior art does not further indicate how to introduce an AI player into an ABA game, whereas techniques for introducing AI players into symmetric competitive games are mature. In a symmetric competitive game, AI players are usually implemented by training them with an expert-rule system such as a finite state machine or a behavior tree. If an AI player were introduced into an ABA-mode online game using the techniques developed for symmetric competitive games, the limitations of the expert-rule system would prevent the trained AI player from behaving in diverse ways in the ABA game. Such an AI player is easily recognized by real players, which degrades the real players' game experience; in other words, the degree of personification of the trained AI player is low. Therefore, the prior art faces the problem of how to introduce an AI player into an ABA game while ensuring that the introduced AI player has a high degree of personification.
To solve the above problems, the present application provides a game image processing method. Video data of real players during games is acquired and analyzed, and within the field of view of a real player, the feature data of the target player closest to the center line of that field of view is determined; this feature data characterizes that target player and is used as one ground-truth label. The game scene map is then divided into a plurality of regions of identical size and shape, the region in which the real player's game skill lands in the game scene map is determined as the skill drop point position, and this position is used as another ground-truth label. A neural network model is then trained with a large amount of real-player video data, the feature data of the target players and the skill drop point positions, yielding a supervised learning model and hence a trained AI player with a high degree of personification. In addition, training AI players through supervised learning is efficient and meets the current demand for fast game updates. The terms used in this application are explained first below.
Graphical User Interface (GUI): a computer user interface that is presented and operated graphically.
Neural network model: a Neural Network (NN) is a complex network system formed by a large number of simple processing units (called neurons) that are widely interconnected. It reflects many basic features of human brain function and is a highly complex nonlinear dynamical learning system. A neural network has the capabilities of large-scale parallelism, distributed storage and processing, self-organization, self-adaptation and self-learning, and is particularly suited to imprecise and fuzzy information-processing problems that require many factors and conditions to be considered simultaneously. A neural network model is described on the basis of mathematical models of neurons and is represented by its network topology, node characteristics and learning rules.
Convolutional Neural Network (CNN): a class of feedforward neural networks that involve convolution computations and have a deep structure; it is one of the representative algorithms of deep learning. A convolutional neural network has representation learning capability and can perform shift-invariant classification of input information according to its hierarchical structure.
The core idea of the game image processing method of the present application is to obtain an AI player by training in a supervised-learning manner. First, a large amount of game video data of real players is acquired and analyzed, from which the characters, scene states and events occurring at each moment can be obtained; in particular, the feature data of the target player closest to the center line of a real player's field of view, within that field of view, can be obtained and used as one ground-truth label. The game scene map is then divided into a plurality of regions of identical size and shape, the region in which a real player's game skill lands is determined and taken as the skill drop point position, and this position is used as another ground-truth label. A neural network model is then trained with the video data, the target players and the skill drop point positions, yielding a supervised learning model and thus an AI player with a high degree of personification whose in-game behavior fits that of real players. Therefore, the technical solution of the present application can improve the degree of personification of the AI player, speed up the training of the AI player and speed up game updates.
In one embodiment, the game image processing method in one embodiment of the present application may be executed on a local terminal device or a server. When the game image processing method is executed on the server, the game image processing method can be implemented and executed based on a cloud interactive system, wherein the cloud interactive system comprises the server and the client device.
In an optional embodiment, various cloud applications may run on the cloud interactive system, for example cloud games. Taking a cloud game as an example, a cloud game is a game mode based on cloud computing. In the running mode of a cloud game, the entity that runs the game program is separated from the entity that presents the game picture: the storage and execution of the game image processing method are completed on the cloud game server, while the client device is used to receive and send data and to present the game picture. The client device may be a display device with data transmission capability close to the user side, such as a mobile terminal, a television, a computer or a handheld computer, whereas the information processing is performed by the cloud game server in the cloud. During play, the player operates the client device to send operation instructions to the cloud game server; the cloud game server runs the game according to the instructions, encodes and compresses data such as the game picture, and returns them to the client device over the network; finally, the client device decodes the data and outputs the game picture.
In an optional implementation, taking a game as an example, the local terminal device stores the game program and is used to present the game picture. The local terminal device interacts with the player through a graphical user interface; that is, the game program is conventionally downloaded, installed and run on an electronic device. The local terminal device may provide the graphical user interface to the player in various ways: for example, it may be rendered and displayed on the display screen of the terminal, or provided to the player through holographic projection. For example, the local terminal device may include a display screen for presenting a graphical user interface that includes a game picture, and a processor for running the game, generating the graphical user interface and controlling its display on the display screen.
In one embodiment, the game image processing method can be applied in the following application scenario. FIG. 1 is a schematic structural diagram of a game image processing system provided in an embodiment of the present application. As shown in FIG. 1, in this scenario the game image processing system may include a data acquisition device 101, a database 102, a training device 103, an execution device 104, a data storage system 105 and a user device 106, where the execution device 104 includes a calculation module 107 and an I/O interface 108, and the calculation module 107 includes a target model/rule 109.
The data acquisition device 101 may be configured to acquire feature information of a second virtual character located in a visual field range of the first virtual character in the game scene image and closest to a center line position of the visual field range, acquire a skill drop point position of the first virtual character in the game scene image, and store the game scene image, the feature information of the second virtual character, and the skill drop point position of the first virtual character in the game scene image into the database 102, where the feature information of the second virtual character and the skill drop point position of the first virtual character may be used as tag data of the game scene image.
The data collection device 101 may determine the feature information of the second virtual character and the skill drop point position of the first virtual character by acquiring game scene images of real players; a large number of real players' game scene images, the feature information of the second virtual characters and the skill drop point positions of the first virtual characters are stored in the database 102. A game scene image may be a game video and/or a game picture.
The training device 103 generates the target model/rule 109 based on the game scene images in the database 102, the feature information of the second virtual characters and the skill drop point positions of the first virtual characters. The feature information of the second virtual character may include: character class, skill cooldown state, character state (remaining health, whether the character is strapped to a rocket chair, etc.), movement speed, position, orientation, talent, healing speed, and so on. The target model/rule 109 may be a supervised learning model or the like. One possible way to organize this feature information as digital features is sketched below.
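As an illustration only, the following Python sketch shows one way the second virtual character's feature information could be organized; the field names and types are assumptions for illustration, not definitions taken from the patent.

# Hypothetical sketch of the second virtual character's feature information
# flattened into digital features that can be fused with image features.
from dataclasses import dataclass

@dataclass
class SecondCharacterFeatures:
    character_class: int        # encoded character class / occupation
    skill_on_cooldown: bool     # skill cooldown state
    health: float               # remaining health
    on_rocket_chair: bool       # whether the character is strapped to a rocket chair
    move_speed: float
    position: tuple[float, float]
    orientation: float          # facing angle in radians
    talent: int                 # encoded talent
    healing_speed: float

    def to_vector(self) -> list[float]:
        """Flatten into a numeric vector that can be concatenated with other digital features."""
        return [
            float(self.character_class), float(self.skill_on_cooldown),
            self.health, float(self.on_rocket_chair), self.move_speed,
            *self.position, self.orientation, float(self.talent),
            self.healing_speed,
        ]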
The training device 103 may execute the processing method of the game image in the embodiment of the present application, thereby training the target model/rule 109 for acquiring the AI player. The target models/rules 109 derived by the training device 103 may be applied in different systems or devices.
The execution device 104 is configured with an I/O interface 108 and can exchange data with the user device 106; a user can input a game scene image to the I/O interface 108 through the user device 106. The calculation module 107 in the execution device 104 processes the input game scene image with the target model/rule 109, thereby obtaining the feature information of the second virtual character and the skill drop point position of the first virtual character. The I/O interface 108 returns the feature information of the second virtual character and the skill drop point position of the first virtual character to the user device 106, which provides them to the user. The user may be a game developer or a game designer.
The execution device 104 may call data, code, etc. in the data storage system 105, or may store data, instructions, etc. in the data storage system 105.
In the above scenario, in one case, the user may manually input a game scenario image to the I/O interface 108 via the user device 106, for example, operating in an interface provided by the I/O interface 108; in another case, the user device 106 may automatically input the game scene image into the I/O interface 108 and obtain the feature information of the second virtual character and the skill drop point position of the first virtual character returned by the I/O interface 108. It should be noted that, if the user device 106 automatically inputs data into the I/O interface 108 and obtains a result returned by the I/O interface 108, the user device 106 needs to obtain authorization of the user, and the user may set a right to respond in the user device 106.
In the above scenario, the user device 106 may also serve as a data collection end to store a large number of collected game scene images of the real player, feature information of the second virtual character, and skill placement positions of the first virtual character in the database 102.
It should be noted that the structure of the game image processing system shown in fig. 1 is only a schematic diagram, and the positional relationship among the devices, modules, and the like shown in the figure does not constitute any limitation, for example, in fig. 1, the data storage system 105 is an external memory with respect to the execution device 104, and in other cases, the data storage system 105 may be disposed in the execution device 104; the database 102 is an external memory with respect to the training device 103, in other cases the database 102 may also be placed in the training device 103.
With reference to the above scenario, the following describes in detail a technical solution of a game image processing method provided in the present application by using several specific embodiments.
Fig. 2 is a flowchart of a first embodiment of a method for processing game images according to an embodiment of the present application, and as shown in fig. 2, the method may be performed by the training apparatus in fig. 1, and the method includes the following steps:
s201: and acquiring the characteristic information of a second virtual character which is positioned in the visual field range of the first virtual character and is closest to the central line position of the visual field range in the game scene image.
In this step, a large number of game scene images of real players may be acquired by the training apparatus in FIG. 1, so as to acquire the feature information of the second virtual character. A game scene image includes a plurality of virtual characters, which may be virtual characters controlled by real players. When training an AI player, the two most critical pieces of content are the feature information of the target object (i.e. the second virtual character) to be selected by the AI player and the drop point position of the AI player's game skill. Therefore, the feature information of the second virtual character in the game scene image needs to be acquired and used as a ground-truth label. The feature information of the second virtual character is used to characterize the second virtual character.
In the above scheme, a real player cannot, during a game, select a target object outside his or her field of view to release a game skill on; the target object to be selected must be within the real player's field of view. Accordingly, the second virtual character must be within the field of view of the first virtual character, so the trained AI player will not attack target objects outside its field of view and will not exhibit inhuman behavior. Moreover, when releasing a game skill, the virtual character controlled by a real player does not attack several widely separated target objects at once, so either a single second virtual character, or several second virtual characters within the skill's attack range of the first virtual character, must be determined. Meanwhile, the target object a real player selects is usually the one located in the middle of the player's field of view, so the second virtual character must simultaneously satisfy two conditions: being within the field of view of the first virtual character and being closest to the center line of that field of view. In this way, the trained AI player does not exhibit inhuman behavior and its game behavior fits that of a real player more closely, so the trained AI player has a high degree of personification.
In the above scheme, the game scene image may be a frame of game video or a game picture. Taking game video as an example, a real player records video during play, and the real player's game data is stored in video form. A video is essentially a sequence of frame snapshots and triggered events, so the characters, scene states and events at each moment can be obtained simply by analyzing and processing the video accordingly.
S202: and acquiring the skill drop point position of the first virtual character in the game scene image.
In this step, the other most critical piece of content for training the AI player is the drop point position of the AI player's game skill. For a real player, once the second virtual character has been selected, the first virtual character must be controlled to release a game skill to attack, so the skill drop point position of the first virtual character in the game scene image can be acquired by the training device in FIG. 1 and used as another ground-truth label.
S203: and training the neural network model according to the game scene image, the characteristic information of the second virtual character and the skill drop point position.
In this step, the feature information of the second virtual character and the skill drop point position are used as label data of the game scene image. After the training device trains the neural network model according to the game scene image, the feature information of the second virtual character and the skill drop point position, a supervised learning model can be obtained, and thus a trained AI player. FIG. 4 is a schematic diagram of the Supervised Learning Model (SL Model) provided by an embodiment of the present application. As shown in FIG. 4, training the supervised learning model requires sample pairs (x^(i), y^(i)), where x^(i) is a sample feature and y^(i) is the y_target in FIG. 4, i.e. the ground-truth label; y_pred is the result predicted by the supervised learning model. An optimizer continually reduces the difference between y_target and y_pred, so that the y_pred finally output by the supervised learning model approximates y_target. For the first virtual character in the game, the components x^(i) and y^(i) of a training sample (x^(i), y^(i)) are each features of a certain dimension. The features fall into two main parts, image features and digital features, which are described later. A minimal training-loop sketch is given below.
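The following sketch, reusing the hypothetical GameImitationNet from the earlier sketch, shows how such an optimizer-driven update could look; the loss function, learning rate and argument names are assumptions, not details from the patent.

# Hypothetical training-step sketch: an optimizer reduces the difference
# between y_pred (model outputs) and y_target (ground-truth labels).
import torch
import torch.nn as nn

def train_step(model, optimizer, image, digital, target_label, region_label):
    """One supervised update; the loss compares predictions with the ground-truth labels."""
    criterion = nn.CrossEntropyLoss()
    target_logits, region_logits = model(image, digital)      # y_pred
    loss = criterion(target_logits, target_label) \
         + criterion(region_logits, region_label)             # gap to y_target
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

model = GameImitationNet(num_digital=32, num_targets=4, num_regions=256)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)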
In the above steps, when the trained AI player is introduced into the game, the feature information of the second virtual character to be selected by the AI player and the skill drop point position can be determined from the game scene image, so that during actual play the AI player can reasonably determine the specific second virtual character and the drop point position of its game skill; the trained AI player therefore has a high degree of personification.
In the above steps, a large amount of game image data from real players may be used to train the neural network model, so that the trained AI player better matches the game behavior of real players.
In the game image processing method provided by this embodiment, the feature information of the second virtual character and the skill drop point position of the first virtual character are acquired and used as ground-truth labels to train the neural network model, yielding a supervised learning model. The AI player obtained through supervised learning therefore has a high degree of personification, which achieves the goals of introducing an AI player into an ABA game and ensuring that the AI player has a high degree of personification.
In one embodiment, acquiring the feature information of the second virtual character that is located within the field of view of the first virtual character and is closest to the center line of the field of view in the game scene image includes: determining target virtual characters in the game scene image; acquiring a plurality of target points on each target virtual character; performing ray detection between the first virtual character and the plurality of target points, and determining whether a collider exists between the first virtual character and each target point; if no collider exists between the first virtual character and at least one target point, determining that the target virtual character is within the field of view; acquiring the distance between each target virtual character within the field of view and the center line of the field of view; determining the target virtual character with the smallest distance from the center line of the field of view as the second virtual character; and acquiring the feature information of the second virtual character.
In this embodiment, the game scene image includes a plurality of target virtual characters in addition to the first virtual character. The target object to be selected by the first virtual character must be within the first virtual character's field of view, so the target virtual characters within the field of view are determined first; the one closest to the center line of the field of view is then determined from among them and taken as the second virtual character.
In the above solution, when determining whether a target virtual character is within the field of view of the first virtual character, a plurality of target points may be selected on the target virtual character; rays are then cast from the first virtual character as the starting point to each of the target points as end points, and it is checked whether each ray hits a collider. If any ray hits no collider, the target virtual character is within the field of view of the first virtual character, as shown in FIG. 5. A collider may be any occluding object in the game scene image, such as a building, a tree, a stone or a box. For example, if one ray detects no wall between the first virtual character and the target virtual character while the other rays are blocked by walls, the first virtual character can see at least the position of that one target point on the target virtual character, and the target virtual character is therefore within its field of view.
Fig. 5 is a schematic diagram of a second virtual character within the visual field range of a first virtual character according to an embodiment of the present application. In fig. 5, the game scene displayed on the current game screen is taken as the visual field range of the first virtual character. For the computer, ray detection is used to determine which virtual characters in the game scene image are within the visual field range of the first virtual character: 5 target points are set on the target virtual character, and ray detection is performed with the first virtual character as the starting point and each of the 5 target points as an end point. When a collision body exists between the starting point and an end point, False is returned; when no collision body exists, True is returned. If any ray returns True, the target virtual character is within the visual field range of the first virtual character; if all rays return False, the target virtual character is not within the visual field range of the first virtual character.
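A minimal sketch of this visibility check, assuming a Python environment and a hypothetical `raycast_blocked(start, end)` engine helper (not part of this application) that returns True when a collision body lies between two points, could look as follows:

```python
def is_in_view(first_character_pos, target_points, raycast_blocked):
    """Return True if at least one ray from the first virtual character to a
    target point on the target virtual character is not blocked by a collision body."""
    for point in target_points:
        if not raycast_blocked(first_character_pos, point):
            # At least one target point is visible, so the target character
            # is within the visual field range of the first virtual character.
            return True
    # Every ray hit a collision body: the target character is fully occluded.
    return False
```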
In the above scheme, when a real player releases a game skill, the virtual character controlled by the real player does not attack several target objects located far apart at different positions, so either a single second virtual character is determined, or a plurality of second virtual characters within the skill attack range of the first virtual character are determined. Meanwhile, the target object a real player intends to select is generally located near the middle of the player's visual field range, so the second virtual character must satisfy two conditions at the same time: being within the visual field range of the first virtual character and being closest to the center line position of the visual field range. In this way, the trained AI player does not exhibit unnatural, non-human-like behavior and better matches the game behavior of real players, giving the trained AI player a high personification degree, as shown in fig. 6.
Fig. 6 is a schematic diagram of the second virtual character closest to the center line position of the visual field range of the first virtual character according to an embodiment of the present application. In fig. 6, taking the game scene displayed on the current game screen as the visual field range of the first virtual character, the center line position of the visual field range is shown by the black solid line. Generally, the target object selected by a player lies near the middle of the player's visual field; it is rare for a target object to be directly in front of the player while the player walks away from it. When a player finds several target objects within the visual field range, the player preferentially attacks the one in the middle of the visual field, so the target object closest to the center line of the visual field range needs to be determined. As shown in fig. 6, the distance a between target virtual character A and the black solid line is greater than the distance b between target virtual character B and the black solid line; therefore, target virtual character B is the finally determined second virtual character, located within the visual field range of the first virtual character and closest to the center line position of the visual field range.
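A minimal sketch of this selection, assuming a hypothetical `centerline_distance` helper (an assumption for illustration) that returns the distance between a character's on-screen position and the vertical center line of the first virtual character's visual field range:

```python
def pick_second_character(visible_characters, centerline_distance):
    """Among target virtual characters already known to be visible (a non-empty
    list), pick the one whose distance to the center line of the visual field
    range is smallest; this character becomes the second virtual character."""
    return min(visible_characters, key=centerline_distance)
```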
In the above scheme, if a collision body exists between the first virtual character and every one of the plurality of target points, it is determined that the target virtual character is not within the visual field range. That is, if all rays hit a collision body, some blocking object lies between the target virtual character and the first virtual character, and the first virtual character cannot see any part of the target virtual character, so the target virtual character is not within the visual field range of the first virtual character. Target virtual characters outside the visual field range of the first virtual character can be ignored and need not be processed, which avoids the non-human-like behavior of an AI player attacking targets outside its visual field range and improves the personification degree of the AI player.
In one embodiment, acquiring the skill drop point position of the first virtual character in the game scene image comprises: dividing the game scene map where the game scene image is located into a plurality of areas, wherein the plurality of areas have the same shape and size; determining the drop point area of the game skill of the first virtual character among the plurality of areas; and determining the drop point area as the skill drop point position.
In this scheme, the game scene displayed in the game scene image is the game scene at a certain position on the game scene map, and the game scene map covers all areas in the game. When the first virtual character releases a game skill, the skill drop point can be at any position on the game scene map, and the number of possible positions is effectively infinite. If the exact drop point position of the game skill on the game scene map were used directly as the skill drop point position, the neural network model to be trained would become a regression model, which increases the training difficulty. If the game scene map is instead divided into a plurality of areas, the number of areas is finite, and taking the drop point area of the game skill as the skill drop point position turns the neural network model to be trained into a classification model. This reduces the training difficulty of the neural network model, shortens the training time of the AI player, and increases the updating speed of the game.
In the above solution, as shown in fig. 7, fig. 7 is a schematic diagram of a skill drop point position provided in an embodiment of the present application. In fig. 7, the game scene map is divided into 11 × 10 = 110 blocks; if the game skill of the first virtual character lands within the black area in fig. 7, the black area is the skill drop point position of the first virtual character.
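A minimal sketch of this discretization, assuming an 11 × 10 grid and map coordinates measured from the map origin (both illustrative assumptions rather than values fixed by this application):

```python
GRID_COLS, GRID_ROWS = 11, 10  # 110 regions, as in the example of fig. 7

def skill_drop_region(drop_x, drop_y, map_width, map_height):
    """Map a continuous skill drop point to one of 110 region indices,
    turning the drop-point prediction into a classification target."""
    col = min(int(drop_x / map_width * GRID_COLS), GRID_COLS - 1)
    row = min(int(drop_y / map_height * GRID_ROWS), GRID_ROWS - 1)
    return row * GRID_COLS + col
```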
In one embodiment, training the neural network model according to the game scene image, the feature information of the second virtual character and the skill drop point position comprises: training the neural network model according to digital features and image features, wherein the digital features include the feature information of the second virtual character and the skill drop point position, and the image features include the game scene image and the game scene map where the game scene image is located.
In this scheme, a deep neural network can be used as the supervised learning model, and its inputs consist of two parts: digital features and image features. The game scene image and the game scene map show the scene layout during the game and serve as the image features. The feature information of the second virtual character is used to identify the second virtual character, and the skill drop point position is used to determine where the game skill of the first virtual character lands; since the computer can only identify a specific second virtual character and a specific skill drop point position through numbers, the feature information of the second virtual character and the skill drop point position serve as the digital features.
In this scheme, using a deep neural network as the supervised learning model simplifies the training process of the neural network model and speeds up training, which in turn shortens the training time of the AI player and increases the updating speed of the game.
In one embodiment, training the neural network model based on the digital features and the image features comprises: performing one-dimensional processing on the image features to obtain a plurality of one-dimensional arrays; fusing the plurality of one-dimensional arrays with the digital features to obtain a target array; and inputting the target array into the neural network model for classification to obtain the feature information of the second virtual character and the skill drop point position.
In this scheme, the one-dimensional processing of the image features can be implemented with a flatten layer of the neural network; since the digital features are already one-dimensional, they do not need this processing. The fusion of the resulting one-dimensional arrays with the digital features can be implemented with a concat layer of the neural network, and the target array is then input into the neural network model for classification, which can be implemented with fully connected (fc) layers of the neural network model, as shown in fig. 8.
Fig. 8 is a schematic diagram of the neural network model used for training provided in an embodiment of the present application. In fig. 8, the image features include the game scene image and the game scene map; the game scene image corresponds to the player's field of view, and the game scene map corresponds to the in-game minimap. The image features are expanded through a flatten layer of the neural network to obtain a plurality of one-dimensional arrays, the digital features and the one-dimensional arrays are then fused through a concat layer, the fused features pass through a fully connected layer, and finally the feature information of the second virtual character and the skill drop point position are output through two classification fully connected layers.
In this scheme, training the neural network model in this way improves the personification degree of the trained AI player, and, for an AI player of the same level, this technical scheme reduces the training time compared with reinforcement learning, thereby increasing the updating speed of the game.
In one embodiment, before the one-dimensional processing is performed on the image features, the method further comprises: processing the image features with a convolutional neural network.
In this scheme, as shown in fig. 8, before the one-dimensional processing, the image features may be processed by a convolutional neural network (CNN). Processing the image features through the CNN focuses on local features in the images and reduces the computation of image processing, which shortens the training time of the AI player and further increases the game updating speed.
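A minimal sketch of the structure of fig. 8, written in PyTorch with illustrative layer sizes that are assumptions for this example and not values specified in this application:

```python
import torch
import torch.nn as nn

class GameImitationNet(nn.Module):
    """Sketch of the network in fig. 8: CNN branches for the two image inputs,
    flatten + concat fusion with the digital features, a shared fully connected
    layer, and two classification heads (target character, skill drop region)."""

    def __init__(self, num_digital_features, num_targets, num_regions):
        super().__init__()
        self.image_branch = nn.Sequential(          # CNN for the game scene image
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
            nn.Flatten(),                            # the "flatten layer"
        )
        self.map_branch = nn.Sequential(             # CNN for the game scene map
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
            nn.Flatten(),
        )
        fused_dim = 32 * 8 * 8 + 16 * 8 * 8 + num_digital_features
        self.shared_fc = nn.Sequential(nn.Linear(fused_dim, 256), nn.ReLU())
        self.target_head = nn.Linear(256, num_targets)   # second virtual character
        self.region_head = nn.Linear(256, num_regions)   # skill drop point region

    def forward(self, scene_image, scene_map, digital_features):
        fused = torch.cat(                           # the "concat layer"
            [self.image_branch(scene_image),
             self.map_branch(scene_map),
             digital_features], dim=1)
        shared = self.shared_fc(fused)
        return self.target_head(shared), self.region_head(shared)
```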
In one embodiment, the digital features further comprise: the feature information of the first virtual character, the environmental features of the game scene image, and the map features of the game scene map.
In this scheme, in addition to the feature information of the second virtual character and the skill drop point position, the digital features also include other in-game feature information, such as the feature information of the first virtual character, the environmental features of the game scene image, and the map features of the game scene map, so that the trained AI player fits real players more closely and the personification degree of the AI player is improved.
For example, the digital features include local information during the game: environmental features, such as the state of the cipher machines in the game (whether a cipher machine has been decoded, etc.), the state of the rocket chairs (whether a character is hung on a rocket chair, whether the rocket chair is damaged, etc.), and the condition of boards or windows; feature information of the second virtual character, such as character class, skill cooldown state, character state (remaining health, whether hung on a rocket chair, etc.), movement speed, position, orientation, talents, and healing speed; and feature information of the first virtual character, such as character class, skill cooldown state (including auxiliary-skill and normal-attack cooldown states), movement speed, position, orientation, and talents. The digital features also include out-of-game information: feature information of the second virtual character, such as character rank information, account information, historical team formation information, and historical match information; and map features, such as the type of the map, the location of the cellar in the map, the birth points of the first virtual character and the second virtual character, the locations of the cipher machines, and the locations of the rocket chairs.
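A minimal sketch of how such digital features might be assembled into a one-dimensional array; all field names below are hypothetical placeholders for the quantities listed above, not names defined by this application:

```python
import numpy as np

def build_digital_features(first_char, second_char, environment):
    """Illustrative assembly of the one-dimensional digital feature vector from
    dictionaries of hypothetical game-state fields (states, cooldowns, positions, speeds)."""
    return np.asarray([
        environment["ciphers_decoded"],            # environmental features
        environment["rocket_chair_occupied"],
        second_char["hp"],                         # second virtual character
        second_char["move_speed"],
        *second_char["position"],
        second_char["skill_cooldown"],
        first_char["move_speed"],                  # first virtual character
        *first_char["position"],
        first_char["skill_cooldown"],
    ], dtype=np.float32)
```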
In the game image processing method provided by this embodiment, video data of real players during games is acquired and analyzed to determine the feature information of the target player that is within a real player's visual field range and closest to the center line of that visual field range, and this feature information is used as one ground-truth label. The game scene map is then divided into a plurality of areas of the same size and shape, the area in which the real player's game skill lands is determined and taken as the skill drop point position, and this skill drop point position is used as the other ground-truth label. The neural network model is then trained with a large amount of real-player video data, the feature information of the target players, and the skill drop point positions, yielding a supervised learning model and hence a trained AI player with a high personification degree. In addition, training AI players through supervised learning is more efficient, which meets the current demand for faster game updates.
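A minimal sketch of one supervised training step on such labeled data, assuming the illustrative network above and standard cross-entropy losses for the two classification outputs (an assumption; the application does not specify the loss functions):

```python
import torch.nn as nn

def train_step(model, optimizer, batch):
    """One supervised-learning step on a batch of recorded real-player frames;
    the two labels are the annotated second virtual character and the skill
    drop point region, both treated as classification targets."""
    scene_image, scene_map, digital, target_label, region_label = batch
    target_logits, region_logits = model(scene_image, scene_map, digital)
    loss = (nn.functional.cross_entropy(target_logits, target_label)
            + nn.functional.cross_entropy(region_logits, region_label))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```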
Fig. 3 is a flowchart of a second embodiment of a method for processing a game image according to an embodiment of the present application. As shown in fig. 3, the method may be executed by the execution device in fig. 1 and includes the following steps:
S301: input the game scene image into the trained neural network model for processing to obtain the feature information of a second virtual character among the plurality of virtual characters and the skill drop point position of a first virtual character among the plurality of virtual characters.
In this step, a graphical user interface may be provided by the terminal device. The graphical user interface includes a game scene image, which may be an image from a game video or a game picture, and the game scene image includes a plurality of virtual characters. The first virtual character among the virtual characters is an AI player, and the second virtual character may be an AI player or a virtual character controlled by a real player. The second virtual character is located within the visual field range of the first virtual character and closest to the center line position of the visual field range. The terminal device may be the execution device in fig. 1.
In the above step, when the trained AI player is introduced into the game, the execution device may determine, according to the game scene image, the feature information of the second virtual character to be selected by the AI player and the skill drop point position. In an actual game, the AI player can therefore identify the specific second virtual character through its feature information and reasonably determine the drop point position of its game skill, so that the trained AI player has a high personification degree.
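A minimal inference sketch, assuming the illustrative network above, in which the trained model selects the most likely second virtual character and skill drop point region for one frame:

```python
import torch

def choose_action(model, scene_image, scene_map, digital_features):
    """Run the trained model (e.g. the GameImitationNet sketch above) on one
    frame and pick the most likely target character index and skill drop region."""
    model.eval()
    with torch.no_grad():
        target_logits, region_logits = model(scene_image, scene_map,
                                             digital_features)
    return target_logits.argmax(dim=1), region_logits.argmax(dim=1)
```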
In one embodiment, before the game scene image is input into the trained neural network model for processing, the method further comprises: training the neural network model according to the game scene image, the feature information of the second virtual character and the skill drop point position to obtain the trained neural network model, wherein the feature information of the second virtual character and the skill drop point position serve as the label data of the game scene image.
In this scheme, the feature information of the second virtual character and the skill drop point position of the first virtual character are acquired and used as ground-truth labels to train the neural network model, so that a supervised learning model is obtained; the AI player obtained through this supervised learning therefore has a high personification degree, which achieves the purposes of introducing AI players into the ABA game and ensuring that they remain highly human-like.
In the above scheme, the neural network model may be trained according to the method shown in fig. 2.
According to the game image processing method provided by this embodiment, the trained AI player is introduced into the game and can reasonably select the second virtual character and the drop point position of its game skill from the game scene image, so that the AI player has a high personification degree; moreover, training AI players through supervised learning is efficient, which meets the current demand for faster game updates.
In summary, according to the technical scheme provided by the present application, a deep neural network is used as the supervised learning model, and the neural network model is trained according to the game scene image and the two ground-truth labels, namely the feature information of the second virtual character and the skill drop point position. This avoids the problems that reinforcement learning requires a large amount of trial-and-error computation and generalizes poorly, and provides a technical implementation that gives the trained AI player a high personification degree while increasing the training speed of the AI player and the updating speed of the game.
Fig. 9 is a schematic structural diagram of a first embodiment of a game image processing apparatus according to an embodiment of the present application, where the game image processing apparatus 900 includes:
a first obtaining module 901, configured to obtain, in the game scene image, feature information of a second virtual character that is located within the visual field range of the first virtual character and closest to the center line position of the visual field range;
a second obtaining module 902, configured to obtain the skill drop point position of the first virtual character in the game scene image;
a training module 903, configured to train the neural network model according to the game scene image, the feature information of the second virtual character, and the skill drop point position, where the feature information of the second virtual character and the skill drop point position serve as the label data of the game scene image.
Optionally, the first obtaining module 901 is further configured to: determine a target virtual character in the game scene image; acquire a plurality of target points on the target virtual character; perform ray detection between the first virtual character and the plurality of target points, and determine whether a collision body exists between the first virtual character and each target point; if no collision body exists between the first virtual character and at least one target point, determine that the target virtual character is within the visual field range; acquire the distance between each target virtual character within the visual field range and the center line position of the visual field range; determine the target virtual character with the smallest distance to the center line position of the visual field range as the second virtual character; and acquire the feature information of the second virtual character.
Optionally, the second obtaining module 902 is further configured to: divide the game scene map where the game scene image is located into a plurality of areas, where the plurality of areas have the same shape and size; determine the drop point area of the game skill of the first virtual character among the plurality of areas; and determine the drop point area as the skill drop point position.
Optionally, the training module 903 is further configured to train the neural network model according to digital features and image features, where the digital features include the feature information of the second virtual character and the skill drop point position, and the image features include the game scene image and the game scene map where the game scene image is located.
Optionally, the training module 903 is further configured to: perform one-dimensional processing on the image features to obtain a plurality of one-dimensional arrays; fuse the plurality of one-dimensional arrays with the digital features to obtain a target array; and input the target array into the neural network model for classification to obtain the feature information of the second virtual character and the skill drop point position.
Optionally, the digital features further comprise: the feature information of the first virtual character, the environmental features of the game scene image, and the map features of the game scene map.
The game image processing apparatus provided in this embodiment is used to implement the technical solution of the game image processing method in the foregoing method embodiment, and the implementation principle and technical effect are similar, which are not described herein again.
Fig. 10 is a schematic structural diagram of a second embodiment of a game image processing apparatus according to an embodiment of the present application, where the game image processing apparatus 1000 includes: a processing module 1001 and a display module 1002, where the display module 1002 is configured to display a graphical user interface, the graphical user interface includes a game scene image, and the game scene image includes a plurality of virtual characters;
the processing module 1001 is configured to input the game scene image into the trained neural network model for processing, so as to obtain feature information of a second virtual character in the plurality of virtual characters, and a skill drop point position of a first virtual character in the plurality of virtual characters, where the second virtual character is located within a visual field range of the first virtual character and is closest to a center line position of the visual field range.
Optionally, the processing module 1001 is further configured to train the neural network model according to the game scene image, the feature information of the second virtual character and the skill drop point position before the game scene image is input into the trained neural network model for processing, so as to obtain the trained neural network model, where the feature information of the second virtual character and the skill drop point position serve as the label data of the game scene image.
The game image processing apparatus provided in this embodiment is used to implement the technical solution of the game image processing method in the foregoing method embodiment, and the implementation principle and technical effect are similar, which are not described herein again.
Fig. 11 is a schematic structural diagram of a terminal device according to an embodiment of the present application, and as shown in fig. 11, the terminal device 1100 includes:
processor 1111, memory 1112, display 1113;
the memory 1112 is used for storing programs and data, and the processor 1111 calls the programs stored in the memory to execute the technical scheme of the processing method of the game image provided by the foregoing method embodiment.
In the terminal device, the memory 1112 and the processor 1111 are electrically connected directly or indirectly to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines, such as a bus. The memory 1112 stores therein computer-executable instructions for implementing a game image processing method, including at least one software functional module, which may be stored in the memory in the form of software or firmware, and the processor 1111 executes various functional applications and data processing by running the software programs and modules stored in the memory 1112.
Fig. 12 is a schematic structural diagram of a training apparatus according to an embodiment of the present application, and as shown in fig. 12, the training apparatus 1200 includes:
a processor 1211, a memory 1212, a display 1213;
the memory 1212 is used to store programs and data, and the processor 1211 calls the programs stored in the memory to execute the technical solution of the processing method of the game image provided by the foregoing method embodiment.
In the above-described training device, the memory 1212 and the processor 1211 are electrically connected, directly or indirectly, to enable transmission or interaction of data. For example, the components may be electrically connected to each other via one or more communication buses or signal lines, such as a bus. The memory 1212 stores therein computer-executable instructions for implementing a game image processing method, including at least one software function module that may be stored in the memory in the form of software or firmware, and the processor 1211 executes various function applications and data processing by running the software programs and modules stored in the memory 1212.
The memory may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory is used for storing programs, and the processor executes the programs after receiving the execution instructions. Further, the software programs and modules within the aforementioned memories may also include an operating system, which may include various software components and/or drivers for managing system tasks (e.g., memory management, storage device control, power management, etc.), and may communicate with various hardware or software components to provide an operating environment for other software components.
The processor may be an integrated circuit chip having signal processing capabilities. The Processor may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The embodiment of the application further provides a computer-readable storage medium, which includes a program, and the program is used for realizing the technical scheme of the game image processing method provided in the method embodiment when being executed by a processor.
The present application further provides a computer program product comprising: and the computer program is used for realizing the technical scheme of the game image processing method provided by the embodiment of the method when being executed by the processor.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (14)

1. A method of processing game images, the method comprising:
acquiring feature information of a second virtual character which is positioned in a visual field range of the first virtual character and is closest to a center line position of the visual field range in the game scene image;
acquiring a skill drop point position of the first virtual character in the game scene image;
training a neural network model according to the game scene image, the feature information of the second virtual character and the skill drop point position, wherein the feature information of the second virtual character and the skill drop point position are used as label data of the game scene image.
2. The method of claim 1, wherein the acquiring of the feature information of the second virtual character located in the visual field of the first virtual character and closest to the centerline position of the visual field in the game scene image comprises:
determining a target virtual character in the game scene image;
acquiring a plurality of target points on the target virtual role;
performing ray detection on the first virtual character and the plurality of target points, and determining whether a collision body exists between the first virtual character and the target points;
if no collision body exists between the first virtual character and at least one target point, determining that the target virtual character is in the visual field range;
acquiring the distance between each target virtual character in the visual field range and the middle line position of the visual field range;
determining the target virtual character having the smallest distance from the center line position of the visual field as the second virtual character;
and acquiring the characteristic information of the second virtual role.
3. The method of claim 1, wherein the obtaining of the skill drop point position of the first virtual character in the game scene image comprises:
dividing a game scene map where the game scene image is located into a plurality of areas, wherein the shapes and the sizes of the areas are the same;
determining a landing area of the game skill of the first virtual character in the plurality of areas;
determining the drop point area as the skill drop point location.
4. The method of claim 1, wherein training a neural network model based on the game scene image, the feature information of the second virtual character, and the skill drop point location comprises:
training a neural network model according to digital features and image features, wherein the digital features comprise feature information of the second virtual character and the skill drop point position, and the image features comprise the game scene image and a game scene map where the game scene image is located.
5. The method of claim 4, wherein training the neural network model based on the digital features and the image features comprises:
carrying out one-dimensional processing on the image characteristics to obtain a plurality of one-dimensional arrays;
fusing the plurality of one-dimensional arrays and the digital features to obtain a target array;
and inputting the target array into a neural network model for classification processing to obtain the feature information of the second virtual character and the skill drop point position.
6. The method of claim 4 or 5, wherein the digital signature further comprises: feature information of the first virtual character, an environmental feature of the game scene image, and a map feature of the game scene map.
7. A game image processing method is characterized in that a terminal device provides a graphical user interface, the graphical user interface comprises a game scene image, the game scene image comprises a plurality of virtual characters, and the method comprises the following steps:
and inputting the game scene image into a trained neural network model for processing to obtain the characteristic information of a second virtual character in the virtual characters and the skill drop point position of a first virtual character in the virtual characters, wherein the second virtual character is positioned in the visual field range of the first virtual character and is closest to the central line position of the visual field range.
8. The method of claim 7, wherein before inputting the game scene image into the trained neural network model for processing, the method further comprises:
and training a neural network model according to the game scene image, the feature information of the second virtual character and the skill drop point position to obtain a trained neural network model, wherein the feature information of the second virtual character and the skill drop point position are used as label data of the game scene image.
9. A game image processing apparatus, comprising:
the first acquisition module is used for acquiring the characteristic information of a second virtual character which is positioned in the visual field range of the first virtual character and is closest to the center line position of the visual field range in the game scene image;
the second acquisition module is used for acquiring the skill drop point position of the first virtual character in the game scene image;
and the training module is used for training a neural network model according to the game scene image, the characteristic information of the second virtual character and the skill drop point position, wherein the characteristic information of the second virtual character and the skill drop point position are used as label data of the game scene image.
10. A game image processing apparatus, comprising: the game system comprises a processing module and a display module, wherein the display module is used for displaying a graphical user interface, the graphical user interface comprises a game scene image, and the game scene image comprises a plurality of virtual roles;
the processing module is used for inputting the game scene image into a trained neural network model for processing to obtain feature information of a second virtual character in the virtual characters and a skill drop point position of a first virtual character in the virtual characters, wherein the second virtual character is located in a visual field range of the first virtual character and is closest to a center line position of the visual field range.
11. A training apparatus, comprising:
a processor, a memory, a display;
the memory is used for storing programs and data, and the processor calls the programs stored in the memory to execute the game image processing method of any one of claims 1 to 6.
12. A terminal device, comprising:
a processor, a memory, a display;
the memory is used for storing programs and data, and the processor calls the programs stored in the memory to execute the game image processing method of claim 7 or 8.
13. A computer-readable storage medium on which a computer program is stored, the computer program, when being executed by a processor, implementing a game image processing method according to any one of claims 1 to 8.
14. A computer program product comprising a computer program for implementing a method of processing a game image according to any one of claims 1 to 8 when executed by a processor.
CN202111224842.1A 2021-10-21 2021-10-21 Game image processing method, device, equipment and storage medium Pending CN113975812A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111224842.1A CN113975812A (en) 2021-10-21 2021-10-21 Game image processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111224842.1A CN113975812A (en) 2021-10-21 2021-10-21 Game image processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113975812A true CN113975812A (en) 2022-01-28

Family

ID=79739811

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111224842.1A Pending CN113975812A (en) 2021-10-21 2021-10-21 Game image processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113975812A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115531877A (en) * 2022-11-21 2022-12-30 北京蔚领时代科技有限公司 Method and system for measuring distance in virtual engine
CN115531877B (en) * 2022-11-21 2023-03-07 北京蔚领时代科技有限公司 Method and system for measuring distance in virtual engine
CN116808590A (en) * 2023-08-25 2023-09-29 腾讯科技(深圳)有限公司 Data processing method and related device
CN116808590B (en) * 2023-08-25 2023-11-10 腾讯科技(深圳)有限公司 Data processing method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination