CN116785687A - Method and device for processing view angle in game, electronic equipment and storage medium - Google Patents

Method and device for processing view angle in game, electronic equipment and storage medium

Info

Publication number
CN116785687A
Authority
CN
China
Prior art keywords
data
character
visual angle
target
angle control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210249042.3A
Other languages
Chinese (zh)
Inventor
周逸恒
刘勇成
胡志鹏
袁思思
程龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202210249042.3A priority Critical patent/CN116785687A/en
Publication of CN116785687A publication Critical patent/CN116785687A/en
Pending legal-status Critical Current

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40: Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42: Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/426: Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving on-screen location information, e.g. screen coordinates of an area at which the player is aiming with a light gun
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/525: Changing parameters of virtual cameras
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a method and apparatus for processing view angles in a game, an electronic device, and a storage medium. The method includes: acquiring self-state data and surrounding environment data of a target character; feeding the self-state data and the surrounding environment data as input into a pre-trained view angle control model, and obtaining the view angle control data output by the model; and adjusting the camera view angle corresponding to the target character according to the view angle control data. The scheme improves the realism of robot players and raises the level of artificial intelligence.

Description

Method and device for processing view angle in game, electronic equipment and storage medium
Technical Field
The present application relates to the field of game technology, and in particular to a method and apparatus for processing view angles in a game, an electronic device, and a computer-readable storage medium.
Background
NPC is an abbreviation of non-player character, a type of game character that is not controlled by a real player; such a character may also be called a robot player. In an electronic game, an NPC is usually controlled by the computer's artificial intelligence, which directly reads the states of other nearby characters and decides whether to attack or move. Unlike a human player, it has no need to look around, so a robot player has no camera view angle of its own.
An important experience in current games is that, after being knocked down and eliminated by an enemy player, a player can enter that enemy's view angle. This helps the player analyze the opponent's tactics and improve, enriches the game experience, and gives players who are in no hurry to start the next match something to do. Spectating and commentary also energize community discussion and, to some extent, let players help police cheating.
However, because a robot player has no camera view angle, an eliminated player character can only jump to the view angle of another human player. Robot players are therefore easy to identify, which lowers their realism and the perceived level of artificial intelligence.
Disclosure of Invention
The embodiments of the present application provide a method for processing a view angle in a game, which improves the realism of robot players and raises the level of artificial intelligence.
The embodiments of the present application provide a method for processing a view angle in a game, comprising the following steps:
acquiring self-state data and surrounding environment data of a target character;
feeding the self-state data and the surrounding environment data as input into a pre-trained view angle control model, and obtaining the view angle control data output by the view angle control model;
and adjusting the camera view angle corresponding to the target character according to the view angle control data.
In an embodiment, the step of acquiring self-state data and surrounding environment data of the target character includes:
acquiring the self-state data and surrounding environment data of the target character in response to a hostile character being defeated by the target character, in response to a teammate character of the target character being defeated, or in response to receiving a spectating request for the target character.
In an embodiment, the method further includes:
when a hostile character is defeated by the target character, pushing the virtual scene picture captured under the camera view angle corresponding to the target character to the game client corresponding to the hostile character for display;
or,
when a teammate character of the target character is defeated, pushing the virtual scene picture captured under the camera view angle corresponding to the target character to the game client corresponding to the teammate character for display;
or,
when a spectating request for the target character is received, pushing the virtual scene picture captured under the camera view angle corresponding to the target character to the game client corresponding to the spectating request for display.
In one embodiment, the target character is a robot character.
In an embodiment, the self-state data includes the character's own position, health status, and owned props;
the surrounding environment data includes the positions and/or states of objects of interest within a preset range of the target character.
In an embodiment, the view angle control data includes a keyboard input instruction, a mouse movement trajectory, a motion-sensing device key instruction, a motion-sensing device movement trajectory, or a screen touch trajectory.
In an embodiment, the view angle control data includes a mouse movement trajectory, where the mouse movement trajectory is a straight line segment from a start point to an end point; adjusting the camera view angle corresponding to the target character according to the view angle control data includes:
according to the jitter shapes corresponding to mouse trajectories of different distances, superimposing the jitter shape of mouse trajectories of the same distance onto the mouse movement trajectory to obtain a target trajectory;
and adjusting the camera view angle corresponding to the target character according to the target trajectory.
In an embodiment, before superimposing the jitter shape of mouse trajectories of the same distance onto the mouse movement trajectory according to the jitter shapes corresponding to mouse trajectories of different distances to obtain the target trajectory, the method further includes:
dividing the mouse trajectories of different players into a plurality of batches according to movement distance;
for each batch, dividing the mouse trajectories of the same batch into a plurality of categories;
and fitting the mouse trajectories of the same category within the same batch to obtain the jitter shapes corresponding to mouse trajectories of different movement distances.
In an embodiment, before the self-state data and the surrounding environment data are fed as input into the pre-trained view angle control model, the method further includes:
performing machine learning on the view angle control data of multiple player characters under different self-state data and surrounding environment data, and training the view angle control model.
In an embodiment, before training the view angle control model according to the view angle control data of multiple player characters under different self-state data and surrounding environment data, the method further includes:
randomly selecting a preset proportion of player characters from different rank tiers and different types of games.
The embodiments of the present application further provide an apparatus for processing a view angle in a game, comprising:
a data acquisition module configured to acquire self-state data and surrounding environment data of a target character;
a data output module configured to feed the self-state data and the surrounding environment data as input into a pre-trained view angle control model and obtain the view angle control data output by the view angle control model;
and a camera adjustment module configured to adjust the camera view angle corresponding to the target character according to the view angle control data.
The embodiments of the present application further provide an electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform any of the above methods for processing a view angle in a game.
The embodiments of the present application further provide a computer-readable storage medium storing a computer program, which can be executed by a processor to perform any of the above methods for processing a view angle in a game.
According to the technical solution provided by the embodiments of the present application, a view angle control model is built to predict the view angle control data of a target character under different self-state data and surrounding environment data, and the camera view angle corresponding to the target character is controlled based on the model's predictions. In this way, a robot character can be given a view angle movement pattern similar to a player character's. When a player spectates from the robot character's game view angle, the robot character is not easy to detect, which improves the realism of robot players and provides a better game experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below.
Fig. 1 is a schematic diagram of an application scenario of the method for processing a view angle in a game according to an embodiment of the present application;
Fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 3 is a flowchart of a method for processing a view angle in a game according to an embodiment of the present application;
Fig. 4 is a flowchart of acquiring the jitter shape corresponding to a mouse trajectory according to an embodiment of the present application;
Fig. 5 is a detailed flowchart of step S330 in the embodiment corresponding to fig. 3;
Fig. 6 is a block diagram of an apparatus for processing a view angle in a game according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings.
In the following figures, like reference numerals and letters denote like items; once an item is defined in one figure, it need not be further defined or explained in subsequent figures. In the description of the present application, the terms "first", "second", and the like are used only to distinguish descriptions and are not to be construed as indicating or implying relative importance.
Fig. 1 is a schematic diagram of the application scenario of the method for processing a view angle in a game according to an embodiment of the present application. The application scenario includes a client 101 and a server 102, which communicate over a wireless network. The method provided in the following embodiments of the present application may be executed by the server 102, by the client 101, or by the client 101 and the server 102 together.
Fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application, and the electronic device 200 may be used to execute the method for processing a view angle in a game according to the embodiment of the present application. The electronic device 200 includes: at least one processor 203, at least one memory 202, and a bus 201, the bus 201 being used to enable connected communication of these components.
In an embodiment, the electronic device 200 may be a user terminal running a game, such as a personal computer, a tablet computer, or a smartphone, for executing the method for processing a view angle in a game.
In an embodiment, the memory 202 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, including but not limited to random access memory (RAM), read-only memory (ROM), static random access memory (SRAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), and electrically erasable programmable read-only memory (EEPROM).
In an embodiment, the processor 203 may be a general-purpose processor, including but not limited to a central processing unit (CPU) or a network processor (NP); it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor 203 may be any conventional processor. The processor 203 is the control center of the electronic device 200 and connects the various parts of the entire device through various interfaces and lines. The processor 203 may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application.
In an embodiment, fig. 2 illustrates the processor 203 and the memory 202, connected through the bus 201. The memory 202 stores instructions executable by the processor 203, so that the electronic device 200 can execute all or part of the methods in the following embodiments to implement the method for processing a view angle in a game.
Many combat games contain not only real human player characters but also robot characters, whose behavior is programmed. After a real player character is eliminated in combat, the player can choose to watch from the view angle of the player who defeated them, which helps in analyzing tactics and improving one's play. A robot character, however, has no game view angle; if a player is defeated by a robot character, the player cannot see the robot's view angle and can easily tell that they were defeated by a robot.
Fig. 3 is a flowchart of a method for processing a view angle in a game according to an embodiment of the present application. The method may be performed by the electronic device 200 shown in fig. 2 and includes S310 to S330.
S310: acquiring self-state data and surrounding environment data of a target character.
The target character may be a robot character or a player character. When the target character is a robot character, a view angle movement pattern similar to a player character's can be constructed for it, improving the robot character's realism. When the target character is a player character, the camera view angle can adapt automatically to changes in the character's state data and surrounding environment data, reducing manual operation and raising the level of intelligence.
The self-state data may include the character's own position, health status, and owned props. The position information includes the movement state as well as the current coordinates, facing direction, and movement speed in the game scene. The movement state includes stationary, moving, in combat, and the like. The health status includes the character's health points, armor value, equipment durability, and the like. The props include items that restore health, restore armor, repair equipment durability, and so on.
The surrounding environment data includes the positions and/or states of objects of interest within a preset range of the target character. Objects of interest include other game characters, pickable items, interactable objects, buildings, and the like. The positions and/or states of other game characters may include their locations, observable attributes (e.g., class, weapon, health), states (e.g., stationary, moving, in combat), and whether they are interacting with the target character. The data may also include the elapsed game time, the current safe-zone size, zone-shrinking status, and the like.
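For illustration, such state can be flattened into a fixed-length feature vector before being fed to a model. The following is a minimal sketch of one possible encoding; all field names, the padding scheme, and the vector layout are assumptions made for illustration, not the encoding used by the application.

# Hypothetical encoding of self-state and surrounding environment data into
# one feature vector; every field name here is an illustrative assumption.
import numpy as np

MOVE_STATES = {"stationary": 0, "moving": 1, "combat": 2}

def encode_state(self_state: dict, environment: dict) -> np.ndarray:
    features = [
        *self_state["position"],                # (x, y, z) scene coordinates
        *self_state["facing"],                  # facing direction vector
        self_state["speed"],
        MOVE_STATES[self_state["move_state"]],  # categorical state as a scalar
        self_state["health"],
        self_state["armor"],
        len(self_state["props"]),               # number of owned props
        environment["elapsed_time"],
        environment["safe_zone_radius"],
    ]
    # Positions of the nearest objects of interest, padded to a fixed count.
    for obj in (environment["objects"] + [None] * 3)[:3]:
        features += list(obj["position"]) if obj else [0.0, 0.0, 0.0]
    return np.asarray(features, dtype=np.float32)  # 22-dimensional vector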
In an embodiment, the self-state data and surrounding environment data of the target character are acquired in response to a hostile character being defeated by the target character, in response to a teammate character of the target character being defeated, or in response to receiving a spectating request for the target character.
To conserve computing resources, the camera view angle of the target character need not be simulated in real time: only when any one of the above three conditions is met are the target character's self-state data and surrounding environment data acquired and the corresponding camera view angle simulated.
In an embodiment, to save computing resources, the target character's self-state data and surrounding environment data may be acquired only when a hostile character is knocked down by the target character, so that the player controlling the hostile character can then see the game scene from the target character's view angle.
In an embodiment, to save computing resources, the data may be acquired only when a teammate character of the target character is defeated, so that the player controlling the teammate character can then see the game scene from the target character's view angle.
When a third party wants to spectate a match, a spectating request for the target character can be sent to the server. To save computing resources, the server may acquire the target character's self-state data and surrounding environment data only upon receiving such a request, so that the third party can then see the game scene from the target character's view angle.
S320: feeding the self-state data and the surrounding environment data as input into the pre-trained view angle control model, and obtaining the view angle control data output by the view angle control model.
In an embodiment, before the self-state data and surrounding environment data are fed into the pre-trained view angle control model, the method further includes: performing machine learning on the view angle control data of multiple player characters under different self-state data and surrounding environment data, and training the view angle control model.
Before machine learning, a preset proportion of player characters is randomly selected from different rank tiers and different types of games.
Player characters in a game can be divided into different rank tiers according to their match data, and players gain experience points by winning matches to raise their tier. Game operation data, mainly view angle control data, is collected from many real player characters across different tiers and different types of games.
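A minimal sketch of this stratified sampling follows; the record fields, the 10% proportion, and the fixed seed are assumptions made for illustration.

import random
from collections import defaultdict

# Hypothetical stratified sampling: keep the same preset proportion of players
# from every (rank tier, game type) group so that all strata are represented.
def sample_players(players, proportion=0.10, seed=42):
    rng = random.Random(seed)
    groups = defaultdict(list)
    for p in players:
        groups[(p["tier"], p["game_type"])].append(p)
    sampled = []
    for group in groups.values():
        k = max(1, round(len(group) * proportion))
        sampled.extend(rng.sample(group, k))
    return sampled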
The view angle control data includes keyboard input instructions, mouse movement trajectories, motion-sensing device key instructions, motion-sensing device movement trajectories, or screen touch trajectories. Different games move the view angle in different ways. In PC games, the view angle can be controlled with the arrow keys, the "WASD" keys, or keyboard shortcuts the player sets; alternatively, the game view angle follows the mouse as it moves across the virtual game interface. In mobile games, the view angle typically moves with the finger's touch trajectory on the screen.
To reduce the complexity of model training and the amount of computation, the collected view angle control data mainly records the straight line formed by the start point and end point of the view movement trajectory. Machine learning is then performed on the collected self-state data and surrounding environment data of multiple player characters together with the corresponding view angle control data, training the view angle control model.
Specifically, the self-state data and surrounding environment data of player characters can be used as the input of a neural network model, and the model's parameters adjusted so that the error between its output and the known view angle control data of the player characters is as small as possible, yielding the view angle control model trained from the neural network model.
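As a concrete illustration, the following is a minimal sketch of such a model and training loop; the network architecture, layer sizes, the 22-dimensional feature input (matching the hypothetical encode_state() sketch above), and the choice of predicting a (dx, dy) mouse displacement are assumptions made for illustration, not the patent's actual design.

import torch
import torch.nn as nn

# Hypothetical view angle control model: a small multilayer perceptron that
# maps an encoded state vector to a straight-line mouse displacement (dx, dy).
class ViewAngleModel(nn.Module):
    def __init__(self, feature_dim: int = 22):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 64),
            nn.ReLU(),
            nn.Linear(64, 2),  # predicted mouse displacement (dx, dy)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def train(model, features, targets, epochs: int = 100):
    # Minimize the error between predicted and recorded mouse displacements.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(features), targets)
        loss.backward()
        optimizer.step()
    return model

# Hypothetical usage: feature vectors from recorded player states, targets
# from the straight start-to-end mouse displacements of their view movements.
model = train(ViewAngleModel(), torch.randn(1024, 22), torch.randn(1024, 2))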
In application, the acquired self-state data and surrounding environment data of the target character can be fed into the trained view angle control model to obtain the corresponding output view angle control data. The obtained view angle control data may include a mouse movement trajectory, which may be a straight line segment from a start point to an end point.
S330: adjusting the camera view angle corresponding to the target character according to the view angle control data.
Adjusting the camera view angle of the target character according to the view angle control data can follow the existing way a real player character's camera view angle is adjusted. For example, if the view angle control data is a mouse movement trajectory, the position and orientation of the virtual camera corresponding to the target character can be adjusted along the direction of that trajectory, thereby changing the target character's camera view angle.
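For illustration, the sketch below shows one common way such a mapping works, assuming the usual first-person convention that horizontal mouse movement turns the camera (yaw) and vertical movement tilts it (pitch); the sensitivity value and pitch limits are illustrative assumptions.

import math

def apply_mouse_delta(yaw_deg, pitch_deg, dx, dy, sensitivity=0.1):
    # One mouse movement step: dx turns the camera, dy tilts it.
    yaw_deg = (yaw_deg + dx * sensitivity) % 360.0
    # Clamp pitch so the camera cannot flip over the vertical axis.
    pitch_deg = max(-89.0, min(89.0, pitch_deg - dy * sensitivity))
    return yaw_deg, pitch_deg

def forward_vector(yaw_deg, pitch_deg):
    # Derive the camera's forward direction from its yaw and pitch.
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (math.cos(pitch) * math.cos(yaw),
            math.sin(pitch),
            math.cos(pitch) * math.sin(yaw))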
In an embodiment, to simplify model computation, the view angle control data may be a straight line segment from start point to end point. The jitter shape obtained for mouse trajectories (described below) can then be superimposed on the view angle control data of the target character obtained in S320, producing a more realistic movement trajectory. The camera view angle of the target character is then adjusted along this corrected trajectory, so that the view angle moves more like a human player's and the robot character appears more realistic.
According to the technical solution provided by the embodiments of the present application, a view angle control model is built to predict the view angle control data of a target character under different self-state data and surrounding environment data, and the camera view angle corresponding to the target character is controlled based on the model's predictions. In this way, a robot character can be given a view angle movement pattern similar to a player character's. When a player spectates from the robot character's game view angle, the robot character is not easy to detect, which improves the realism of robot players and provides a better game experience.
In an embodiment, when a hostile character is defeated by the target character, the virtual scene picture captured under the camera view angle corresponding to the target character may be pushed to the game client corresponding to the hostile character for display.
A hostile character is a character that does not belong to the same faction as the target character. When the target character is a robot character, the hostile characters are all the player characters it can defeat. When a player character is defeated by the robot character, that player can watch the game scene from the robot character's view angle.
In an embodiment, when a teammate character of the target character is defeated, the virtual scene picture captured under the camera view angle corresponding to the target character may be pushed to the game client corresponding to the teammate character for display.
A teammate character is a character belonging to the same faction as the target character. The target character may be a robot character and the teammate character a player character of the same faction. When the teammate character is defeated, the player controlling it can watch the game scene from the robot character's camera view angle.
In an embodiment, when a spectating request for the target character is received, the virtual scene picture captured under the camera view angle corresponding to the target character is pushed to the game client corresponding to the spectating request for display.
When the server receives a third party's spectating request for the target character, the third party needs to watch the game scene from the target character's view angle. The virtual scene picture captured under the target character's camera view angle can then be pushed to the third party's game client for display, enabling the third party to spectate. Here, a third party refers to a user other than the players controlling the characters in the match.
Referring to fig. 4, before S330 in fig. 3, the jitter shapes corresponding to mouse trajectories need to be obtained; this specifically includes S410-S430.
S410: dividing the mouse trajectories of different players into a plurality of batches according to movement distance.
Different players' mouse trajectories differ, so the straight-line distances between their start and end points vary widely. Trajectories whose straight-line distances are the same are grouped into one batch, yielding multiple batches of trajectories at different distances. The longer the distance, the stronger the jitter in the mouse trajectory.
S420: for each batch, dividing the mouse trajectories of the same batch into a plurality of categories.
Within each batch, a clustering algorithm can divide the mouse movement trajectories into several categories. If all trajectories in a batch were simply averaged together, offsets in opposite directions would cancel each other out and introduce error; clustering the trajectories first avoids this.
S430: fitting the mouse trajectories of the same category within the same batch to obtain the jitter shapes corresponding to mouse trajectories of different movement distances.
A Fourier series can be used to fit the mouse trajectories of the same category at the same distance, yielding simulated jitter shapes corresponding to mouse trajectories of various distances.
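A minimal sketch of S410-S430 follows, assuming each recorded trajectory is a sequence of 2-D points resampled to a fixed length and that its jitter is expressed as the perpendicular offset from the start-to-end chord; the batch boundaries, cluster count, and number of Fourier terms are illustrative assumptions.

import numpy as np
from sklearn.cluster import KMeans

def jitter_profile(track: np.ndarray, samples: int = 64) -> np.ndarray:
    # Perpendicular offset of each resampled point from the start-end chord.
    start, end = track[0], track[-1]
    chord = end - start
    normal = np.array([-chord[1], chord[0]]) / (np.linalg.norm(chord) + 1e-9)
    idx = np.linspace(0, len(track) - 1, samples).astype(int)
    return (track[idx] - start) @ normal

def fit_jitter_shapes(tracks, n_batches=5, n_clusters=4, n_terms=8):
    # S410: batch the trajectories by their start-to-end distance.
    dists = np.array([np.linalg.norm(t[-1] - t[0]) for t in tracks])
    edges = np.quantile(dists, np.linspace(0, 1, n_batches + 1))
    shapes = {}
    for b in range(n_batches):
        batch = [t for t, d in zip(tracks, dists)
                 if edges[b] <= d <= edges[b + 1]]
        if len(batch) < n_clusters:
            continue
        profiles = np.stack([jitter_profile(t) for t in batch])
        # S420: cluster within the batch so opposite offsets do not cancel.
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(profiles)
        for c in range(n_clusters):
            mean = profiles[labels == c].mean(axis=0)
            # S430: truncated Fourier-series fit of the mean jitter profile.
            shapes[(b, c)] = np.fft.rfft(mean)[:n_terms]
    return shapes, edges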
On the basis of the above embodiment, referring to fig. 5, S330 of fig. 3 specifically includes S510-S520.
S510: according to the jitter shapes corresponding to mouse trajectories of different distances, superimposing the jitter shape of mouse trajectories of the same distance onto the mouse movement trajectory to obtain the target trajectory.
The process of fig. 4 yields the jitter shapes corresponding to mouse trajectories of different movement distances. According to the distance of the mouse movement trajectory indicated by the view angle control data, the jitter shape for trajectories of that distance can be selected and superimposed on the straight trajectory, producing a trajectory closer to real human operation, namely the target trajectory.
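The sketch below illustrates this superposition, assuming the model output is a straight segment given by its start and end points and that the jitter shape is a set of Fourier coefficients such as the hypothetical fit_jitter_shapes() above produces; the sample count and endpoint pinning are illustrative assumptions.

import numpy as np

def superimpose_jitter(start, end, coeffs, samples: int = 64) -> np.ndarray:
    # Bend the straight segment start->end by the fitted jitter profile.
    start, end = np.asarray(start, float), np.asarray(end, float)
    t = np.linspace(0.0, 1.0, samples)[:, None]
    line = start + t * (end - start)                 # straight baseline
    chord = end - start
    normal = np.array([-chord[1], chord[0]]) / (np.linalg.norm(chord) + 1e-9)
    jitter = np.fft.irfft(coeffs, n=samples)         # reconstruct the profile
    jitter[0] = jitter[-1] = 0.0                     # pin both endpoints
    return line + jitter[:, None] * normal           # the target trajectory

# Hypothetical usage with a shape fitted by fit_jitter_shapes():
# track = superimpose_jitter((0.0, 0.0), (300.0, 40.0), shapes[(batch, cluster)])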
S520: adjusting the camera view angle corresponding to the target character according to the target trajectory.
The above process simulates the camera view angle of the robot character: the view angle moves along the simulated target trajectory, and because of the fitting and superposition steps, that trajectory closely resembles real human operation. When a hostile character defeated by the robot character continues to watch the game from the robot character's camera view angle, the robot is therefore less likely to be identified.
The following is an apparatus embodiment of the present application, which can be used to execute the above embodiments of the method for processing a view angle in a game. For details not disclosed in the apparatus embodiment, please refer to the embodiments of the method for processing a view angle in a game.
Fig. 6 is a schematic structural diagram of an apparatus for processing a view angle in a game according to an embodiment of the present application. The apparatus 600 may include:
the data acquisition module 610, configured to acquire self-state data and surrounding environment data of a target character;
the data output module 620, configured to feed the self-state data and the surrounding environment data as input into a pre-trained view angle control model and obtain the view angle control data output by the view angle control model;
the camera adjustment module 630, configured to adjust the camera view angle corresponding to the target character according to the view angle control data.
The implementation of the functions and roles of each module in the above apparatus is detailed in the implementation of the corresponding steps of the method for processing a view angle in a game, and is not repeated here.
The embodiments of the present application also provide a storage medium, including a program that, when executed by the electronic device 200, causes the electronic device 200 to perform all or part of the method flows in the above embodiments. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD), etc. The storage medium may also include a combination of the above types of memory.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored on a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
In the several embodiments provided in the present application, the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatuses, methods, and computer program products according to various embodiments of the present application. In this regard, each block in the flowcharts or block diagrams may represent a module, segment, or portion of code that comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in a block may occur out of the order noted in the figures; for example, two blocks shown in succession may in fact be executed substantially concurrently, or sometimes in the reverse order, depending on the functionality involved. Each block of the block diagrams and/or flowcharts, and combinations of blocks therein, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The above description covers only the preferred embodiments of the present application and is not intended to limit it; those skilled in the art may make various modifications and variations. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall be included in its scope of protection.

Claims (13)

1. A method for processing a view angle in a game, comprising:
acquiring self-state data and surrounding environment data of a target character;
feeding the self-state data and the surrounding environment data as input into a pre-trained view angle control model, and obtaining the view angle control data output by the view angle control model;
and adjusting the camera view angle corresponding to the target character according to the view angle control data.
2. The method of claim 1, wherein the step of acquiring the self-state data and surrounding environment data of the target character comprises:
acquiring the self-state data and surrounding environment data of the target character in response to a hostile character being defeated by the target character, in response to a teammate character of the target character being defeated, or in response to receiving a spectating request for the target character.
3. The method of claim 1, further comprising:
when a hostile character is defeated by the target character, pushing the virtual scene picture captured under the camera view angle corresponding to the target character to the game client corresponding to the hostile character for display;
or,
when a teammate character of the target character is defeated, pushing the virtual scene picture captured under the camera view angle corresponding to the target character to the game client corresponding to the teammate character for display;
or,
when a spectating request for the target character is received, pushing the virtual scene picture captured under the camera view angle corresponding to the target character to the game client corresponding to the spectating request for display.
4. The method of claim 1, wherein the target character is a robot character.
5. The method of claim 1, wherein the self-state data comprises the character's own position, health status, and owned props;
the surrounding environment data comprises the positions and/or states of objects of interest within a preset range of the target character.
6. The method of claim 1, wherein the view angle control data comprises a keyboard input instruction, a mouse movement trajectory, a motion-sensing device key instruction, a motion-sensing device movement trajectory, or a screen touch trajectory.
7. The method of claim 1, wherein the view angle control data comprises a mouse movement trajectory, the mouse movement trajectory being a straight line segment from a start point to an end point; and adjusting the camera view angle corresponding to the target character according to the view angle control data comprises:
according to the jitter shapes corresponding to mouse trajectories of different distances, superimposing the jitter shape of mouse trajectories of the same distance onto the mouse movement trajectory to obtain a target trajectory;
and adjusting the camera view angle corresponding to the target character according to the target trajectory.
8. The method of claim 7, wherein before superimposing the jitter shape of mouse trajectories of the same distance onto the mouse movement trajectory according to the jitter shapes corresponding to mouse trajectories of different distances to obtain the target trajectory, the method further comprises:
dividing the mouse trajectories of different players into a plurality of batches according to movement distance;
for each batch, dividing the mouse trajectories of the same batch into a plurality of categories;
and fitting the mouse trajectories of the same category within the same batch to obtain the jitter shapes corresponding to mouse trajectories of different movement distances.
9. The method of claim 1, wherein before the self-state data and the surrounding environment data are fed as input into the pre-trained view angle control model, the method further comprises:
performing machine learning on the view angle control data of multiple player characters under different self-state data and surrounding environment data, and training the view angle control model.
10. The method of claim 9, wherein before training the view angle control model according to the view angle control data of multiple player characters under different self-state data and surrounding environment data, the method further comprises:
randomly selecting a preset proportion of player characters from different rank tiers and different types of games.
11. An apparatus for processing a view angle in a game, comprising:
a data acquisition module configured to acquire self-state data and surrounding environment data of a target character;
a data output module configured to feed the self-state data and the surrounding environment data as input into a pre-trained view angle control model and obtain the view angle control data output by the view angle control model;
and a camera adjustment module configured to adjust the camera view angle corresponding to the target character according to the view angle control data.
12. An electronic device, the electronic device comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method for processing a view angle in a game according to any one of claims 1-10.
13. A computer-readable storage medium, wherein the storage medium stores a computer program executable by a processor to perform the method for processing a view angle in a game according to any one of claims 1-10.
CN202210249042.3A 2022-03-14 2022-03-14 Method and device for processing view angle in game, electronic equipment and storage medium Pending CN116785687A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210249042.3A CN116785687A (en) 2022-03-14 2022-03-14 Method and device for processing view angle in game, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210249042.3A CN116785687A (en) 2022-03-14 2022-03-14 Method and device for processing view angle in game, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116785687A true CN116785687A (en) 2023-09-22

Family

ID=88038993

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210249042.3A Pending CN116785687A (en) 2022-03-14 2022-03-14 Method and device for processing view angle in game, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116785687A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination