CN118059465A - Virtual character control method and device, electronic equipment and storage medium


Info

Publication number
CN118059465A
CN118059465A (application CN202410337352.XA)
Authority
CN
China
Prior art keywords
skill
virtual
animation
controlled
virtual character
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410337352.XA
Other languages
Chinese (zh)
Inventor
朱棣文
张峻霆
赵天旻
黄腾
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202410337352.XA priority Critical patent/CN118059465A/en
Publication of CN118059465A publication Critical patent/CN118059465A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a virtual character control method and device, an electronic device, and a storage medium, where a virtual interaction scene for enabling virtual characters of different camps to interact is provided on a graphical user interface of a terminal device. The method comprises the following steps: in response to a first operation for a controlled virtual character, controlling the controlled virtual character to enter a heuristic behavior state, wherein the controlled virtual character is a first virtual character that is currently controlled by a first game account and performs ball-holding interaction on a virtual sphere; while the controlled virtual character is in the heuristic behavior state, in response to a second operation for the controlled virtual character, controlling the controlled virtual character to perform a heuristic action; and in response to the controlled virtual character exiting the heuristic behavior state, controlling the controlled virtual character to perform a target skill in the virtual interaction scene based on action parameters of the heuristic actions performed by the controlled virtual character. The application can simplify user operations and improve human-machine interaction efficiency.

Description

Virtual character control method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer application technologies, and in particular, to a virtual character control method, device, electronic apparatus, and storage medium.
Background
With the continuous development of game technology, a large number of games with different themes have emerged to meet players' needs, and sports games are one of them. In the related art, a plurality of controls are overlaid on the scene picture of a sports game running on a terminal device, including a movement control and at least one action control: the movement control can be used to control a virtual character to move in the virtual interaction scene, and each action control is used to control the virtual character to perform a corresponding competitive action.
Taking a virtual ball game as an example of a sports game, during play multiple virtual characters in the same camp need to cooperate with each other to pass the virtual ball, score, and so on, and also need to defend against and intercept the multiple virtual characters of the hostile camp.
In general, in a sports game, a game account needs to control a plurality of virtual characters one by one to perform competitive actions in the virtual interaction scene. In the related art, the virtual characters are controlled as follows: different action controls are selected on the scene picture to make the virtual character perform different competitive actions. However, this operation is complicated, places high demands on the player's input timing, and a slight error easily causes the skill release to fail.
Disclosure of Invention
In view of the above, embodiments of the present application at least provide a method, an apparatus, an electronic device, and a storage medium for controlling a virtual character, so as to overcome at least one of the above drawbacks.
In a first aspect, an exemplary embodiment of the present application provides a method for controlling a virtual character, where a virtual interaction scene for enabling virtual characters of different camps to interact is provided on a graphical user interface of a terminal device, and the virtual interaction scene has a virtual sphere contended for by the virtual characters of different camps. The method includes: in response to a first operation for a controlled virtual character, controlling the controlled virtual character to enter a heuristic behavior state, wherein the controlled virtual character is a first virtual character that is currently controlled by a first game account and performs ball-holding interaction on the virtual sphere; while the controlled virtual character is in the heuristic behavior state, controlling the controlled virtual character to perform a heuristic action in response to a second operation for the controlled virtual character; and in response to the controlled virtual character exiting the heuristic behavior state, controlling the controlled virtual character to perform a target skill in the virtual interaction scene based on action parameters of the heuristic actions performed by the controlled virtual character.
In a second aspect, an embodiment of the present application further provides a control device for a virtual character, where a virtual interaction scene for enabling virtual characters of different camps to interact is provided on a graphical user interface of a terminal device, and the virtual interaction scene has a virtual sphere contended for by the virtual characters of different camps. The device includes: a state control module, configured to control a controlled virtual character to enter a heuristic behavior state in response to a first operation for the controlled virtual character, wherein the controlled virtual character is a first virtual character that is currently controlled by a first game account and performs ball-holding interaction on the virtual sphere; a first skill control module, configured to control the controlled virtual character to perform a heuristic action in response to a second operation for the controlled virtual character while the controlled virtual character is in the heuristic behavior state; and a second skill control module, configured to control the controlled virtual character to perform a target skill in the virtual interaction scene based on action parameters of the heuristic actions performed by the controlled virtual character, in response to the controlled virtual character exiting the heuristic behavior state.
In a third aspect, an embodiment of the present application further provides an electronic device including a processor, a storage medium, and a bus, where the storage medium stores machine-readable instructions executable by the processor. When the electronic device is running, the processor communicates with the storage medium through the bus, and the processor executes the machine-readable instructions to perform the steps of the virtual character control method described above.
In a fourth aspect, an embodiment of the present application further provides a computer readable storage medium, where a computer program is stored, where the computer program when executed by a processor performs the steps of the method for controlling a virtual character described above.
The virtual character control method and device, electronic device, and storage medium provided by the embodiments of the application can simplify user operations and improve the efficiency of human-machine interaction.
In order to make the above objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 illustrates a flowchart of a control method of a virtual character provided by an exemplary embodiment of the present application;
FIG. 2 illustrates a flowchart of steps provided by an exemplary embodiment of the present application to demonstrate a first skill animation;
FIG. 3 is a flowchart illustrating steps provided by an exemplary embodiment of the present application for determining interaction location parameters corresponding to a controlled virtual character;
fig. 4 shows one of the flowcharts of the interaction between the terminal device and the server provided in the exemplary embodiment of the present application;
FIG. 5 is a flowchart illustrating steps provided by an exemplary embodiment of the present application for controlling a skill object to perform a target skill in a virtual interactive scenario;
FIG. 6 shows a second flowchart of interaction between a terminal device and a server provided by an exemplary embodiment of the present application;
FIG. 7 shows a third flowchart of interaction between a terminal device and a server provided by an exemplary embodiment of the present application;
Fig. 8 is a schematic diagram showing the structure of a control device of a virtual character provided in an exemplary embodiment of the present application;
fig. 9 shows a schematic structural diagram of an electronic device according to an exemplary embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described with reference to the accompanying drawings in the embodiments of the present application, and it should be understood that the drawings in the present application are for the purpose of illustration and description only and are not intended to limit the scope of the present application. In addition, it should be understood that the schematic drawings are not drawn to scale. A flowchart, as used in this disclosure, illustrates operations implemented according to some embodiments of the present application. It should be appreciated that the operations of the flow diagrams may be implemented out of order and that steps without logical context may be performed in reverse order or concurrently. Moreover, one or more other operations may be added to or removed from the flow diagrams by those skilled in the art under the direction of the present disclosure.
The terms "a," "an," "the," and "said" are used in this specification to denote the presence of one or more elements/components/etc.; the terms "comprising" and "having" are intended to be inclusive and mean that there may be additional elements/components/etc. in addition to the listed elements/components/etc.; the terms "first" and "second" and the like are used merely as labels, and are not intended to limit the number of their objects.
It should be understood that in embodiments of the present application, "at least one" means one or more and "a plurality" means two or more. "And/or" merely describes an association relationship between associated objects, meaning that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "Comprising A, B, and/or C" means comprising any one, any two, or all three of A, B, and C.
It should be understood that in embodiments of the present application, "B corresponding to a", "a corresponding to B", or "B corresponding to a" means that B is associated with a from which B may be determined. Determining B from a does not mean determining B from a alone, but may also determine B from a and/or other information.
In addition, the described embodiments are only some, but not all, embodiments of the application. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by a person skilled in the art based on embodiments of the application without making any inventive effort, fall within the scope of the application.
With the continuous development of game technology, a large number of games with different themes have emerged to meet players' needs, and sports games are one of them. Taking a virtual ball game as an example, during play multiple virtual characters in the same camp need to cooperate with each other to pass the virtual ball, score, and so on, and also need to defend against and intercept the multiple virtual characters of the hostile camp.
In the related art, a plurality of controls are overlaid on the scene picture of a sports game running on a terminal device, including a movement control and at least one action control: the movement control can be used to control a virtual character to move in the virtual interaction scene, and each action control is used to control the virtual character to perform a corresponding competitive action.
In a sports game, one game account needs to control the actions that a plurality of virtual characters perform in the virtual interaction scene. In the related art, the virtual characters are controlled as follows: different action controls are selected on the scene picture to make the virtual character perform different competitive actions. This operation process is complex, the player cannot easily control the input timing of different athletic actions, and a slight error easily causes the skill release to fail.
Taking a basketball game as an example of a virtual sphere game, after an attacking virtual character receives the basketball, the virtual character can be controlled to perform a probing step, which helps mount a better attack and achieve the goal of scoring. The probing step may refer to the following: when the ball-holding, attacking virtual character faces the basket or faces away from it, the virtual character can be controlled to take a tentative step by pivoting on one foot while the other foot steps along an arc. This probing step may be either a small step or a large step.
The control process for the probing step in the related art is as follows: the virtual character is controlled to perform the probing-step action through dual-joystick input. This control mode requires a relatively skilled technique from the player and demands a high degree of coordination between the two joysticks, so operation efficiency is relatively low and misoperations are relatively frequent.
Furthermore, the probing steps performed in the related art are limited in direction selection: generally the probing-step action can only be performed forward or to the right. With so few direction choices, the skill poses little threat. As a result, human-machine interaction efficiency is low, the duration of a game session may increase, and the consumption of computing resources rises.
To address at least one of the above problems, the present application provides a virtual character control scheme to improve human-machine interaction efficiency and reduce the consumption of computing resources.
First, names involved in the embodiments of the present application will be described.
Terminal equipment:
The terminal device in the embodiments of the present application mainly refers to an electronic device capable of providing a user interface (User Interface) for human-machine interaction. In an exemplary application scenario, the terminal device may be used to present game images (e.g., a relevant setting/configuration interface in a game, or an interface presenting a game scene) and may be an intelligent device capable of performing control operations on virtual characters. The terminal device may include, but is not limited to, any of the following: smartphones, tablet computers, portable computers, desktop computers, game consoles, Personal Digital Assistants (PDAs), electronic book readers, MP4 (Moving Picture Experts Group Audio Layer IV) players, and the like. The terminal device has installed and running in it an application program supporting a game scene, such as an application program supporting a three-dimensional game scene. The application may include, but is not limited to, any of a virtual reality application, a three-dimensional map program, a military simulation program, a MOBA game (Multiplayer Online Battle Arena), a multiplayer warfare survival game, or a third-person shooter game (TPS, Third-Person Shooting Game). Alternatively, the application may be a stand-alone application, such as a stand-alone 3D (Three-Dimensional) game program, or a networked online application.
Graphical user interface:
A graphical user interface is an interface format through which a person communicates with a computer. It allows a user to manipulate icons, logos, or menu options on a screen using an input device such as a mouse, keyboard, and/or joystick, and also allows a user to manipulate icons or menu options by performing touch operations on the touch screen of a touch terminal, so as to select a command, start a program, or perform some other task. In a game scenario, a game scene interface and a game configuration interface may be displayed in the graphical user interface.
Virtual interaction scenario:
A virtual interaction scene is a virtual environment that an application displays (or provides) when running on a terminal device or server, such as a game scene provided in a game session. Optionally, the virtual interaction scene is a simulation of the real world, a semi-simulated and semi-fictional virtual environment, or a purely fictional virtual environment. The virtual scene may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment, and the virtual environment may be sky, land, sea, or the like. The virtual interaction scene is a scene in which the complete game logic of user-controlled virtual characters runs. Optionally, the virtual scene is also used for virtual-environment combat between at least two virtual objects, and virtual resources available to at least two virtual characters are arranged in it. By way of example, a game scene may include any one or more of the following elements: game background elements, game object elements, game prop elements, game material elements, and the like.
Virtual characters:
A virtual character refers to a dynamic object that can be controlled in a game scene. Alternatively, the dynamic object may be a virtual human figure, a virtual animal, a cartoon character, or the like. There may be multiple virtual characters in the virtual scene, each of which is a virtual character manipulated by a player (i.e., a character the player controls through an input device or touch screen), an artificial intelligence (Artificial Intelligence, AI) set up in the virtual-environment combat through training, or a non-player character (Non-Player Character, NPC) set up in the game-scene combat. Alternatively, the virtual character may be a character competing in the game scene. Optionally, the number of virtual characters in the game-scene combat is preset, or is dynamically determined according to the number of clients joining the combat, which is not limited by the embodiments of the present application. In one possible implementation, a user can control a virtual character to move in the virtual scene, e.g., to run, jump, or crawl, and can also control the virtual character to fight other virtual characters using the skills, virtual props, and so on provided by the application. Alternatively, when the virtual environment is a three-dimensional virtual environment, the virtual characters may be three-dimensional virtual models, each having its own shape and volume in the three-dimensional virtual environment and occupying part of its space. Optionally, the virtual character is a three-dimensional character constructed on three-dimensional human-skeleton technology, which presents different external appearances by wearing different skins. In some implementations, the virtual character may also be implemented using a 2.5-dimensional or 2-dimensional model, and the embodiments of the application are not limited in this regard.
The virtual character control method in one embodiment of the application can be operated on a local terminal device or a server. When the method is run on a server, the method can be implemented and executed based on a cloud interaction system, wherein the cloud interaction system comprises the server and the client device.
In an alternative embodiment, various cloud applications may run under the cloud interaction system, for example cloud games. Taking cloud games as an example, a cloud game refers to a game mode based on cloud computing. In the cloud-game operation mode, the entity that runs the game program is separated from the entity that presents the game picture: the storage and execution of the virtual character control method are completed on the cloud game server, while the client device receives and sends data and presents the game picture. For example, the client device may be a display device with a data transmission function close to the user side, such as a mobile terminal, a television, a computer, or a palmtop computer, while the cloud game server in the cloud performs the information processing. When playing, the player operates the client device to send operation instructions to the cloud game server; the cloud game server runs the game according to the instructions, encodes and compresses data such as the game pictures, and returns them to the client device through the network; finally the client device decodes the data and outputs the game pictures.
In an alternative embodiment, taking a game as an example, the local terminal device stores a game program and is used to present a game screen. The local terminal device is used for interacting with the player through the graphical user interface, namely, conventionally downloading and installing the game program through the electronic device and running. The manner in which the local terminal device provides the graphical user interface to the player may include a variety of ways, for example, may be rendered for display on a display screen of the terminal, or provided to the player by holographic projection. For example, the local terminal device may include a display screen for presenting a graphical user interface including game visuals, and a processor for running the game, generating the graphical user interface, and controlling the display of the graphical user interface on the display screen.
In a possible implementation manner, the embodiment of the present application provides a method for controlling a virtual character, and a graphical user interface is provided through a terminal device, where the terminal device may be a local terminal device (such as a local touch terminal) mentioned above, or may be a client device in a cloud interaction system mentioned above.
In order to facilitate understanding of the present application, the virtual character control method, device, electronic device, and storage medium provided in the embodiments of the present application are described in detail below.
Referring to fig. 1, a flowchart of a virtual character control method according to an exemplary embodiment of the present application is shown. The method is generally applied to a game server, for example the cloud game server described above, and may also run on a local terminal device (hereinafter referred to as a terminal device); the present application is not limited in this respect.
A virtual interaction scene for enabling the virtual characters of different camps to interact is provided on a graphical user interface of the terminal device, and the virtual interaction scene has a virtual sphere contended for by the virtual characters of different camps.
In the embodiment of the present application, taking a virtual game with two camps as an example, the two camps compete for the virtual sphere through antagonistic interaction, trigger scoring events with the virtual sphere, and thereby settle the scores of the different camps. When a specified condition is met, settlement of the current game session can be triggered. For example, a game-ending time can be preset: when that time node is reached, the session is settled directly, and the camp with the higher score at the game-ending time wins. Alternatively, a game-ending score can be preset: when a certain camp reaches the game-ending score first, the game is determined to be over, the camp that reached it wins, the antagonistic interaction ends, and the settlement score determines which camp wins and which loses.
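The two settlement conditions just described (a preset game-ending time, or a camp reaching a preset game-ending score first) can be sketched as follows. This is a minimal illustration, not code from the patent; the function name and thresholds are hypothetical:

```python
def settle_match(elapsed_time, scores, end_time, end_score):
    """Decide whether the match ends under the two conditions above.

    scores: dict mapping camp name -> current score.
    Returns the winning camp, or None if the match continues.
    """
    # Condition 1: the preset game-ending time is reached; the camp
    # with the higher score at that moment wins.
    if elapsed_time >= end_time:
        return max(scores, key=scores.get)
    # Condition 2: a camp reaches the preset game-ending score first.
    for camp, score in scores.items():
        if score >= end_score:
            return camp
    return None  # neither condition met: the match continues


print(settle_match(300, {"A": 18, "B": 21}, 600, 21))  # camp B reaches the end score
```

A real implementation would also need a tie-breaking rule for equal scores at the time limit, which the description above does not specify.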
In one possible implementation, after a virtual character obtains control of the virtual sphere, it may perform a score interaction operation on the virtual sphere at a suitable time (for example, when the virtual character is close to the target position and is not blocked by other virtual characters). After the virtual character performs the score interaction operation, the virtual sphere is controlled to move towards the target position of the virtual interaction scene; if the virtual sphere enters the range of the target position, the virtual character and the camp it belongs to may obtain corresponding scores.
For example, when the virtual ball is a virtual basketball, the virtual basketball is controlled to enter the virtual basket (the target position) through the score interaction for the virtual basketball; when the virtual ball is a virtual football, the virtual football is controlled to enter the virtual goal (the target position) through the score interaction for the virtual football; and so on.
In one possible implementation, when the virtual character performs the scoring interaction on the virtual sphere and the virtual sphere enters the area of the target position, the score is calculated according to the position of the virtual character at the moment the scoring interaction was performed. For example, when the virtual sphere is a virtual basketball, whether the goal counts as a three-point shot or a two-point shot is determined according to the position of the virtual character (whether it is located outside the three-point line).
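This position-based scoring rule could be sketched as below. The patent does not specify how the three-point check is made; this sketch assumes a simple distance from the basket, and the radius value is an assumption, not from the source:

```python
import math

# Assumed distance of the three-point line from the basket, in metres.
THREE_POINT_RADIUS = 6.75


def shot_score(shooter_pos, basket_pos, scored):
    """Return the points awarded for a shot, based on where the
    shooter stood when the score interaction was performed."""
    if not scored:
        return 0
    dist = math.dist(shooter_pos, basket_pos)
    # Outside the three-point line -> three points, otherwise two.
    return 3 if dist > THREE_POINT_RADIUS else 2


print(shot_score((7.0, 0.0), (0.0, 0.0), True))  # beyond the line: 3
print(shot_score((3.0, 2.0), (0.0, 0.0), True))  # inside the line: 2
```

In an actual game the check would more likely query the court geometry (the line is not a perfect circle near the baseline), but the principle of scoring from the shooter's recorded position is the same.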
In one possible implementation, for a virtual character, a player may control the virtual character to perform corresponding skills in the virtual interaction scene through a plurality of skill controls displayed on the graphical user interface, and these skill controls may correspond one-to-one to the skills the virtual character possesses. The basic skills of different virtual characters may be the same; for example, taking the virtual sphere as a virtual basketball, the basic skills may include shooting, passing, laying up, grabbing the ball, and so on. This is not limiting, however: different virtual characters may have different character skills, so when a player selects or manipulates different virtual characters, the number of skill controls displayed in the graphical user interface and the specific skill corresponding to each control may differ.
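The one-to-one mapping between skill controls and skills could be represented per character as sketched below. All skill names here are hypothetical placeholders; the source only says that basic skills are shared while character skills may differ:

```python
# Basic skills shared by all virtual characters (names are illustrative).
BASIC_SKILLS = ["shoot", "pass", "layup", "steal"]

# Hypothetical per-character skills; real names are game-specific.
CHARACTER_SKILLS = {
    "guard": ["crossover"],
    "center": ["block"],
}


def skill_controls(character):
    """Skill controls shown on the GUI map one-to-one onto the
    character's skills: the shared basic skills plus any character skills."""
    return BASIC_SKILLS + CHARACTER_SKILLS.get(character, [])


print(skill_controls("guard"))  # ['shoot', 'pass', 'layup', 'steal', 'crossover']
```

A character with no entry in `CHARACTER_SKILLS` simply gets the basic set, which matches the idea that the number of displayed controls varies with the selected character.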
Referring to fig. 1, in step S101, a controlled virtual character is controlled to enter a heuristic behavior state in response to a first operation for the controlled virtual character.
For example, a plurality of first virtual characters and a plurality of second virtual characters may be included in the virtual interaction scene, the plurality of first virtual characters belonging to a first camp, the plurality of second virtual characters belonging to a second camp. Multiple virtual characters in the same camp can be controlled by one game account, different game accounts, or a combination of the game account and the AI (or NPC), which is not limited by the present application.
The controlled virtual character may be a first virtual character currently controlled by a first game account, where the first game account may refer to a game account logged in on the terminal device, and the controlled virtual character is a virtual character on the ball-holding, attacking side that can be controlled through the terminal device.
In the embodiment of the present application, the first operation refers to an operation for controlling the controlled virtual character to enter the heuristic behavior state. For example, the first operation may be an operation performed through an external input device connected to the terminal device, and for a case where the terminal device has a touch screen, the first operation may include a touch operation for a target skill control, which may be a skill control displayed on a graphical user interface for triggering the controlled virtual character to enter a heuristic behavior state.
In an alternative example, the skill triggered by the target skill control may be a basic skill possessed by all of the different virtual characters in the game, or may be a character skill possessed only by the preset controlled virtual character, which is not limited by the present application.
In one case, the target skill control may be resident on a graphical user interface.
For example, the target skill control is displayed while the virtual interaction scene is loaded on the graphical user interface.
In another case, the target skill control is displayed on the graphical user interface after the controlled virtual character holds the ball.
For example, the target skill control is displayed on the graphical user interface after the controlled virtual character holds the virtual sphere (or at the moment it receives the virtual sphere). In addition, the target skill control may be displayed when the controlled virtual character holds the virtual sphere as the attacking side (or as the defending side).
In one possible implementation, the first operation may be a long-press operation on the target skill control; for example, at the moment the controlled virtual character receives the virtual sphere, the long-press operation on the target skill control controls the controlled virtual character to enter the heuristic behavior state.
In step S102, while the controlled virtual character is in the heuristic behavior state, the controlled virtual character is controlled to perform a heuristic action in response to a second operation for the controlled virtual character.
In the embodiment of the present application, the heuristic behavior state refers to a state in which the virtual character is allowed to perform heuristic actions: when the controlled virtual character is in the heuristic behavior state, it may perform heuristic actions; when it is not, it may not.
When the first operation is a touch operation, the controlled virtual character may remain in the heuristic behavior state for the duration of the touch operation; for example, while the target skill control is long-pressed and not released, the controlled virtual character stays in the heuristic behavior state. Likewise, when the first operation is performed through an external input device, the controlled virtual character may be kept in the heuristic behavior state while a target key on the device is pressed and not released.
In addition, the controlled virtual character may enter the heuristic behavior state after the target skill control and/or the target key is triggered momentarily, and the state may be maintained after the control or key is released, which is not limited by the present application.
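The two entry modes described above (hold-to-stay versus momentary toggle) can be sketched minimally as follows; the class and method names are illustrative, not from the patent.

```python
# Minimal sketch of the two entry modes for the heuristic behavior state:
# hold mode keeps the state active only while the control/key is pressed;
# toggle mode keeps it active after release until an explicit exit.
class HeuristicState:
    def __init__(self, hold_mode=True):
        self.hold_mode = hold_mode
        self.active = False

    def on_press(self):
        # Pressing the target skill control / target key enters the state.
        self.active = True

    def on_release(self):
        if self.hold_mode:
            # Hold mode: releasing the control exits the state.
            self.active = False
        # Toggle mode: the state persists until an explicit exit operation.
```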
The process of controlling the controlled avatar to perform a heuristic action is described below in connection with fig. 2-4.
Fig. 2 shows a flowchart of the steps provided by an exemplary embodiment of the present application to demonstrate a first skill animation.
As shown in fig. 2, in step S201, based on a second operation for the controlled virtual character, an interaction location parameter corresponding to the controlled virtual character is acquired.
By way of example, the second operation may include an operation for controlling the movement of the controlled virtual character in the virtual interactive scene. The second operation may be an operation on a joystick of the external input device, or an operation on a movement control displayed on the graphical user interface (e.g., a toggle operation on a virtual joystick), to which the present application is not limited.
The process of determining the interaction position parameter is described below with reference to fig. 3, taking the second operation as a toggle operation on the virtual joystick as an example.
Fig. 3 is a flowchart illustrating steps for determining interaction location parameters corresponding to a controlled virtual character according to an exemplary embodiment of the present application.
As shown in fig. 3, in step S2011, a manipulation position parameter for the controlled virtual character is determined based on the position to which the virtual joystick is toggled under the second operation.
In the embodiment of the present application, the manipulation position parameter characterizes the manipulation direction for the controlled virtual character in the heuristic behavior state. The manipulation position parameter may be represented as a vector; for example, it includes a second vector pointing from a reference position of the virtual joystick (the original position of the joystick when it is not toggled) to the second position to which the joystick is toggled.
In step S2012, a ball-holding position parameter corresponding to the controlled virtual character is determined based on the position at which the controlled virtual character holds the ball relative to a target position in the virtual interactive scene.
In the embodiment of the application, the ball-holding position parameter characterizes the movement direction of the controlled virtual character in the virtual interactive scene in the heuristic behavior state. The ball-holding position parameter may be represented as a vector; for example, it includes a first vector pointing from a first position of the controlled virtual character in the virtual interactive scene to a target position. The target position is a pre-specified position in the virtual interactive scene, preferably a position at which the virtual sphere can score, such as the position of the virtual basket or virtual goal.
In step S2013, according to the manipulation position parameter and/or the ball holding position parameter, an interaction position parameter corresponding to the controlled virtual character is determined.
In the first embodiment, one of the manipulation position parameter and the ball holding position parameter may be used as the interaction position parameter.
For example, based on the second position to which the virtual joystick is toggled, the controlled virtual character is controlled to perform a heuristic action along the direction indicated by that position; e.g., the controlled virtual character steps once in the virtual interactive scene toward the direction in which the joystick is toggled.
Alternatively, a mapping between a plurality of movement orientations and a plurality of action directions of the heuristic action may be preset. Based on the movement orientation of the controlled virtual character in the virtual interactive scene, the corresponding action direction is then looked up in this mapping, and the controlled virtual character is controlled to step once in that direction in the virtual interactive scene.
In a second embodiment, the interaction position parameter may be determined from both parameters together.
In one case, the virtual joystick is toggled while in the heuristic behavior state.
In this case, in response to the second operation (e.g., a toggle operation on the virtual joystick is detected), the real-time angle between the first vector and the second vector may be computed, and the angle interval in which this value falls is taken as the interaction position parameter corresponding to the controlled virtual character.
Here, the angle value between the two vectors can be determined by using various existing methods, which will not be described in detail in the present application.
In the embodiment of the application, a plurality of angle intervals may be preset, together with a correspondence between these intervals and a plurality of action directions. After the real-time angle value is determined, the interval into which it falls is identified, and the controlled virtual character is controlled to perform the heuristic action in the action direction corresponding to that interval.
By way of example, the full circle may be divided into four angle intervals: the first from -45° to 45°, the second from 45° to 135°, the third from 135° to -135° (crossing ±180°), and the fourth from -135° to -45°. Whether each interval includes its upper or lower bound may be set as required, provided the intervals together cover the full circle.
Accordingly, the motion direction corresponding to the first angle section may be forward, the motion direction corresponding to the second angle section may be rightward, the motion direction corresponding to the third angle section may be backward, and the motion direction corresponding to the fourth angle section may be leftward.
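A minimal sketch of the angle computation and interval lookup described above, assuming signed angles in degrees in (-180°, 180°]; the function names, the 2D vector representation, and the sign convention (counterclockwise positive) are illustrative choices, not specified by the patent.

```python
import math

def signed_angle_deg(v1, v2):
    # Signed angle from vector v1 to vector v2 in degrees, via atan2 of
    # the 2D cross and dot products; result lies in (-180, 180].
    cross = v1[0] * v2[1] - v1[1] * v2[0]
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.atan2(cross, dot))

def action_direction(angle):
    # Map the real-time angle value to the four intervals of the example.
    # Closing each interval on its lower bound is one possible choice of
    # boundary ownership; the patent leaves this to the implementer.
    if -45 <= angle < 45:
        return "forward"    # first interval
    if 45 <= angle < 135:
        return "right"      # second interval
    if -135 <= angle < -45:
        return "left"       # fourth interval
    return "backward"       # third interval, crossing +/-180 degrees
```

Here `v1` would be the first vector (character to basket) and `v2` the second vector (joystick origin to toggled position).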
In another case, the virtual joystick is not toggled while in the heuristic behavior state.
In this case, if no toggle operation on the virtual joystick is detected, a preset angle value may be taken as the interaction position parameter corresponding to the controlled virtual character; the magnitude of this preset value may be set as required and is not limited by the present application. Preferably, when the interaction position parameter is the preset angle value, the controlled virtual character's behavior in the virtual interactive scene may be a small-amplitude sway, whose amplitude and direction may be default values, or may be determined from the ball-holding position parameter; the application is not limited in this respect.
This scheme supports the virtual character in performing more directional probe-step actions in the virtual interactive scene, enriching game skills, accelerating game progress, and reducing the consumption of computing resources.
Returning to fig. 2, in step S202, a first skill animation corresponding to the interaction location parameter is determined and presented.
In the embodiment of the present application, the controlled virtual character performing a heuristic action in the virtual interactive scene may be embodied as playing an animation that represents the action; that is, the first skill animation is an animation that controls the controlled virtual character to perform the heuristic action once in the virtual interactive scene.
Different interaction position parameters correspond to different first skill animations. For example, a plurality of first skill animations corresponding to different interaction position parameters may be preconfigured, and in step S202 the first skill animation corresponding to the currently determined interaction position parameter is selected from among them.
In a preferred embodiment of the present application, while the controlled virtual character is in the heuristic behavior state, it may further be controlled to perform a plurality of heuristic actions based on the second operation.
For example, each first skill animation may be configured with a switching event for triggering a loop into another first skill animation. When a first skill animation plays to its switching event and the controlled virtual character is still in the heuristic behavior state, playback switches to another first skill animation, and so on until the controlled virtual character exits the heuristic behavior state.
The interaction position parameter used to select the first skill animation may be obtained as follows.
In one case, when the controlled virtual character performs the heuristic action for the first time after entering the heuristic behavior state, the current interaction position parameter is read directly to determine the corresponding first skill animation.
In another case, when the heuristic action is not being performed for the first time after entering the heuristic behavior state, the interaction position parameter is sampled at the moment the switching event configured in the currently playing first skill animation is triggered.
For example, an animation editor in the terminal device may be used to configure a switching event (e.g., a Switch Loop Next Anim event). When the first skill animation plays to the switching event configured in the animation editor, the interaction position parameter at that moment is read and another first skill animation corresponding to it is determined; after the current animation finishes, playback switches to that animation. This appears as the controlled virtual character automatically performing multiple probe-step actions in the virtual interactive scene, repeating until the player releases the long-pressed probe-step skill button.
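The looped playback just described can be sketched as follows, with the playback, input, and clip-selection details abstracted into callables; all names are illustrative assumptions, not from the patent.

```python
# Sketch of the looped probe-step playback: the first clip is chosen
# directly on entry; each later clip is chosen from the interaction
# position parameter sampled at the switching event, and the loop ends
# when the long press is released.
def play_heuristic_loop(get_interaction_param, is_button_held,
                        pick_clip, play_until_switch_event):
    played = []
    clip = pick_clip(get_interaction_param())  # first-time direct read
    while True:
        play_until_switch_event(clip)          # play up to the switch event
        played.append(clip)
        if not is_button_held():               # player released the long press
            return played
        clip = pick_clip(get_interaction_param())  # re-sample at the event
```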
In a preferred embodiment of the present application, the function of performing the heuristic action may be split between the terminal device and the server, with the main logic implemented by the client (terminal device); that is, the control method described above may be performed on the terminal device logged into the first game account. For example, the steps of obtaining the interaction position parameter and determining the corresponding first skill animation may be performed on the terminal device. The server is mainly responsible for supplying and replacing skill animations, ensuring that each skill animation segment is independent and that separate skill effects can be configured for different interaction position parameters.
Fig. 4 shows one of the flowcharts of the interaction between the terminal device and the server provided in the exemplary embodiment of the present application.
As shown in fig. 4, in step S2021, a first skill animation corresponding to the interaction location parameter is determined according to the behavioral animation logic stored on the terminal device.
In the embodiment of the present application, the behavior animation logic indicates the correspondence between a plurality of interaction position parameters and a plurality of first skill animations. For the four-interval example, a first skill animation may be produced for each interval's action direction: the first angle interval is configured with an animation in which the virtual character performs the heuristic action forwards, the second interval with one performing it to the right, the third with one performing it backwards, and the fourth with one performing it to the left.
Preferably, the behavior animation logic may be built as a behavior tree, in which each node corresponds to an interaction position parameter. When the controlled virtual character is in the heuristic behavior state, the behavior tree is entered with the obtained interaction position parameter to select the corresponding first skill animation for playback, and jumps between nodes of the behavior tree are driven by the switching events configured in the first skill animations.
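A minimal stand-in for that behavior-tree lookup: one leaf per angle interval, with re-entry at each switching event jumping playback to the leaf for the freshly sampled interval. The class and clip names are illustrative assumptions.

```python
# One leaf per angle interval; enter() is called on entering the
# heuristic behavior state and again at every switching event, so a
# changed interval selects a different first skill animation.
class ClipTree:
    def __init__(self, clips_by_interval):
        # e.g. {0: "jab_forward", 1: "jab_right", 2: "jab_backward", 3: "jab_left"}
        self._clips = dict(clips_by_interval)

    def enter(self, interval):
        return self._clips[interval]
```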
In step S2022, the terminal device transmits a first request to the server.
After determining the first skill animation, the terminal device generates a first request and sends it to the server; the first request is used to obtain the first skill animation corresponding to the interaction position parameter. For example, the first request may carry an animation identifier indicating the first skill animation corresponding to the currently triggered heuristic action, or it may carry an action identifier indicating the action direction of that heuristic action; the application is not limited in this respect.
In step S2023, the server invokes the first skill animation indicated by the first request.
Illustratively, the server may store skill animations for the plurality of skills possessed by each virtual character; each skill animation can be modified and replaced through operations on the server, and individual skill effects can be configured for different skills. For example, the server stores the first skill animations for the forward, backward, leftward, and rightward heuristic actions; it then determines the first skill animation indicated by the first request from the stored animations according to the animation identifier or action identifier carried in the request.
In step S2024, the server pushes the first skill animation indicated by the first request to the terminal device.
Here, a communication connection has been established between the server and the terminal device for data transmission, at which time the first skill animation may be transmitted to the terminal device via the established communication connection.
In step S2025, the terminal device displays the first skill animation. On the graphical user interface, the controlled virtual character then appears to perform the heuristic action of the first skill animation in the virtual interactive scene, such as stepping once in a certain direction and returning to its original position.
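The S2021-S2025 exchange can be sketched as follows, with an in-memory dict standing in for the server's animation store and the clip identifiers assumed for illustration; real transport and storage details are omitted.

```python
# Stand-in for the server's stored first skill animations.
SERVER_CLIPS = {
    "jab_forward": "<forward jab-step clip data>",
    "jab_backward": "<backward jab-step clip data>",
}

def server_handle_first_request(request):
    # S2023/S2024: invoke and push back the clip named in the first request.
    return SERVER_CLIPS[request["animation_id"]]

def client_play_jab(interaction_param, resolve_clip):
    animation_id = resolve_clip(interaction_param)                      # S2021: local behavior logic
    clip = server_handle_first_request({"animation_id": animation_id})  # S2022-S2024: request/push
    return ("display", clip)                                            # S2025: play on the client
```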
Returning to fig. 1, in step S103, in response to the controlled virtual character exiting the heuristic behavior state, the controlled virtual character is controlled to execute the target skill in the virtual interaction scenario based on the action parameters of the heuristic action performed by the controlled virtual character.
Here, the controlled virtual character may be controlled to exit the heuristic behavior state in response to a third operation, or in response to the end of the first operation described above, e.g., when the touch operation on the target skill control ends (the long press on the target skill control is released).
In a preferred embodiment of the present application, a preset first skill animation may further be configured with an engagement point for triggering the target skill. The preset first skill animations may include all, or only some, of the first skill animations; for example, they may be the animations whose heuristic-action parameters satisfy a specified action direction. Illustratively, the first skill animations whose action parameters satisfy the forward and backward action directions are configured with engagement points.
FIG. 5 illustrates a flowchart of the steps provided by an exemplary embodiment of the present application for controlling a skill object to perform a target skill in a virtual interactive scenario.
As shown in fig. 5, in step S301, the interaction position parameter of the controlled virtual character at the moment it exits the heuristic behavior state is obtained and determined as the pre-input for the target skill.
In embodiments of the present application, the target skill is linked to the heuristic action: the target skill is triggered after the heuristic action is performed, and which target skill is triggered is determined by the interaction position parameter. Illustratively, the interaction position parameter of the controlled virtual character at the moment it exits the heuristic behavior state is taken as an input to the target skill.
In step S302, in response to detecting the configured engagement point in the first skill animation corresponding to the pre-input, the controlled virtual character is controlled to perform the target skill in the virtual interactive scene.
As described above, different first skill animations represent heuristic actions performed in different action directions, and playing different first skill animations is equivalent to controlling the controlled virtual character to perform the heuristic action multiple times in the virtual interactive scene. If the currently playing first skill animation is configured with an engagement point, playback switches from that animation to a second skill animation representing the target skill.
Fig. 6 shows a second flowchart of interaction between a terminal device and a server according to an exemplary embodiment of the present application.
As shown in fig. 6, in step S3021, engagement points are detected in a first skill animation.
Here, the first skill animation is a first skill animation that is currently being played based on the interaction location parameters of the controlled virtual character.
In step S3022, the terminal device sends a second request to the server.
If the terminal device detects the engagement point in the first skill animation, it generates the second request; if it does not, the target skill is not triggered.
Here, the second request is used to obtain the second skill animation corresponding to the target skill. Illustratively, the second request carries a prompt identifier indicating the action parameter of the pre-input first skill animation, i.e., the first skill animation currently playing, or the last one played when the heuristic behavior state was exited. The action parameter may indicate the action direction of the heuristic action in that animation, and different action directions trigger different target skills.
In step S3023, the server invokes a second skill animation indicated by the second request.
For example, the server may store a correspondence between different action directions and different second skill animations, each second skill animation characterizing the corresponding target skill; that is, different action directions yield different target skills. The server then invokes, in response to the second request, the second skill animation indicated by the prompt identifier. Here, a second skill animation is an animation that controls the skill object to perform the target skill in the virtual interactive scene.
In step S3024, the server pushes a second skill animation indicated by the second request to the terminal device.
For example, the second skill animation may be transmitted to the terminal device over a communication connection established between the server and the terminal device.
In step S3025, the terminal device displays the second skill animation. On the graphical user interface, the target skill is then engaged after the controlled virtual character performs the heuristic action of the first skill animation in the virtual interactive scene.
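The engagement-point exchange (S3021-S3025) can be sketched as follows, assuming that only the backward and forward directions carry engagement points and that the direction-to-clip table and clip names are illustrative, not from the patent.

```python
# Stand-in for the server's direction-to-target-skill table.
TARGET_SKILL_CLIPS = {
    "backward": "shoot_clip",    # backward probe step chains into a shot
    "forward": "stagger_clip",   # forward probe step staggers the defender
}

def on_engagement_point(action_direction):
    # S3022-S3024: the client sends the pre-input's action direction as
    # the prompt identifier; the server returns the matching clip.
    clip = TARGET_SKILL_CLIPS.get(action_direction)
    if clip is None:
        return None              # no engagement point configured: no target skill
    return ("display", clip)     # S3025: the client plays the second skill animation
```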
In the embodiment of the application, the triggered target skill gives the probe step more functionality, so that the probe step carries an attack threat and can become a regular offensive move. Illustratively, the target skills may include gain skills, i.e., beneficial effects that help the controlled virtual character score with the virtual sphere.
In one case, the target skill is a skill that brings a gain to the attack of the own party.
In this case, the skill object is the controlled virtual character itself. When the pre-input first skill animation plays to the engagement point configured in it, the first skill animation is interrupted and the controlled virtual character is directly controlled to perform a skill action that enhances its attack strength.
Here, the engagement point may be positioned where the heuristic action is complete but the animation is not, which guarantees the integrity of the heuristic action: even when the first skill animation is interrupted, the controlled virtual character has performed the corresponding heuristic action in the virtual interactive scene.
For example, when the action parameters of the heuristic action of the first skill animation satisfy a first designated action direction, the target skill is a skill for improving the attack strength of the controlled virtual character. The first designated action direction includes a backward probe step, and the target skill is shooting, which may be expressed as: the controlled virtual character performs a heuristic action backwards and directly chains into a shot.
In another case, the target skills are skills that bring reduced benefit to the defense of the enemy.
In this case, the skill object is a preset defending character, i.e., the one of the plurality of second virtual characters in the hostile camp in the best defending position, e.g., the closest second virtual character. When the pre-input first skill animation plays to its engagement point, the controlled virtual character is controlled to release the target skill so that the preset defending character performs a skill action that weakens its defending strength; the pre-input first skill animation continues playing while the preset defending character performs this action.
For example, when the action parameters of the heuristic action of the first skill animation satisfy a second designated action direction, the target skill is a skill for weakening the defending strength of the preset defending character. The second designated action direction includes a forward probe step, and the target skill staggers (applies hit-stun to) the preset defending character, which may be expressed as: the controlled virtual character performs a heuristic action forwards and inflicts a stagger effect on the preset defending character (e.g., the preset defending character is forced back a small step).
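The two engagement behaviors differ in whether the jab clip is interrupted and who performs the resulting action; a sketch under the assumption that only backward and forward carry engagement points, with all field names illustrative:

```python
# Backward probe step: gain skill, interrupts its own clip and shoots.
# Forward probe step: debuff skill, own clip keeps playing while the
# nearest defender plays the stagger action.
def resolve_engagement(action_direction):
    if action_direction == "backward":
        return {"interrupt_first_clip": True,
                "skill": "shoot",
                "skill_object": "self"}
    if action_direction == "forward":
        return {"interrupt_first_clip": False,
                "skill": "stagger",
                "skill_object": "nearest_defender"}
    return None  # no engagement point configured for other directions
```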
In a preferred embodiment of the present application, the handling and presentation of the probe step are also optimized to support skill release under weak-network conditions. Illustratively, for weak-network states, at least one break point is also configured near the end of the first skill animation performing the heuristic action, to ensure that the player's input is responded to. For example, the break points include a first break point and a second break point, the first break point located before the second break point, and the second break point at the end of the first skill animation.
Fig. 7 shows a third flowchart of interaction between a terminal device and a server according to an exemplary embodiment of the present application.
As shown in fig. 7, in step S3031, a first break point is detected in a first skill animation.
For example, a first skill animation may be configured with only a break point, only an engagement point, or both; preferably, the break point is configured before the engagement point and after the heuristic action is performed.
In step S3032, the terminal device sends a third request to the server.
If the terminal device detects the first break point in the first skill animation, it generates a third request; if it does not, no subsequent processing is triggered.
Here, the third request carries an action prompt identifier and a break-point prompt identifier: the action prompt identifier indicates the action parameter (such as the action direction of the heuristic action) of the pre-input first skill animation, and the break-point prompt identifier indicates the currently detected break point (i.e., whether it is the first break point or the second break point).
In step S3033, the server invokes a second skill animation indicated by the third request.
Here, the server may invoke the second skill animation indicated by the action prompt identifier in response to the third request, and for example, for a case where only the break point is configured in the first skill animation, the second skill animation may refer to an animation that controls the skill object to perform the target skill in the virtual interactive scene, and for a case where the break point and the break point are configured in the first skill animation, the target skill corresponding to the break point may include a break-through, the second skill animation corresponds to an animation that controls the controlled virtual character to perform the break-through in the virtual interactive scene.
In step S3034, the server pushes a second skill animation indicated by the third request to the terminal device.
For example, the second skill animation may be transmitted to the terminal device over a communication connection established between the server and the terminal device.
In step S3035, the terminal device detects whether a second skill animation pushed by the server is received.
If the second skill animation pushed by the server is received, step S3036 is executed: the terminal device displays a second skill animation.
Following the above example, the graphical user interface then shows the controlled virtual character chaining into a breakthrough after performing the heuristic action of the first skill animation in the virtual interactive scene. The direction of the breakthrough may be determined from the action direction of the heuristic action, e.g., identical or opposite to it; or it may be determined from the interaction position parameter at the moment the first break point is detected, e.g., with different angle intervals corresponding to different breakthrough directions.
If the second skill animation pushed by the server is not received, step S3037 is executed: a second break point is detected in the first skill animation.
In step S3038, the terminal device sends a fourth request to the server.
If the terminal device detects the second break point in the first skill animation, it generates a fourth request; if it does not, no subsequent processing is performed. Illustratively, the fourth request carries an action prompt identifier and a break-point prompt identifier.
In step S3039, the server invokes a third skill animation indicated by the fourth request.
Here, in response to the fourth request, the server may invoke the third skill animation indicated by the action prompt identifier and the break point prompt identifier, the third skill animation being an animation that controls the controlled virtual character to sequentially perform the heuristic action and the target skill in the virtual interaction scene.
For example, in response to the fourth request, the server may determine the second skill animation indicated by the action prompt identifier, and, on the basis that the break point prompt identifier indicates the currently detected break point is the second break point, splice the first skill animation in step S3031 with the second skill animation in step S3033 to obtain the third skill animation. Alternatively, third skill animations corresponding to different action prompt identifiers may be stored on the server in advance for invocation.
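As a minimal illustrative sketch of the splicing described above, frame lists stand in for real animation clips; the function name and frame representation are assumptions for illustration only.

```python
def splice_third_skill_animation(first_anim, second_anim, break_frame):
    """Cut the first skill animation at the frame of the detected break point
    and append the second skill animation, producing the third skill animation.

    Since the second break point is located at the end of the first skill
    animation, break_frame is typically len(first_anim), so the full heuristic
    action plays before the target skill.
    """
    return first_anim[:break_frame] + second_anim
```

For example, splicing a two-frame heuristic clip with a two-frame breakthrough clip at its final frame yields a four-frame clip in which the heuristic action is followed by the breakthrough.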
In step S3040, the server pushes the third skill animation indicated by the fourth request to the terminal device.
For example, the third skill animation may be transmitted to the terminal device via a communication connection established between the server and the terminal device.
In step S3041, the terminal apparatus displays a third skill animation.
Taking the target skill being a breakthrough as an example, the specific presentation may be: controlling the controlled virtual character to perform the heuristic action of the first skill animation in the virtual interaction scene once more, followed by the linked breakthrough. Here, the manner of determining the breakthrough direction is the same as that in step S3036, and is not repeated here.
Based on the virtual character control scheme of the embodiments of the application, the heuristic-step operation flow under the single-rocker control mode is optimized, so that players can get started more easily; real technical characteristics are restored in the actions, and the presentations of heuristic steps and subsequent breakthroughs are enriched. Meanwhile, more functionality is given to heuristic steps, which enriches the gameplay and improves the enjoyment of the game.
Based on the same inventive concept, an embodiment of the present application further provides a virtual character control device corresponding to the method provided in the foregoing embodiments. Since the principle by which the device solves the problem is similar to that of the virtual character control method in the foregoing embodiments, the implementation of the device may refer to the implementation of the method, and repeated descriptions are omitted.
Fig. 8 is a schematic structural diagram of a virtual character control device according to an exemplary embodiment of the present application. A virtual interaction scene for enabling virtual characters of different camps to interact is provided on a graphical user interface of a terminal device, the virtual interaction scene having a virtual sphere that the virtual characters of the different camps contend for. As shown in fig. 8, the virtual character control device 200 comprises:
a state control module 210, configured to, in response to a first operation for a controlled virtual character, control the controlled virtual character to enter a heuristic behavior state, wherein the controlled virtual character is a first virtual character that is currently controlled by a first game account and performs ball-holding interaction with the virtual sphere;
a first skill control module 220, configured to, while the controlled virtual character is in the heuristic behavior state, control the controlled virtual character to perform a heuristic action in response to a second operation for the controlled virtual character;
a second skill control module 230, configured to, in response to the controlled virtual character exiting the heuristic behavior state, control the controlled virtual character to perform a target skill in the virtual interaction scene based on an action parameter of the heuristic action performed by the controlled virtual character.
In one possible embodiment of the present application, the first operation includes a touch operation for a target skill control, wherein the controlled virtual character is in the heuristic behavior state for the duration of the touch operation, and exits the heuristic behavior state at the end of the touch operation.
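The enter/exit behavior tied to the duration of the touch operation can be sketched as follows. This is an illustrative sketch only; the class and callback names are assumptions and do not appear in the disclosure.

```python
class ControlledCharacter:
    """Tracks whether the controlled virtual character is in the heuristic
    behavior state for the duration of a touch on the target skill control."""

    def __init__(self):
        self.in_heuristic_state = False

    def on_target_skill_touch_start(self):
        # The first operation begins: enter the heuristic behavior state.
        self.in_heuristic_state = True

    def on_target_skill_touch_end(self):
        # The touch operation ends: exit the heuristic behavior state, after
        # which the target skill is triggered based on the action parameters.
        self.in_heuristic_state = False
```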
In one possible implementation of the present application, when the action parameter satisfies a first specified action direction, the target skill is a skill for improving the attack intensity of the controlled virtual character; or, when the action parameter satisfies a second specified action direction, the target skill is a skill for weakening the defending strength of a preset defending character, the preset defending character being the one of a plurality of second virtual characters in the hostile camp that is in the optimal defending position.
In one possible embodiment of the present application, the first skill control module 220 is further configured to: acquire, based on the second operation, the interaction position parameter corresponding to the controlled virtual character; and determine and display a first skill animation corresponding to the interaction position parameter, where the first skill animation refers to an animation that controls the controlled virtual character to perform the heuristic action in the virtual interaction scene.
In one possible embodiment of the present application, the first skill animations corresponding to different interaction position parameters are different, and each first skill animation is configured with a switching event for triggering a loop to another first skill animation, where the first skill control module 220 is further configured to: for the first execution of the skill action, acquire the interaction position parameter directly; and for a non-first execution of the skill action, acquire the interaction position parameter at the moment the switching event configured in the first skill animation corresponding to the previous execution is triggered during playback of that animation.
In a possible embodiment of the present application, the second operation includes a toggle operation for a virtual rocker, wherein the first skill control module 220 is further configured to: determine a manipulation position parameter for the controlled virtual character based on the toggle position of the virtual rocker under the second operation; determine a ball-holding position parameter corresponding to the controlled virtual character based on the ball-holding position of the controlled virtual character relative to a target position in the virtual interaction scene; and determine the interaction position parameter corresponding to the controlled virtual character according to the manipulation position parameter and/or the ball-holding position parameter.
In one possible embodiment of the present application, the ball holding position parameter includes a first vector pointing from a first position where the controlled virtual character is located in the virtual interactive scene to the target position, and the manipulation position parameter includes a second vector pointing from a reference position of the virtual rocker to a second position where the virtual rocker is toggled.
In one possible implementation of the present application, the first skill control module 220 determines the interaction position parameter corresponding to the controlled virtual character by: in response to detecting a toggle operation of the virtual rocker, determining the real-time included angle value between the first vector and the second vector, and determining the angle interval in which the real-time included angle value falls as the interaction position parameter corresponding to the controlled virtual character; and in response to no toggle operation of the virtual rocker being detected, determining a preset included angle value as the interaction position parameter corresponding to the controlled virtual character.
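The included-angle computation and angle-interval lookup described above can be sketched as follows. This is a non-authoritative illustration: the interval boundaries, the 2D-vector representation, and the fallback angle value are assumptions made for the example.

```python
import math

def interaction_position_parameter(first_vec, second_vec, intervals, preset_angle=0.0):
    """Return the index of the angle interval containing the real-time included
    angle (in degrees) between the ball-holding vector (first_vec, pointing
    from the character's position to the target position) and the rocker
    vector (second_vec, pointing from the rocker's reference position to its
    toggled position).

    second_vec is None when no toggle operation of the virtual rocker is
    detected, in which case the preset included angle value is used instead.
    """
    if second_vec is None:
        angle = preset_angle
    else:
        dot = first_vec[0] * second_vec[0] + first_vec[1] * second_vec[1]
        norm = math.hypot(*first_vec) * math.hypot(*second_vec)
        # Clamp to guard against floating-point drift outside [-1, 1].
        angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    for index, (low, high) in enumerate(intervals):
        if low <= angle < high:
            return index
    return None
```

With assumed intervals `[(0, 90), (90, 180)]`, a rocker vector perpendicular to the ball-holding vector (a 90-degree included angle) would fall in the second interval.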
In a possible embodiment of the application, the control device is provided on a terminal device logged into the first game account, wherein the first skill control module 220 is further configured to: generate, according to behavior animation logic stored on the terminal device, a first request for acquiring the first skill animation corresponding to the interaction position parameter, and send the first request to a server, where the behavior animation logic is used for indicating the correspondence between a plurality of interaction position parameters and a plurality of first skill animations; and receive and display the first skill animation indicated by the first request and pushed by the server.
In one possible implementation of the present application, a preset first skill animation is further configured with an engagement point for triggering the target skill, where the preset first skill animation refers to an animation in which the action parameter of the heuristic action satisfies a specified action direction, and the second skill control module 230 is further configured to: acquire the interaction position parameter corresponding to the controlled virtual character at the moment of exiting the heuristic behavior state, and determine the interaction position parameter as a pre-input for the target skill; and in response to detecting the configured engagement point in the first skill animation corresponding to the pre-input, control the controlled virtual character to perform the target skill in the virtual interaction scene.
In one possible embodiment of the present application, the second skill control module 230 controls the controlled virtual character to perform the target skill in the virtual interaction scene by: in response to detecting the engagement point, generating a second request for acquiring a second skill animation corresponding to the target skill, and sending the second request to a server, where the second request carries a prompt identifier used for indicating the action parameter corresponding to the first skill animation corresponding to the pre-input; and receiving and displaying the second skill animation indicated by the prompt identifier and pushed by the server, where the second skill animation is an animation that controls the controlled virtual character to perform the target skill in the virtual interaction scene.
In one possible embodiment of the present application, the second skill control module 230 is further configured to: in response to the first skill animation corresponding to the pre-input being played to the engagement point configured therein, control the first skill animation to be interrupted, and directly control the controlled virtual character to perform the skill action for improving the attack intensity.
In one possible embodiment of the present application, the second skill control module 230 is further configured to: in response to the first skill animation corresponding to the pre-input being played to the engagement point configured therein, control the preset defending character to perform the skill action for weakening the defending strength, where the first skill animation corresponding to the pre-input continues to play while the preset defending character performs the skill action.
In one possible implementation of the present application, break points are further configured in the first skill animation, the break points including a first break point and a second break point, the first break point being located before the second break point and the second break point being located at the end of the first skill animation, where the second skill control module 230 controls the controlled virtual character to perform the target skill in the virtual interaction scene by: in response to detecting the first break point, generating a third request and sending the third request to a server, where the third request carries an action prompt identifier and a break point prompt identifier, the action prompt identifier being used for indicating the action parameter corresponding to the first skill animation corresponding to the pre-input, and the break point prompt identifier being used for indicating the currently detected break point; detecting whether a second skill animation pushed by the server is received; if the second skill animation pushed by the server is received, displaying the second skill animation, where the second skill animation refers to an animation that controls the controlled virtual character to perform the target skill in the virtual interaction scene; if the second skill animation pushed by the server is not received, in response to detecting the second break point, generating a fourth request and sending the fourth request to the server, where the fourth request carries the action prompt identifier and the break point prompt identifier; and receiving and displaying a third skill animation indicated by the fourth request and pushed by the server, where the third skill animation is an animation that controls the controlled virtual character to sequentially perform the heuristic action and the target skill in the virtual interaction scene.
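The terminal-side decision flow of this embodiment can be sketched, in a hedged and simplified form, as a function choosing which request to send; the string request names and numeric break-point labels are illustrative assumptions.

```python
def request_for_break_point(break_point, second_anim_received):
    """Decide which request the terminal device sends to the server.

    break_point: 1 when the first break point is detected, 2 when the second
    break point (at the end of the first skill animation) is detected.
    second_anim_received: whether the second skill animation pushed by the
    server in response to the third request has already arrived.
    """
    if break_point == 1:
        # Third request: carries the action prompt identifier and the
        # break point prompt identifier.
        return "third_request"
    if break_point == 2 and not second_anim_received:
        # Fourth request: the server replies with the spliced third skill
        # animation (heuristic action followed by the target skill).
        return "fourth_request"
    # The second skill animation was already displayed: no further request.
    return None
```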
Based on the above device, user operations can be simplified, thereby improving the efficiency of human-computer interaction.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present application. As shown in fig. 9, the electronic device 300 includes a processor 310, a memory 320, and a bus 330.
The memory 320 stores machine-readable instructions executable by the processor 310. When the electronic device 300 runs, the processor 310 communicates with the memory 320 through the bus 330, and when the machine-readable instructions are executed by the processor 310, the steps of the virtual character control method in any of the foregoing embodiments may be performed, specifically as follows:
Providing a virtual interaction scene for enabling virtual characters of different camps to interact on a graphical user interface of the terminal device, the virtual interaction scene having a virtual sphere that the virtual characters of the different camps contend for; in response to a first operation for a controlled virtual character, controlling the controlled virtual character to enter a heuristic behavior state, where the controlled virtual character is a first virtual character that is currently controlled by a first game account and performs ball-holding interaction with the virtual sphere; while the controlled virtual character is in the heuristic behavior state, controlling the controlled virtual character to perform a heuristic action in response to a second operation for the controlled virtual character; and in response to the controlled virtual character exiting the heuristic behavior state, controlling the controlled virtual character to perform a target skill in the virtual interaction scene based on an action parameter of the heuristic action performed by the controlled virtual character.
Based on the above electronic device, user operations can be simplified, thereby improving the efficiency of human-computer interaction.
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program can perform the steps of the virtual character control method in any of the foregoing embodiments, specifically as follows:
Providing a virtual interaction scene for enabling virtual characters of different camps to interact on a graphical user interface of the terminal device, the virtual interaction scene having a virtual sphere that the virtual characters of the different camps contend for; in response to a first operation for a controlled virtual character, controlling the controlled virtual character to enter a heuristic behavior state, where the controlled virtual character is a first virtual character that is currently controlled by a first game account and performs ball-holding interaction with the virtual sphere; while the controlled virtual character is in the heuristic behavior state, controlling the controlled virtual character to perform a heuristic action in response to a second operation for the controlled virtual character; and in response to the controlled virtual character exiting the heuristic behavior state, controlling the controlled virtual character to perform a target skill in the virtual interaction scene based on an action parameter of the heuristic action performed by the controlled virtual character.
Based on the above computer-readable storage medium, user operations can be simplified, thereby improving the efficiency of human-computer interaction.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system and apparatus may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again. In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer readable storage medium executable by a processor. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily appreciate variations or alternatives within the scope of the present application. Therefore, the protection scope of the application is subject to the protection scope of the claims.

Claims (17)

1. A virtual character control method, characterized in that a virtual interaction scene for enabling virtual characters of different camps to interact is provided on a graphical user interface of a terminal device, the virtual interaction scene having a virtual sphere that the virtual characters of the different camps contend for, the method comprising:
Responding to a first operation aiming at a controlled virtual character, and controlling the controlled virtual character to enter a heuristic behavior state, wherein the controlled virtual character is a first virtual character which is currently controlled by a first game account and carries out ball holding interaction on the virtual sphere;
Controlling the controlled virtual character to execute a heuristic action in response to a second operation for the controlled virtual character when the controlled virtual character is in the heuristic behavior state;
And responding to the controlled virtual character to exit the heuristic behavior state, and controlling the controlled virtual character to execute target skills in the virtual interaction scene based on action parameters of heuristic actions executed by the controlled virtual character.
2. The method of claim 1, wherein the first operation comprises a touch operation for a target skill control, wherein the controlled virtual character is in the heuristic behavior state for the duration of the touch operation, and wherein the controlled virtual character exits the heuristic behavior state at the end of the touch operation.
3. The method of claim 1, wherein the action parameter satisfies a first specified action direction and the target skill is a skill for improving the attack intensity of the controlled virtual character,
or the action parameter satisfies a second specified action direction and the target skill is a skill for weakening the defending strength of a preset defending character, the preset defending character being the one of a plurality of second virtual characters in the hostile camp that is in the optimal defending position.
4. The method according to claim 1 or 3, wherein the controlling the controlled virtual character to perform a heuristic action comprises:
Based on the second operation, acquiring interaction position parameters corresponding to the controlled virtual roles;
and determining and displaying a first skill animation corresponding to the interaction position parameter, wherein the first skill animation refers to an animation for controlling the controlled virtual character to execute a heuristic action in the virtual interaction scene.
5. The method of claim 4, wherein the first skill animations corresponding to different interaction location parameters are different, each first skill animation having configured therein a switch event for triggering a loop to another first skill animation,
The obtaining the interaction position parameter corresponding to the controlled virtual role includes:
for the first execution of the skill action, acquiring the interaction position parameter directly,
and for a non-first execution of the skill action, when the first skill animation corresponding to the previous execution is played to the switching event configured therein, acquiring the interaction position parameter at the moment the switching event is triggered.
6. The method of claim 4, wherein the second operation comprises a toggle operation for a virtual rocker,
The obtaining the interaction position parameter corresponding to the controlled virtual role includes:
determining a manipulation position parameter for the controlled virtual character based on a toggle position of the virtual rocker under a second operation;
based on the ball holding position of the controlled virtual character relative to the target position in the virtual interaction scene, determining a ball holding position parameter corresponding to the controlled virtual character;
and determining interaction position parameters corresponding to the controlled virtual roles according to the control position parameters and/or the ball holding position parameters.
7. The method of claim 6, wherein the ball holding position parameter comprises a first vector pointing from a first position in the virtual interaction scene at which the controlled virtual character is located to the target position,
The manipulation position parameter comprises a second vector pointing from a reference position of the virtual rocker to a second position to which the virtual rocker is toggled.
8. The method of claim 7, wherein the interaction location parameters corresponding to the controlled virtual character are determined by:
In response to detecting a toggle operation of the virtual rocker, determining a real-time included angle value between the first vector and the second vector, and determining an angle interval in which the real-time included angle value is located as an interaction position parameter corresponding to the controlled virtual character;
and in response to no toggle operation of the virtual rocker being detected, determining a preset included angle value as the interaction position parameter corresponding to the controlled virtual character.
9. The method of claim 4, wherein the method is performed on a terminal device logged into the first game account,
Wherein the determining and displaying a first skill animation corresponding to the interaction location parameter comprises:
Generating a first request for acquiring a first skill animation corresponding to the interaction position parameters according to behavior animation logic stored on the terminal equipment, and sending the first request to a server, wherein the behavior animation logic is used for indicating the corresponding relation between a plurality of interaction position parameters and a plurality of first skill animations;
and receiving the first skill animation indicated by the first request and pushed by the server, and displaying the first skill animation.
10. The method of claim 4, wherein a preset first skill animation is further configured with an engagement point for triggering the target skill, wherein the preset first skill animation refers to an animation in which the action parameter of the heuristic action satisfies a specified action direction,
Wherein the controlling the controlled virtual character to perform a target skill in the virtual interaction scene comprises:
Acquiring interaction position parameters corresponding to the controlled virtual roles at the moment of exiting the heuristic behavior state, and determining the interaction position parameters as pre-input aiming at target skills;
And controlling the controlled virtual character to execute target skills in the virtual interaction scene in response to detecting the configured engagement points in the first skill animation corresponding to the pre-input.
11. The method of claim 10, wherein the controlled virtual character is controlled to perform a target skill in the virtual interaction scene by:
Responding to the detection of the engagement point, generating a second request for acquiring a second skill animation corresponding to the target skill, and sending the second request to a server, wherein the second request carries a prompt identifier, and the prompt identifier is used for indicating action parameters corresponding to the first skill animation corresponding to the pre-input;
and receiving and displaying a second skill animation indicated by the prompt identifier pushed by the server, wherein the second skill animation is used for controlling the controlled virtual character to execute the animation of the target skill in the virtual interaction scene.
12. The method of claim 11, wherein the controlling the controlled virtual character to perform a target skill in the virtual interaction scene comprises:
in response to the first skill animation corresponding to the pre-input being played to the engagement point configured in the first skill animation, controlling the first skill animation to be interrupted, and directly controlling the controlled virtual character to perform the skill action for improving the attack intensity.
13. The method of claim 11, wherein the controlling the controlled virtual character to perform a target skill in the virtual interaction scene comprises:
in response to the first skill animation corresponding to the pre-input being played to the engagement point configured in the first skill animation, controlling the preset defending character to perform the skill action for weakening the defending strength, wherein the first skill animation corresponding to the pre-input continues to play while the preset defending character performs the skill action.
14. The method of claim 4, wherein break points are further configured in the first skill animation, the break points comprising a first break point and a second break point, the first break point being located before the second break point and the second break point being located at the end of the first skill animation,
Wherein the controlled virtual character is controlled to perform a target skill in the virtual interaction scene by:
in response to detecting the first break point, generating a third request and sending the third request to a server, wherein the third request carries an action prompt identifier and a break point prompt identifier, the action prompt identifier is used for indicating the action parameter corresponding to the first skill animation corresponding to the pre-input, and the break point prompt identifier is used for indicating the currently detected break point;
Detecting whether a second skill animation pushed by the server is received or not;
If a second skill animation pushed by the server is received, displaying the second skill animation, wherein the second skill animation refers to an animation for controlling the controlled virtual character to execute target skill in the virtual interaction scene;
If the second skill animation pushed by the server is not received, responding to the detection of a second break point, generating a fourth request, and sending the fourth request to the server, wherein the fourth request carries an action prompt identifier and a break point prompt identifier;
and receiving and displaying a third skill animation indicated by the fourth request and pushed by the server, wherein the third skill animation is an animation for controlling the controlled virtual character to sequentially perform the heuristic action and the target skill in the virtual interaction scene.
15. A virtual character control device, characterized in that a virtual interaction scene for enabling virtual characters of different camps to interact is provided on a graphical user interface of a terminal device, the virtual interaction scene having a virtual sphere that the virtual characters of the different camps contend for, the device comprising:
the state control module is used for responding to a first operation aiming at a controlled virtual role, controlling the controlled virtual role to enter a heuristic behavior state, wherein the controlled virtual role is a first virtual role which is currently controlled by a first game account and carries out ball holding interaction on the virtual sphere;
a first skill control module that, in response to a second operation for the controlled virtual character, controls the controlled virtual character to perform a heuristic action while the controlled virtual character is in the heuristic behavior state;
And the second skill control module is used for controlling the controlled virtual role to execute target skills in the virtual interaction scene based on the action parameters of the heuristic action executed by the controlled virtual role in response to the controlled virtual role exiting the heuristic action state.
16. An electronic device, comprising: a processor, a storage medium and a bus, wherein the storage medium stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the storage medium communicate with each other via the bus, and the processor executes the machine-readable instructions to perform the steps of the method of any one of claims 1 to 14.
17. A computer-readable storage medium, characterized in that a computer program is stored thereon, and the computer program, when executed by a processor, performs the steps of the method of any one of claims 1 to 14.
CN202410337352.XA 2024-03-22 2024-03-22 Virtual character control method and device, electronic equipment and storage medium Pending CN118059465A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410337352.XA CN118059465A (en) 2024-03-22 2024-03-22 Virtual character control method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN118059465A true CN118059465A (en) 2024-05-24

Family

ID=91105652

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410337352.XA Pending CN118059465A (en) 2024-03-22 2024-03-22 Virtual character control method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN118059465A (en)

Similar Documents

Publication Publication Date Title
CN112691377B (en) Control method and device of virtual role, electronic equipment and storage medium
KR102397507B1 (en) Automated player control takeover in a video game
US7843455B2 (en) Interactive animation
CN110548288B (en) Virtual object hit prompting method and device, terminal and storage medium
JP7451563B2 (en) Virtual character control method, computer equipment, computer program, and virtual character control device
CN113398601B (en) Information transmission method, information transmission device, computer-readable medium, and apparatus
CN114377396A (en) Game data processing method and device, electronic equipment and storage medium
CN112691366B (en) Virtual prop display method, device, equipment and medium
KR20220157938A (en) Method and apparatus, terminal and medium for transmitting messages in a multiplayer online combat program
CN111589121A (en) Information display method and device, storage medium and electronic device
JP2005095403A (en) Method for game processing, game apparatus, game program, and storage medium for the game program
CN114510184B (en) Target locking method and device, electronic equipment and readable storage medium
WO2024098628A1 (en) Game interaction method and apparatus, terminal device, and computer-readable storage medium
JP5932735B2 (en) GAME DEVICE, GAME SYSTEM, AND PROGRAM
CN118059465A (en) Virtual character control method and device, electronic equipment and storage medium
CN116920374A (en) Virtual object display method and device, storage medium and electronic equipment
KR20230042116A (en) Virtual object control method and apparatus, electronic device, storage medium and computer program product
CN114356097A (en) Method, apparatus, device, medium, and program product for processing vibration feedback of virtual scene
WO2023231557A1 (en) Interaction method for virtual objects, apparatus for virtual objects, and device, storage medium and program product
WO2024027292A1 (en) Interaction method and apparatus in virtual scene, electronic device, computer-readable storage medium, and computer program product
WO2024125163A1 (en) Character interaction method and apparatus based on virtual world, and device and medium
WO2024041142A1 (en) Interaction method and apparatus based on pickupable item, electronic device, computer readable medium, and computer program product
CN117482517A (en) Information processing method and device in game, electronic equipment and readable storage medium
CN117883774A (en) Control method and device for virtual carrier
CN115970271A (en) Game action execution method, game action execution device, storage medium, and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination