CN115317908A - Skill display method and device, storage medium and computer equipment - Google Patents

Skill display method and device, storage medium and computer equipment

Info

Publication number
CN115317908A
CN115317908A (application CN202210975578.3A)
Authority
CN
China
Prior art keywords
attack
virtual character
animation
controlled
user interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210975578.3A
Other languages
Chinese (zh)
Inventor
胡佳胜
刘勇成
胡志鹏
袁思思
程龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202210975578.3A
Publication of CN115317908A
Legal status: Pending

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50: Controlling the output signals based on the game progress
    • A63F 13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/55: Controlling game characters or game objects based on the game progress
    • A63F 13/57: Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/60: Methods for processing data by generating or executing the game program
    • A63F 2300/66: Methods for processing data by generating or executing the game program for rendering three dimensional images

Abstract

Embodiments of the present application disclose a skill display method and device, a storage medium, and computer equipment. The method comprises the following steps: providing a graphical user interface whose content includes at least part of a game scene, a controlled virtual character in the game scene, and at least one attack control, where the attack control responds to a first operation by controlling the controlled virtual character to release a first attack behavior; in response to a second operation on the attack control, generating in the graphical user interface, based on the attack object specified by the second operation, a first virtual character for simulating the controlled virtual character and a second virtual character for simulating the attack object, where the first operation is different from the second operation and the attack object is a virtual object that attacks according to a preset move rule; and displaying a first animation of the first virtual character releasing the first attack behavior toward the second virtual character, and a second animation of the second virtual character counterattacking according to the preset move rule, so as to simulate the controlled virtual character releasing the first attack behavior at the attack object through the attack control.

Description

Skill display method and device, storage medium and computer equipment
Technical Field
The present application relates to the technical field of games, and in particular to a skill display method and device, a storage medium, and computer equipment.
Background
In some game scenarios, a player-controlled character may need to release skills to fight an enemy, and the enemy may release corresponding skills to counterattack. When the player knows neither the skill release mechanism of the controlled character nor that of the enemy, the player-controlled character is often killed by the enemy.
Disclosure of Invention
Embodiments of the present application provide a skill display method and device, a storage medium, and electronic equipment. The skill display method can display the skills of characters in a game.
In a first aspect, an embodiment of the present application provides a skill display method. A terminal device provides a graphical user interface whose displayed content at least includes part of a game scene, a controlled virtual character located in the game scene, and at least one attack control, the attack control being configured to respond to a first operation by controlling the controlled virtual character to release a first attack behavior in the game scene. The method comprises the following steps:
in response to a second operation on the attack control, generating in the graphical user interface, based on the attack object specified by the second operation, a first virtual character for simulating the controlled virtual character and a second virtual character for simulating the attack object; the first operation is different from the second operation, and the attack object is a virtual object that attacks according to a preset move rule;
and displaying on the graphical user interface a first animation of the first virtual character releasing the first attack behavior toward the second virtual character, and a second animation of the second virtual character counterattacking according to the preset move rule, so as to simulate the controlled virtual character releasing the first attack behavior at the attack object through the attack control.
In a second aspect, an embodiment of the present application provides a skill display apparatus. A terminal device provides a graphical user interface whose displayed content at least includes part of a game scene, a controlled virtual character located in the game scene, and at least one attack control, the attack control being configured to respond to a first operation by controlling the controlled virtual character to release a first attack behavior in the game scene. The apparatus comprises:
a generating module, configured to respond to a second operation on the attack control by generating, in the graphical user interface and based on the attack object specified by the second operation, a first virtual character for simulating the controlled virtual character and a second virtual character for simulating the attack object; the first operation is different from the second operation, and the attack object is a virtual object that attacks according to a preset move rule;
and a control module, configured to display on the graphical user interface a first animation of the first virtual character releasing the first attack behavior toward the second virtual character, and a second animation of the second virtual character counterattacking according to the preset move rule, so as to simulate the controlled virtual character releasing the first attack behavior at the attack object through the attack control.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium storing a plurality of instructions suitable for being loaded by a processor to perform the steps of the skill display method provided in the embodiments of the present application.
In a fourth aspect, an embodiment of the present application provides a computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the skill display method provided in the embodiments of the present application.
In the embodiments of the present application, a terminal device provides a graphical user interface whose displayed content at least includes part of a game scene, a controlled virtual character located in the game scene, and at least one attack control, the attack control being configured to respond to a first operation by controlling the controlled virtual character to release a first attack behavior in the game scene. The terminal device can also respond to a second operation on the attack control by generating, in the graphical user interface and based on the attack object specified by the second operation, a first virtual character for simulating the controlled virtual character and a second virtual character for simulating the attack object; the first operation is different from the second operation, and the attack object is a virtual object that attacks according to a preset move rule. Finally, the terminal device displays on the graphical user interface a first animation of the first virtual character releasing the first attack behavior toward the second virtual character, and a second animation of the second virtual character counterattacking according to the preset move rule, so as to simulate the controlled virtual character releasing the first attack behavior at the attack object through the attack control. The skills of different characters in the game are thereby displayed, helping the user understand the skill release mechanisms of different characters.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a system diagram of a skill display apparatus provided in an embodiment of the present application.
Fig. 2 is a first flowchart of a skill display method provided in an embodiment of the present application.
Fig. 3 is a second flowchart of the skill display method provided in an embodiment of the present application.
Fig. 4 is a schematic diagram of a first scenario of skill display provided in an embodiment of the present application.
Fig. 5 is a schematic diagram of a second scenario of skill display provided in an embodiment of the present application.
Fig. 6 is a schematic diagram of a third scenario of skill display provided in an embodiment of the present application.
Fig. 7 is a schematic diagram of a fourth scenario of skill display provided in an embodiment of the present application.
Fig. 8 is a schematic structural diagram of a skill display apparatus provided in an embodiment of the present application.
Fig. 9 is a schematic structural diagram of a computer device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from the given embodiments without creative effort shall fall within the protection scope of the present application.
Embodiments of the present application provide a skill display method and device, a storage medium, and electronic equipment. Specifically, the skill display method in the embodiments of the present application may be executed by a computer device, which may be a terminal or a server. The terminal can be a device such as a smartphone, a tablet computer, a notebook computer, a touch-screen device, a game machine, a personal computer (PC), or a personal digital assistant (PDA), and may also run a client, which can be a game application client, a browser client carrying a game program, an instant messaging client, or the like. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms.
For example, when the skill display method runs on the terminal, the terminal device stores a game application program and presents a virtual scene in a game screen. The terminal device interacts with the user through a graphical user interface, for example by downloading, installing, and running the game application program. The terminal device may provide the graphical user interface to the user in a variety of ways; for example, the interface may be rendered on a display screen of the terminal device or presented by holographic projection. For example, the terminal device may include a touch display screen for presenting the graphical user interface, including the game screen, and for receiving operation instructions generated by the user acting on the interface, and a processor for running the game, generating the graphical user interface, responding to the operation instructions, and controlling the display of the graphical user interface on the touch display screen.
For example, when the skill display method runs on a server, the game can be a cloud game. Cloud gaming refers to a game mode based on cloud computing. In this mode, the body that runs the game application is separated from the body that presents the game screen: the storage and running of the skill display method are completed on a cloud game server, while the game screen is presented at a cloud game client. The cloud game client mainly receives and sends game data and presents the game screen; it may be a display device with a data-transmission function near the user side, such as a mobile terminal, a television, a computer, a palmtop computer, or a personal digital assistant, while the terminal device that processes the game data is the cloud game server. When a game is played, the user operates the cloud game client to send an operation instruction to the cloud game server; the cloud game server runs the game according to the instruction, encodes and compresses data such as the game screen, and returns the data to the cloud game client through the network; finally, the cloud game client decodes the data and outputs the game screen.
Referring to fig. 1, fig. 1 is a schematic system diagram of a skill display apparatus provided in an embodiment of the present application. The system may include at least one terminal 1000, at least one server 2000, at least one database 3000, and a network 4000. A terminal 1000 held by a user can connect to servers of different games through the network 4000. Terminal 1000 can be any device having computing hardware capable of supporting and executing a software product corresponding to a game. In addition, terminal 1000 can have one or more multi-touch-sensitive screens for sensing and obtaining user input through touch or slide operations performed at multiple points on the screens, and can also obtain control information entered by the user through a keyboard or joystick. When the system includes a plurality of terminals 1000, a plurality of servers 2000, and a plurality of networks 4000, different terminals 1000 may be connected to each other through different networks 4000 and different servers 2000. The network 4000 may be a wireless or wired network, such as a wireless local area network (WLAN), a local area network (LAN), a cellular network, or a 2G, 3G, 4G, or 5G network. Different terminals 1000 may also connect to other terminals or to a server using their own Bluetooth or hotspot networks; for example, a plurality of users may go online through different terminals 1000 and be connected and synchronized through a suitable network to support a multiplayer game. The system may further include a plurality of databases 3000 coupled to different servers 2000; when different users play a multiplayer game online, information related to the game environment can be continuously stored in the databases 3000.
The embodiments of the present application provide a skill display method, which can be executed by a terminal, by a server, or by the terminal and the server together. The terminal includes a touch display screen and a processor; the touch display screen presents a graphical user interface and receives operation instructions generated by the user acting on it. When the user operates the graphical user interface through the touch display screen, the interface can control local content of the terminal in response to the received operation instructions, and can also control content of the opposite-end server. For example, the operation instructions generated by the user include an instruction to start a game application; the processor is configured to start the game application after receiving that instruction, and further to render and draw the game's graphical user interface on the touch display screen. The touch display screen is a multi-touch-sensitive screen capable of sensing touch or slide operations performed simultaneously at a plurality of points on the screen. When the user performs a touch operation on the graphical user interface with a finger and the interface detects it, different virtual objects in the game's graphical user interface are controlled to perform actions corresponding to the touch operation. The game may be any of a casual game, an action game, a role-playing game, a strategy game, a sports game, an educational game, and the like, and may include a virtual scene drawn on the graphical user interface.
Further, the virtual scene of the game may include one or more virtual objects, such as virtual characters, controlled by the user (or player). It may also include one or more obstacles, such as railings, ravines, and walls, that limit the movement of the virtual objects, for example restricting one or more objects to a particular area within the scene. Optionally, the virtual scene also includes one or more elements, such as skills, points, character health, and energy, that assist the player, provide virtual services, or increase points related to the player's performance. The graphical user interface may also present one or more indicators providing instructional information to the player. For example, a game may include a player-controlled virtual object and one or more other virtual objects (such as enemy characters). In one embodiment, the other virtual objects are controlled by other players of the game; alternatively, they may be computer controlled, such as robots using artificial intelligence (AI) algorithms, to implement a human-machine fight mode. The virtual objects possess various skills or capabilities that the game player uses to achieve goals, for example one or more weapons, props, or tools that can be used to eliminate other objects from the game. Such skills or capabilities can be activated by the player using one of a plurality of preset touch operations on the terminal's touch display screen, and the processor can present the corresponding game screen in response to the operation instruction generated by the touch operation.
In the related art, when a player plays a new game, the character controlled by the player needs to release skills to fight an enemy, and the enemy also releases corresponding skills to counterattack. When the player knows neither the skill release mechanism of the controlled character nor that of the enemy, the player-controlled character is often killed by the enemy, resulting in a poor game experience.
For example, some games provide computer virtual characters (e.g., BOSS characters). When a player-controlled virtual character fights such a computer virtual character, the computer virtual character may attack the player's virtual character according to preset move rules; for example, it may select a corresponding move from the preset move rules based on the movement, moves, and the like of the player's virtual character.
When the player does not know the move rules of the computer virtual character, the player's virtual character is easily killed by it; the resulting high death count of the player's virtual character reduces the game experience.
To enable a player to learn the skill release mechanisms of different characters in a game in time, embodiments of the present application provide a skill display method and device, a storage medium, and an electronic device. The skill display method can display the skills of different characters in the game and help users understand the skill release mechanisms of those characters.
Referring to fig. 2, fig. 2 is a first flowchart of a skill display method provided in an embodiment of the present application.
In the embodiments of the present application, the skill display method can be applied to a computer device, such as various terminal devices. The terminal device provides a graphical user interface whose displayed content at least includes part of a game scene, a controlled virtual character located in the game scene, and at least one attack control, the attack control being configured to respond to a first operation by controlling the controlled virtual character to release a first attack behavior in the game scene.
For example, some game scenes include an attack object, which may be a computer virtual character operated by a preset program. A player can fight the attack object by operating the controlled virtual character. During the fight, the player can click or touch the attack control to trigger a first operation, and the terminal device, in response to the first operation, controls the controlled virtual character to perform a first attack behavior on the attack object, such as releasing a skill or performing a common attack.
That is, the player can perform the first operation on the attack control at the player's own discretion, so as to control the controlled virtual character to attack the attack object directly.
In some embodiments, when the player does not know the preset move rules of certain attack objects, an attempt to manipulate the controlled virtual character to attack such an attack object may draw a counterattack that directly kills the controlled virtual character, so that the player has to fight the attack object all over again.
When the player encounters an unfamiliar attack object, the player can perform a second operation on the attack control to learn the move rules of the attack object and how the controlled virtual character can attack it.
The method specifically comprises the following steps:
110. In response to the second operation on the attack control, generate, in the graphical user interface and based on the attack object specified by the second operation, a first virtual character for simulating the controlled virtual character and a second virtual character for simulating the attack object; the first operation is different from the second operation, and the attack object is a virtual object that attacks according to a preset move rule.
The player can trigger the second operation by, for example, long-pressing or dragging the attack control. The second operation is different from the first operation: the first operation can be understood as the player directly controlling the controlled virtual character to perform the first attack behavior on the attack object, and can be triggered, for example, by clicking the attack control.
In the game scene, the computer device may generate, in the graphical user interface, a first virtual character for simulating the controlled virtual character and a second virtual character for simulating the attack object based on the attack object specified by the second operation.
For example, the player may trigger a second operation and designate an attack object in the game scene by long-pressing the attack control, and the computer device generates a first virtual character simulating the controlled virtual character and a second virtual character simulating the attack object in the graphical user interface according to the second operation.
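The generation step described above can be sketched as follows. This is only an illustrative sketch: the data model, names, and fields are assumptions for clarity, not taken from the patent.

```python
from dataclasses import dataclass

# Hypothetical data model for the demo characters; the patent only requires
# that each simulated character corresponds to a source character.
@dataclass
class SimulatedCharacter:
    source_id: str        # identifier of the character being mimicked
    is_demo: bool = True  # marks the character as a non-interactive copy

def spawn_demo_pair(controlled_id: str, attack_object_id: str):
    """Create the pair shown in the skill demo: a first virtual character
    simulating the controlled character and a second virtual character
    simulating the attack object specified by the second operation."""
    first = SimulatedCharacter(source_id=controlled_id)
    second = SimulatedCharacter(source_id=attack_object_id)
    return first, second
```

In this sketch the demo pair is independent of the original characters, matching the later statement that the first and second virtual characters are generated separately from the controlled character and the attack object.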
In some embodiments, the first operation and the second operation on the attack control can be triggered in various ways; the manners described above are merely examples and do not limit the present application.
Specifically, the computer device may obtain a pressing duration and/or a pressing pressure value of the attack control. If the pressing duration is greater than a first preset duration and/or the pressing pressure value is greater than a first preset pressure value, the second operation is triggered; if the pressing duration is less than or equal to the first preset duration and/or the pressing pressure value is less than or equal to the first preset pressure value, the first operation is triggered.
For example, the computer device may obtain the pressing duration when the attack control is pressed, and trigger the second operation if the pressing duration is greater than the first preset duration, or the first operation if it is less than or equal to the first preset duration.
For example, the computer device may obtain the pressing pressure value when the attack control is pressed, and trigger the second operation if the pressing pressure value is greater than the first preset pressure value, or the first operation if it is less than or equal to the first preset pressure value.
For another example, the computer device may obtain both the pressing duration and the pressing pressure value when the attack control is pressed. If the pressing duration is greater than the first preset duration and the pressing pressure value is greater than the first preset pressure value, the second operation is triggered; if the pressing duration is less than or equal to the first preset duration and the pressing pressure value is less than or equal to the first preset pressure value, the first operation is triggered.
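The combined duration-and-pressure trigger above can be sketched as a small classifier. The threshold values and names are illustrative assumptions; the patent does not specify concrete numbers.

```python
# Illustrative thresholds for the "first preset duration" and
# "first preset pressure value"; actual values are game-specific.
FIRST_PRESET_DURATION = 0.5   # seconds
FIRST_PRESET_PRESSURE = 0.6   # normalized pressure reading

def classify_press(duration: float, pressure: float) -> str:
    """Return 'second' (skill demo) when both readings exceed their
    preset thresholds, otherwise 'first' (direct attack)."""
    if duration > FIRST_PRESET_DURATION and pressure > FIRST_PRESET_PRESSURE:
        return "second"
    return "first"
```

A long, firm press would thus start the demo, while a quick tap releases the attack directly.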
In some embodiments, the computer device may also respond to the second operation in other manners. For example, when the user touches the attack control, the computer device may capture and analyze the user's voice, and respond with the second operation when the analysis result contains text corresponding to the second operation. For instance, if the user says "skill display" while touching the attack control, the second operation may be triggered according to that voice input.
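The voice-based trigger can be sketched as a simple keyword match on the recognized transcript. The keyword set and function name are illustrative assumptions; a real implementation would sit behind a speech-recognition step not shown here.

```python
# Illustrative trigger phrases mapped to the second operation.
DEMO_KEYWORDS = {"skill display", "skill demo"}

def is_voice_demo_trigger(transcript: str) -> bool:
    """Return True when the recognized speech contains a phrase
    associated with the second operation (the skill demo)."""
    text = transcript.lower()
    return any(keyword in text for keyword in DEMO_KEYWORDS)
```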
It should be noted that the above are only examples of some ways of triggering the first operation or the second operation, and should not be considered as limiting the present application.
After responding to the second operation, the computer device may generate, in the current game interface, a first virtual character independent of the controlled virtual character and a second virtual character independent of the attack object.
In some embodiments, the computer device may determine first display parameters corresponding to the first virtual character and present the first virtual character based on them, the first display parameters being different from the display parameters of the controlled virtual character; it may likewise determine second display parameters corresponding to the second virtual character and present the second virtual character based on them, the second display parameters being different from the display parameters of the attack object.
The first virtual character and the second virtual character are two different characters and can be visually distinguished from each other. Because the first display parameters differ from the display parameters of the controlled virtual character, the first virtual character is visually distinguishable from the controlled virtual character; because the second display parameters differ from the display parameters of the attack object, the second virtual character is visually distinguishable from the attack object.
The display parameters corresponding to the first virtual character, the second virtual character, the controlled virtual character, and the attack object can include parameters such as transparency, color, contrast, and brightness.
In some implementations, during generation of the first virtual character and the second virtual character by the computer device, the computer device can determine first color shade information for the controlled virtual character, determine a first transparency of the first virtual character based on the first color shade information, and generate the first virtual character based on the first transparency.
The computer device may determine second color shade information of the attack object, determine a second transparency of the second virtual character according to the second color shade information, and generate the second virtual character according to the second transparency.
For example, a quantization range may be set for different shades of color of the controlled avatar, such as 0-100. A first transparency range, such as 0 to 100, may also be set. Each quantization value in the quantization range has a mapping value within a first transparency range. For example, when the quantized value corresponding to the shade of the color of the controlled virtual character is determined, the corresponding mapping value may be determined within a first transparency range, the mapping value may be determined as a first transparency corresponding to the first virtual character, and then the first virtual character may be generated according to the first transparency.
For example, a quantization range may be set for different shades of the attack object, for example, the quantization range is 0 to 100. A second transparency range, such as 0 to 50, may also be provided. Each quantization value in the quantization range of the attack object has a mapping value within the second transparency range. For example, when the quantized value corresponding to the shade of the color of the attack object is determined, the corresponding mapping value may be determined within the second transparency range, the mapping value may be determined as the second transparency corresponding to the second virtual character, and then the second virtual character may be generated according to the second transparency.
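The two quantization-to-transparency mappings described above can be sketched as one linear mapping function; the function name and the choice of linear interpolation are assumptions (the disclosure only requires that each quantized value have a mapped value in the transparency range):

```python
def map_shade_to_transparency(shade: float, shade_lo: float, shade_hi: float,
                              alpha_lo: float, alpha_hi: float) -> float:
    """Linearly map a quantized color-shade value into a transparency range."""
    shade = max(shade_lo, min(shade_hi, shade))        # clamp to the quantization range
    ratio = (shade - shade_lo) / (shade_hi - shade_lo)
    return alpha_lo + ratio * (alpha_hi - alpha_lo)

# Controlled virtual character: shade 0-100 -> first transparency 0-100.
first_transparency = map_shade_to_transparency(80, 0, 100, 0, 100)
# Attack object: shade 0-100 -> second transparency 0-50.
second_transparency = map_shade_to_transparency(80, 0, 100, 0, 50)
```

With the same quantized shade value of 80, the first virtual character gets transparency 80 while the second gets 40, reflecting the narrower second transparency range.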
Wherein the first virtual character is lighter in color, or semi-transparent, relative to the controlled virtual character, so that the first virtual character remains observable to the human eye. The second virtual character may be lighter in color, or semi-transparent, relative to the attack object, and the second virtual character can likewise be observed by the human eye.
120. Control to display, on the graphical user interface, a first animation of the first virtual character releasing the first attack behavior to the second virtual character, and a second animation of the second virtual character performing a counterattack based on a preset move rule, so as to simulate the controlled virtual character releasing the first attack behavior to the attack object through the attack control.
In some embodiments, prior to displaying the first virtual character and the second virtual character in the graphical user interface, the computer device may also determine a blank area in the graphical user interface and then generate a display frame within the blank area.
The computer device then generates the first virtual character and the second virtual character within the display frame. The blank area does not occlude the attack object, the controlled virtual character, or any control of the graphical user interface, so the first virtual character and the second virtual character in the display frame cause no visual interference with the player's game.
The first animation of the first virtual character releasing the first attack behavior to the second virtual character is displayed in the display frame, along with the second animation of the second virtual character counterattacking the first virtual character based on the preset move rule.
The computer device can update the position of the display frame in real time according to the movement of the controlled virtual character and the attack object, so that the display frame is prevented from occluding them. For example, when both the controlled virtual character and the attack object move to the left side of the display area, the display frame is generated on the right side of the display area, where the first animation of the first virtual character releasing the first attack behavior to the second virtual character and the second animation of the second virtual character counterattacking the first virtual character based on the preset move rule are displayed.
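The repositioning heuristic in the example above can be sketched as follows, assuming a one-dimensional horizontal layout; the function name and the left/right rule are illustrative assumptions:

```python
def display_frame_x(screen_width: int, frame_width: int,
                    character_xs: list) -> int:
    """Place the display frame on the side opposite the characters so it
    does not occlude the controlled virtual character or the attack object.
    character_xs holds the horizontal positions of both characters."""
    center = sum(character_xs) / len(character_xs)
    if center < screen_width / 2:
        return screen_width - frame_width   # characters on the left -> frame on the right
    return 0                                # characters on the right -> frame on the left
```

Calling this every frame with the current character positions yields the real-time update described above; a full implementation would also consider vertical position and other UI controls.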
In some embodiments, the first virtual character and the second virtual character may not be displayed in the display frame, but may be directly displayed in the entire display area.
For example, the first virtual character and the second virtual character both have a certain transparency so that they do not occlude the controlled virtual character and the attack object, and the target skill released by the first virtual character and the counter skill released by the second virtual character likewise have a certain transparency for the same reason.
That is to say, in the whole game interface, the controlled virtual character, the attack object, the first virtual character and the second virtual character can be displayed at the same time; the first virtual character and the second virtual character do not visually occlude the controlled virtual character and the attack object, and the user's normal control of the controlled virtual character is not affected.
When the first virtual character releases the target skill, the user can learn the wind-up action and the recovery action of the target skill, the respective durations consumed by the wind-up and recovery actions, and the attack range, attack strength, and the like of the target skill.
Similarly, when the second virtual character releases the counter skill, the user can learn the wind-up action and the recovery action of the counter skill, the respective durations consumed by the wind-up and recovery actions, and the attack range, attack strength, and the like of the counter skill.
The target skill is displayed through the first virtual character, and the counter skill of the attack object is displayed through the second virtual character, so that the player can learn the move mechanism of the attack object, reducing the number of deaths of the player-controlled virtual character and improving the game experience.
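The properties a player can learn from the two animations (wind-up and recovery durations, attack range, attack strength) can be modeled as a small data structure; all field names and values below are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class SkillTiming:
    windup_ms: int      # duration of the wind-up (pre-cast) action
    recovery_ms: int    # duration of the recovery (post-cast) action
    attack_range: float # reach of the skill, in assumed scene units
    attack_power: int   # attack strength

    def total_ms(self) -> int:
        """Total time the character is committed to the skill animation."""
        return self.windup_ms + self.recovery_ms

# Assumed example values for the target skill and the counter skill.
target_skill = SkillTiming(windup_ms=400, recovery_ms=250, attack_range=3.5, attack_power=120)
counter_skill = SkillTiming(windup_ms=300, recovery_ms=150, attack_range=2.0, attack_power=90)
```

Comparing `total_ms()` of the two skills is one way a player could judge whether the target skill can land before the attack object's counterattack.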
In some implementations, the computer device may determine, in response to an end of the second operation, whether the first attack behavior takes effect based on the first animation and the second animation. If the first attack behavior takes effect, the controlled virtual character is controlled to release the first attack behavior to the attack object. If the first attack behavior does not take effect, the controlled virtual character is controlled not to release the first attack behavior to the attack object.
It should be noted that, in the process of displaying the first animation and the second animation by the computer device, the controlled virtual character and the attack object in the actual game scene may not be fighting, for example, the controlled virtual character is outside the attack range of the attack object. At the moment, the player can trigger a second operation by manipulating the attack control, and the computer device shows the first animation and the second animation according to the second operation.
For example, when the player long-presses the attack control (e.g., a control for a skill of the controlled virtual character) to trigger the second operation, the computer device displays the first animation and the second animation in the graphical user interface according to the second operation.
After the player finishes watching the first animation and the second animation and stops long-pressing the attack control, the computer device judges, according to the first animation and the second animation, whether the first attack behavior of the controlled virtual character in the actual game scene takes effect.
Specifically, in the first animation and the second animation, the computer device may determine a pre-attack preparation action of the first virtual character before it releases the first attack behavior to the second virtual character. If the first virtual character is not counterattacked by the second virtual character during execution of the pre-attack preparation action, the first attack behavior is determined to take effect. If the first virtual character is counterattacked by the second virtual character during execution of the pre-attack preparation action, the first attack behavior is determined not to take effect.
That is, in the first animation and the second animation, if the first virtual character is counterattacked by the second virtual character during the wind-up action of the first attack behavior, it means that in the actual fighting scene the controlled virtual character would be counterattacked by the attack object when performing the first attack behavior. The first attack behavior is then deemed not to take effect, and when the player stops operating the attack control, the computer device does not control the controlled virtual character to perform the first attack behavior on the attack object.
Conversely, if the first virtual character is not counterattacked by the second virtual character during the wind-up action of the first attack behavior, the controlled virtual character will not be counterattacked by the attack object during the wind-up when performing the first attack behavior in the actual fighting scene. The first attack behavior is then deemed to take effect, and when the player stops operating the attack control, the computer device controls the controlled virtual character to perform the first attack behavior on the attack object.
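The effectiveness rule above reduces to a single check: does any counterattack land inside the wind-up window? A minimal sketch, with assumed names and a time-based representation of the animations:

```python
def first_attack_effective(windup_start: float, windup_end: float,
                           counter_hit_times: list) -> bool:
    """The first attack behavior takes effect only if no counterattack from
    the second virtual character lands during the wind-up window of the
    first virtual character. Times are seconds from the animation start."""
    return not any(windup_start <= t < windup_end for t in counter_hit_times)
```

For instance, with a 0.4 s wind-up, a counterattack landing at 0.2 s invalidates the attack, while one landing at 0.6 s (after the wind-up) does not.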
In some embodiments, after the player performs the second operation on the attack control and views the first animation and the second animation, the player may also cancel the second operation, whereupon the computer device deems the first attack behavior not to be effective in accordance with the cancellation of the second operation.
The player may then perform the first operation on the attack control according to the player's own judgment, and the computer device controls the controlled virtual character to perform the first attack behavior on the attack object according to the first operation.
That is, after finishing viewing the first animation and the second animation, the player can roughly learn the move rule of the attack object, and can control the controlled virtual character to engage the attack object according to the player's own judgment, thereby executing the first attack behavior.
In some embodiments, the controlled virtual character and the attack object may be fighting during presentation of the first animation and the second animation, and the computer device may cancel the presented first animation and second animation in response to the controlled virtual character being attacked.
In some embodiments, during presentation of the first animation and the second animation, the player may directly end the second operation, such as canceling the long press attack control, at which point the computer device may cancel the presented first animation and second animation at the end of the second operation.
In some embodiments, the computer device may also be responsive to player actions with respect to other controls during presentation of the first animation and the second animation, such as where the player may control the controlled virtual character to move, or release some other skill beyond the target skill while moving.
In the embodiment of the application, a terminal device provides a graphical user interface whose displayed content at least comprises a part of a game scene, a controlled virtual character located in the game scene, and at least one attack control, the attack control being configured to respond to a first operation and control the controlled virtual character to release a first attack behavior in the game scene. The terminal device can also respond to a second operation on the attack control and, based on the attack object specified by the second operation, generate in the graphical user interface a first virtual character for simulating the controlled virtual character and a second virtual character for simulating the attack object; the first operation is different from the second operation, and the attack object is a virtual object that attacks according to a preset move rule. Finally, the terminal device controls to display, on the graphical user interface, a first animation of the first virtual character releasing the first attack behavior to the second virtual character and a second animation of the second virtual character performing a counterattack based on the preset move rule, so as to simulate the controlled virtual character releasing the first attack behavior to the attack object through the attack control. The skills of different characters in the game are thereby displayed, helping the user understand the skill release mechanisms of different characters.
For a more detailed understanding of the skill display method provided in the embodiment of the present application, please refer to fig. 3, and fig. 3 is a second flowchart of the skill display method provided in the embodiment of the present application. The skill display method can comprise the following steps:
201. in response to a second operation on the attack control, a first virtual character simulating the controlled virtual character and a second virtual character simulating the attack object are generated in the graphical user interface based on the attack object specified by the second operation.
In the embodiment of the application, the skill display method can be applied to a computer device, such as various terminal devices, and a graphical user interface is provided through the terminal devices, content displayed by the graphical user interface at least includes a part of a game scene, a controlled virtual character located in the game scene, and at least one attack control, and the attack control is configured to respond to a first operation and control the controlled virtual character to release a first attack behavior in the game scene.
For example, in some game scenes including an attack object, the attack object may be a computer virtual character operated by a preset program, a player may fight with the attack object by operating a controlled virtual character, in the fighting process, the player may click, touch, and the like on an attack control to trigger a first operation, and a terminal device responds to the first operation to control the controlled virtual character to perform a first attack behavior on the attack object, such as releasing skill or common attack, and the like.
That is, the player can perform the first operation on the attack control according to the decision of the player, so as to control the controlled virtual character to directly attack the attack object.
It should be noted that the attack object is a preset virtual object having a preset move rule, and the attack object may execute corresponding moves according to the movement pattern, attack pattern, and the like of the controlled virtual character in the game.
In some embodiments, when the player is not familiar with the preset move rules of some attack objects, if the player hastily manipulates the controlled virtual character to attack such an attack object, the attack object counterattacks the controlled virtual character, which may be directly killed, forcing the player to fight the attack object again.
When the player encounters an attack object that is not well known, the player can perform the second operation on the attack control so as to learn the move rules of the attack object and the way the controlled virtual character can attack it.
The player can long-press, drag, or the like on the attack control to trigger the second operation. The second operation is different from the first operation: the first operation can be understood as the player directly controlling the controlled virtual character to perform the first attack behavior on the attack object, and may be triggered, for example, by the player clicking the attack control.
In the game scene, the computer device may generate, in the graphical user interface, a first virtual character for simulating the controlled virtual character and a second virtual character for simulating the attack object based on the attack object specified by the second operation.
For example, the player may trigger a second operation and designate an attack object in the game scene by long-pressing the attack control, and the computer device generates a first virtual character simulating the controlled virtual character and a second virtual character simulating the attack object in the graphical user interface according to the second operation.
In some embodiments, the first operation and the second operation on the attack control can be triggered in various manners; the manners described above are only examples and do not limit the present application.
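One of the example trigger manners — tap for the first operation, long press for the second — can be sketched by press duration; the threshold value and names are assumptions:

```python
LONG_PRESS_THRESHOLD_S = 0.5   # assumed threshold, not part of the disclosure

def classify_operation(press_duration_s: float) -> str:
    """A short tap triggers the first operation (direct attack); a long press
    triggers the second operation (generate the simulation characters)."""
    if press_duration_s >= LONG_PRESS_THRESHOLD_S:
        return "second"
    return "first"
```

A real input system would also distinguish drags and other gestures; this sketch covers only the tap/long-press split mentioned in the text.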
After the computer device responds to the second operation, the computer device may generate, in the current game interface, a first virtual character independent of the controlled virtual character and a second virtual character independent of the attack object.
It should be noted that the first virtual character and the second virtual character are two different characters, and the first virtual character and the second virtual character can be visually distinguished. The first display parameters of the first avatar are different from the display parameters of the controlled avatar, and the first avatar is visually distinguishable from the controlled avatar. The second display parameters of the second virtual character are different from the display parameters of the attack object, and the second virtual character can be visually distinguished from the attack object.
The display parameters respectively corresponding to the first virtual character, the second virtual character, the controlled virtual character and the attack object may include various parameters such as transparency, color, contrast, brightness and the like.
In some implementations, during generation of the first virtual character and the second virtual character by the computer device, the computer device can determine first color shade information for the controlled virtual character, determine a first transparency of the first virtual character based on the first color shade information, and generate the first virtual character based on the first transparency.
The computer device may determine second color shade information of the attack object, determine a second transparency of the second virtual character according to the second color shade information, and generate the second virtual character according to the second transparency.
For example, a quantization range may be set for different shades of color of the controlled avatar, such as 0-100. A first transparency range, such as 0 to 100, may also be set. Each quantization value in the quantization range has a mapping value within a first transparency range. For example, when the quantized value corresponding to the shade of the color of the controlled virtual character is determined, the corresponding mapping value may be determined within a first transparency range, the mapping value may be determined as a first transparency corresponding to the first virtual character, and then the first virtual character may be generated according to the first transparency.
For example, a quantization range may be set for different shades of the attack object, for example, the quantization range is 0 to 100. A second transparency range, such as 0 to 50, may also be provided. Each quantized value in the quantized range of the attack object has a mapping value in the second transparency range. For example, when the quantized value corresponding to the shade of the color of the attack object is determined, the corresponding mapping value may be determined within the second transparency range, the mapping value may be determined as the second transparency corresponding to the second virtual character, and then the second virtual character may be generated according to the second transparency.
Wherein the first virtual character is lighter in color, or semi-transparent, relative to the controlled virtual character, so that the first virtual character remains observable to the human eye. The second virtual character is lighter in color, or semi-transparent, relative to the attack object, and the second virtual character can likewise be observed by the human eye.
202. A display frame is provided in a blank area of the graphical user interface.
In some embodiments, prior to displaying the first virtual character and the second virtual character in the graphical user interface, the computer device may also determine a blank area in the graphical user interface and then generate a display frame within the blank area.
The computer device then generates the first virtual character and the second virtual character within the display frame. The blank area does not occlude the attack object, the controlled virtual character, or any control of the graphical user interface, so the first virtual character and the second virtual character in the display frame cause no visual interference with the player's game.
The computer device can update the position of the display frame in real time according to the movement of the controlled virtual character and the attack object, so that the display frame is prevented from occluding them. For example, when both the controlled virtual character and the attack object move to the left side of the display area, the display frame is generated on the right side of the display area.
Referring to fig. 4, fig. 4 is a schematic view of a first scenario of a skill demonstration method according to an embodiment of the present application.
The controlled virtual role is A1, the attack object is B1, the first virtual role is A2, the second virtual role is B2, and the display frame is S1.
As can be seen from fig. 4, the first virtual character A2 and the second virtual character B2 are displayed in the corresponding display frame S1 in the graphical user interface, and the display frame does not occlude the controlled virtual character A1 or the attack object B1, so the user's in-game operation is not affected.
203. Control to display, in the display frame, a first animation of the first virtual character releasing the first attack behavior to the second virtual character, and a second animation of the second virtual character performing a counterattack based on a preset move rule.
In some embodiments, while the computer device displays the first animation and the second animation, the controlled virtual character and the attack object in the actual game scene may not be fighting; for example, the controlled virtual character may be outside the attack range of the attack object. At this point, the player can trigger the second operation by manipulating the attack control, and the computer device displays the first animation and the second animation according to the second operation.
As specifically shown in fig. 4, in the display frame S1, the first virtual character may release the target skill at the second virtual character, thereby showing the first animation of releasing the first attack behavior to the second virtual character; the second virtual character releases the counter skill when counterattacking the first virtual character, thereby showing the second animation of counterattacking the first virtual character.
It can be understood that the first animation and the second animation are exhibited through the fighting process of the first virtual character and the second virtual character, so that the player can learn the move mechanism of the attack object corresponding to the second virtual character.
In some embodiments, the first virtual character and the second virtual character may not be displayed in the display frame, but may be directly displayed in the entire display area.
For example, the first virtual character and the second virtual character both have a certain transparency so that they do not occlude the controlled virtual character and the attack object, and the target skill released by the first virtual character and the counter skill released by the second virtual character likewise have a certain transparency for the same reason.
Specifically, referring to fig. 5, fig. 5 is a schematic diagram of a second scenario of the skill demonstration method according to the embodiment of the present application.
The controlled virtual character is A1, the attack object is B1, the first virtual character is A2, and the second virtual character is B2. Since the first virtual character A2 and the second virtual character B2 have a certain visual transparency, the controlled virtual character A1 and the attack object B1 can be seen through them, so no visual interference is caused while the user plays the game.
That is to say, in the whole game interface, the controlled virtual character, the attack object, the first virtual character and the second virtual character can be displayed at the same time; the first virtual character and the second virtual character do not visually occlude the controlled virtual character and the attack object, and the user's normal control of the controlled virtual character is not affected.
It should be noted that, when the first animation of the first virtual character releasing the first attack behavior to the second virtual character is displayed, the user can learn the wind-up action and the recovery action of the target skill of the controlled virtual character, the respective durations consumed by the wind-up and recovery actions, and the attack range, attack strength, and the like of the target skill.
Similarly, when the second animation in which the second virtual character counterattacks based on the preset move rule is displayed, the user can learn the wind-up action and the recovery action of the counter skill of the attack object, the respective durations consumed by the wind-up and recovery actions, and the attack range, attack strength, and the like of the counter skill.
By displaying the first animation and the second animation, the user can learn the move mechanism of the attack object and the way the controlled virtual character can attack it, which helps the user release skills reasonably, reduces the number of deaths of the user-controlled virtual character, and improves the game experience.
In some embodiments, after the first animation and the second animation are presented, the first virtual character, the first animation, the second virtual character, and the second animation may be dismissed from the graphical user interface, so that only the controlled virtual character and the attack object remain displayed.
Referring to fig. 6, fig. 6 is a schematic diagram illustrating a third scenario of the skill displaying method according to the embodiment of the present disclosure.
After the first animation and the second animation are presented, the controlled virtual character A1 and the attack object B1 are left in the graphical user interface. The first virtual character A2 and the second virtual character B2 are not displayed.
In some embodiments, the controlled virtual character and the attack object may be fighting while the first animation and the second animation are presented, and the computer device may cancel the first animation and the second animation presented in response to the controlled virtual character being attacked.
In some embodiments, during presentation of the first animation and the second animation, the player may directly end the second operation, such as canceling the long press attack control, at which point the computer device may cancel the presented first animation and second animation at the end of the second operation.
In some embodiments, the computer device may also be responsive to player actions with respect to other controls during presentation of the first animation and the second animation, such as the player may control the controlled virtual character to move, or release some other skill beyond the target skill while moving.
In some embodiments, during the presentation of the first animation and the second animation, the computer device may determine an available attack period for the controlled virtual character when the attack object makes a move according to the preset move rule, and generate second prompt information in the graphical user interface within that period, the second prompt information being used to prompt the controlled virtual character to release the first attack behavior.
For example, during the showing of the first animation and the second animation, the controlled virtual character and the attack object may be fighting, the user having triggered the display of the two animations through the second operation on the attack control. In the actual game scene, the attack object may be preparing to attack the controlled virtual character; when the attack object makes a move according to the preset move rule, it performs a wind-up action before the attack, and during execution of this wind-up action the computer device can generate the second prompt information in the graphical user interface to remind the user to control the controlled virtual character to release the first attack behavior on the attack object at that moment.
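The prompt window described above is simply the interval of the attack object's wind-up action; a minimal sketch, with assumed names and a time-based representation:

```python
def show_second_prompt(now_s: float, enemy_windup_start_s: float,
                       enemy_windup_end_s: float) -> bool:
    """The second prompt information is shown while the attack object is
    inside its wind-up action, i.e. the window in which the controlled
    virtual character can safely release the first attack behavior."""
    return enemy_windup_start_s <= now_s < enemy_windup_end_s
```

A game loop would evaluate this each frame against the attack object's animation state and show or hide the prompt accordingly.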
204. A pre-attack preparation action of the first virtual character is determined before the first attack behavior is released to the second virtual character.
In some implementations, after presenting the first animation and the second animation, the computer device may determine, based on them, a pre-attack preparation action of the first virtual character before it releases the first attack behavior to the second virtual character. For example, the pre-attack preparation action may be the wind-up action performed when the first virtual character releases the target skill.
205. If the first virtual character is counterattacked by the second virtual character during execution of the pre-attack preparation action, determine that the first attack behavior does not take effect.
If the first virtual character is counterattacked by the second virtual character during execution of the pre-attack wind-up action performed when releasing the target skill, this indicates that, in an actual game scene, were the user to control the controlled virtual character to attack the attack object in this way, the controlled virtual character would be counterattacked by the attack object during the pre-attack wind-up.
At this point, the computer device determines that the first attack behavior released by the controlled virtual character to the attack object does not take effect.
206. Control the controlled virtual character not to release the first attack behavior to the attack object.
After determining that the first attack behavior released by the first virtual character to the second virtual character does not take effect, the computer device controls the controlled virtual character not to release the first attack behavior to the attack object, thereby preventing the controlled virtual character from being counterattacked and damaged by the attack object during the pre-attack wind-up in the actual game scene.
207. If the first virtual character is not counterattacked by the second virtual character during execution of the pre-attack preparation action, determine that the first attack behavior takes effect.
If the first virtual character is not counterattacked by the second virtual character during execution of the pre-attack wind-up action performed when releasing the target skill, this indicates that, in an actual game scene, were the user to control the controlled virtual character to attack the attack object in this way, the controlled virtual character would not be counterattacked during the pre-attack wind-up.
At this point, the computer device determines that the first attack behavior released by the controlled virtual character to the attack object takes effect.
208. Control the controlled virtual character to release the first attack behavior to the attack object.
After determining that the first attack behavior released by the controlled virtual character to the attack object takes effect, the computer device may directly control the controlled virtual character to release the first attack behavior to the attack object, for example by controlling the controlled virtual character to apply the corresponding attack skill to the attack object.
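Steps 204 through 208 reduce to a single decision: the first attack behavior is released only if the simulated second virtual character does not counterattack during the first virtual character's pre-attack preparation window. A minimal sketch under that reading, assuming time-stamped counterattacks (all function and parameter names are hypothetical):

```python
# Illustrative sketch of steps 204-208. Names are hypothetical.
from typing import List

def attack_takes_effect(wind_up_start: float, wind_up_end: float,
                        counterattack_times: List[float]) -> bool:
    """Steps 205/207: the first attack behavior takes effect only if no
    counterattack lands inside the pre-attack preparation (wind-up) window."""
    return not any(wind_up_start <= t < wind_up_end
                   for t in counterattack_times)

def resolve(wind_up_start: float, wind_up_end: float,
            counterattack_times: List[float]) -> str:
    # Steps 206/208: release the attack only when it would take effect.
    if attack_takes_effect(wind_up_start, wind_up_end, counterattack_times):
        return "release first attack behavior"
    return "do not release first attack behavior"
```

For instance, a counterattack at t=0.2 inside a wind-up window of [0.0, 0.4) would suppress the release, while a counterattack at t=0.6 would not.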
In some embodiments, after triggering the second operation through the attack control, a user may actively cancel the second operation, for example by dragging the attack control to a cancel area and then releasing it. The user then controls the controlled virtual character to fight the attack object according to his or her own operations. In this case, releasable-skill tips for the controlled virtual character may be presented in the graphical user interface during the battle.
209. Determine the counterattack action with which the attack object counterattacks the controlled virtual character, and, in response to the counterattack action, generate first prompt information in the graphical user interface, the first prompt information being used to prompt the controlled virtual character to avoid the counterattack action.
In some embodiments, after the controlled virtual character is controlled to release the first attack behavior on the attack object, the attack object counterattacks according to the preset attack rule. At this time, the computer device may, in response to the counterattack action, generate first prompt information in the graphical user interface, the first prompt information being used to prompt the controlled virtual character to avoid the counterattack action.
Referring to fig. 7, fig. 7 is a fourth schematic diagram of the skill display method according to an embodiment of the present application.
The controlled virtual character is A1, the attack object is B1, the first virtual character is A2, and the second virtual character is B2. The information prompt window is S2, in which the first prompt information may be displayed.
After the controlled virtual character is controlled to release the first attack behavior to the attack object, the attack object may counterattack according to the preset attack rule, at which time information reminding the user to control the controlled virtual character to move away and dodge may be displayed in the information prompt window S2.
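Step 209 can be sketched as a simple mapping from the attack object's current action to the first prompt information shown in the information prompt window. The action label and prompt text below are illustrative assumptions, not part of the patent:

```python
# Illustrative sketch of step 209. The action label and prompt text
# are hypothetical.
from typing import Optional

def first_prompt(enemy_action: str) -> Optional[str]:
    """When the attack object's current action is a counterattack, return
    the first prompt information for the information prompt window
    (S2 in fig. 7); otherwise return None."""
    if enemy_action == "counterattack":
        return "Enemy counterattack incoming - move away to dodge!"
    return None
```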
In the embodiment of the application, the computer device, in response to the second operation on the attack control, generates a first virtual character simulating the controlled virtual character and a second virtual character simulating the attack object in the graphical user interface based on the attack object specified by the second operation. A display frame is provided in a blank area of the graphical user interface, a first animation of the first virtual character releasing the first attack behavior to the second virtual character is displayed in the display frame, and a second animation of the second virtual character counterattacking based on the preset attack rule is displayed.
The computer device may determine the pre-attack preparation action of the first virtual character before the first attack behavior is released to the second virtual character. If the first virtual character is counterattacked by the second virtual character during execution of the pre-attack preparation action, it is determined that the first attack behavior does not take effect, and the controlled virtual character is controlled not to release the first attack behavior to the attack object.
If the first virtual character is not counterattacked by the second virtual character during execution of the pre-attack preparation action, it is determined that the first attack behavior takes effect, and the controlled virtual character is controlled to release the first attack behavior to the attack object.
Finally, the computer device determines the counterattack action with which the attack object counterattacks the controlled virtual character, and, in response to the counterattack action, generates first prompt information in the graphical user interface, the first prompt information being used to prompt the controlled virtual character to avoid the counterattack action.
In the embodiment of the application, through the attack control the user can view an animation of the attack object attacking according to the preset attack rule, so that the user comes to know the attack rule of the attack object. Meanwhile, the end of the second operation can directly trigger the controlled virtual character to release the first attack behavior to the attack object. This prevents the controlled virtual character from being repeatedly counterattacked and killed by the attack object, and improves the user's game experience.
Correspondingly, an embodiment of the application provides a skill display apparatus. As shown in fig. 8, fig. 8 is a schematic structural diagram of the skill display apparatus provided in the embodiment of the application. A terminal device provides a graphical user interface, the content displayed by the graphical user interface at least comprises a part of a game scene, a controlled virtual character located in the game scene, and at least one attack control, and the attack control is configured to respond to a first operation and control the controlled virtual character to release a first attack behavior in the game scene. The skill display apparatus 300 may include:
a generating module 310, configured to generate, in response to a second operation on the attack control, a first virtual character for simulating the controlled virtual character and a second virtual character for simulating the attack object in the graphical user interface based on the attack object specified by the second operation; the first operation is different from the second operation, and the attack object is a virtual object that attacks according to a preset attack rule.
The control module 320 is configured to control displaying, on the graphical user interface, a first animation of the first virtual character releasing the first attack behavior to the second virtual character, and displaying a second animation of the second virtual character counterattacking based on the preset attack rule, so as to simulate the controlled virtual character releasing the first attack behavior to the attack object through the attack control.
The control module 320 is further configured to determine, in response to the end of the second operation, whether the first attack behavior takes effect based on the first animation and the second animation;
if the first attack behavior takes effect, control the controlled virtual character to release the first attack behavior to the attack object;
and if the first attack behavior does not take effect, control the controlled virtual character not to release the first attack behavior to the attack object.
The control module 320 is further configured to determine the pre-attack preparation action of the first virtual character before the first attack behavior is released to the second virtual character;
if the first virtual character is not counterattacked by the second virtual character during execution of the pre-attack preparation action, determine that the first attack behavior takes effect;
and if the first virtual character is counterattacked by the second virtual character during execution of the pre-attack preparation action, determine that the first attack behavior does not take effect.
The control module 320 is further configured to determine, after the step of controlling the controlled virtual character to release the first attack behavior to the attack object, the counterattack action with which the attack object counterattacks the controlled virtual character;
and, in response to the counterattack action, generate first prompt information in the graphical user interface, the first prompt information being used to prompt the controlled virtual character to avoid the counterattack action.
The control module 320 is further configured to cancel display of the first animation and the second animation in response to the end of the second operation, or in response to the controlled virtual character being attacked, during the display of the first animation and the second animation.
The control module 320 is further configured to determine, during the display of the first animation and the second animation, the attackable time period of the controlled virtual character while the attack object attacks according to the preset attack rule;
the control module 320 is further configured to generate second prompt information in the graphical user interface within the attackable time period, the second prompt information being used to prompt the controlled virtual character to release the first attack behavior.
The control module 320 is further configured to provide a display frame in a blank area of the graphical user interface;
and control displaying, in the display frame, a first animation of the first virtual character releasing the first attack behavior to the second virtual character, and displaying a second animation of the second virtual character counterattacking based on the preset attack rule.
In the embodiment of the application, a terminal device provides a graphical user interface, the content displayed by the graphical user interface at least comprises a part of a game scene, a controlled virtual character located in the game scene, and at least one attack control, and the attack control is configured to respond to a first operation and control the controlled virtual character to release a first attack behavior in the game scene. The terminal device can also respond to a second operation on the attack control by generating, based on the attack object specified by the second operation, a first virtual character for simulating the controlled virtual character and a second virtual character for simulating the attack object in the graphical user interface; the first operation is different from the second operation, and the attack object is a virtual object that attacks according to a preset attack rule. Finally, the terminal device controls displaying, on the graphical user interface, a first animation of the first virtual character releasing the first attack behavior to the second virtual character, and displaying a second animation of the second virtual character counterattacking based on the preset attack rule, so as to simulate the controlled virtual character releasing the first attack behavior to the attack object through the attack control. The skills of different characters in the game are thereby displayed, helping the user understand the skill release mechanisms of different characters.
Correspondingly, an embodiment of the present application further provides a computer device. The computer device may be a terminal or a server, and the terminal may be a terminal device such as a smart phone, a tablet computer, a notebook computer, a touch screen, a game machine, a personal computer (PC), or a personal digital assistant (PDA). As shown in fig. 9, fig. 9 is a schematic structural diagram of a computer device according to an embodiment of the present application. The computer device 400 includes a processor 401 having one or more processing cores, a memory 402 having one or more computer-readable storage media, and a computer program stored on the memory 402 and operable on the processor. The processor 401 is electrically connected to the memory 402. Those skilled in the art will appreciate that the computer device configuration illustrated in the figure does not constitute a limitation of the computer device, which may include more or fewer components than those illustrated, a combination of certain components, or a different arrangement of components.
The processor 401 is the control center of the computer device 400; it connects the various parts of the entire computer device 400 using various interfaces and lines, and performs the various functions of the computer device 400 and processes data by running or loading software programs and/or modules stored in the memory 402 and invoking data stored in the memory 402, thereby monitoring the computer device 400 as a whole.
In the embodiment of the present application, the processor 401 in the computer device 400 loads instructions corresponding to the processes of one or more application programs into the memory 402, and the processor 401 runs the application programs stored in the memory 402, thereby implementing the following functions:
the method comprises the steps that a terminal device provides a graphical user interface, the content displayed by the graphical user interface at least comprises a part of game scene, a controlled virtual character located in the game scene and at least one attack control, and the attack control is configured to respond to a first operation and control the controlled virtual character to release a first attack behavior in the game scene.
In response to a second operation on the attack control, generating a first virtual character for simulating the controlled virtual character and a second virtual character for simulating the attack object in the graphical user interface based on the attack object specified by the second operation; the first operation is different from the second operation, and the attack object is a virtual object which attacks according to a preset posting rule.
And controlling to display a first animation of the first virtual character releasing the first attack behavior to the second virtual character on the graphical user interface and display a second animation of the second virtual character conducting back-click based on a preset calling-out rule so as to simulate the controlled virtual character to release the first attack behavior to the attack object through the attack control.
The processor 401 is also configured to implement the functions of:
in response to the end of the second operation, determining whether the first attack behavior takes effect based on the first animation and the second animation;
if the first attack behavior takes effect, controlling the controlled virtual character to release the first attack behavior to the attack object;
and if the first attack behavior does not take effect, controlling the controlled virtual character not to release the first attack behavior to the attack object.
The processor 401 is also configured to implement the functions of:
determining the pre-attack preparation action of the first virtual character before the first attack behavior is released to the second virtual character;
if the first virtual character is not counterattacked by the second virtual character during execution of the pre-attack preparation action, determining that the first attack behavior takes effect;
and if the first virtual character is counterattacked by the second virtual character during execution of the pre-attack preparation action, determining that the first attack behavior does not take effect.
The processor 401 is also configured to implement the functions of:
after the step of controlling the controlled virtual character to release the first attack behavior to the attack object, determining the counterattack action with which the attack object counterattacks the controlled virtual character;
and, in response to the counterattack action, generating first prompt information in the graphical user interface, the first prompt information being used to prompt the controlled virtual character to avoid the counterattack action.
The processor 401 is also configured to implement the functions of:
during the display of the first animation and the second animation, canceling display of the first animation and the second animation in response to the end of the second operation or in response to the controlled virtual character being attacked.
The processor 401 is also configured to implement the functions of:
during the display of the first animation and the second animation, determining the attackable time period of the controlled virtual character while the attack object attacks according to the preset attack rule;
and generating second prompt information in the graphical user interface within the attackable time period, the second prompt information being used to prompt the controlled virtual character to release the first attack behavior.
The processor 401 is also configured to implement the functions of:
providing a display frame in a blank area of the graphical user interface;
and controlling displaying, in the display frame, a first animation of the first virtual character releasing the first attack behavior to the second virtual character, and displaying a second animation of the second virtual character counterattacking based on the preset attack rule.
The processor 401 is also configured to implement the functions of:
determining a first display parameter corresponding to the first virtual character, and displaying the first virtual character based on the first display parameter, wherein the first display parameter is different from the display parameter corresponding to the controlled virtual character;
and determining a second display parameter corresponding to the second virtual character, and displaying the second virtual character based on the second display parameter, wherein the second display parameter is different from the display parameter corresponding to the attack object.
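The display-parameter distinction described above (the simulated characters rendered differently from the live controlled virtual character and attack object) can be sketched as deriving modified parameters from the real ones. The parameter names and values here are hypothetical assumptions, not part of the patent:

```python
# Illustrative sketch: derive display parameters for a simulated
# character that are guaranteed to differ from the real character's,
# so the player can tell the simulation apart from the live scene.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class DisplayParams:
    opacity: float
    tint: str

def simulated_params(real: DisplayParams) -> DisplayParams:
    """Reduce opacity and apply a tint; hypothetical values."""
    return replace(real, opacity=real.opacity * 0.6, tint="blue")

real = DisplayParams(opacity=1.0, tint="none")
sim = simulated_params(real)
assert sim != real  # the simulated look is distinct from the real one
```

The same derivation would be applied twice: once from the controlled virtual character's parameters for the first virtual character, and once from the attack object's parameters for the second virtual character.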
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Optionally, as shown in fig. 9, the computer device 400 further includes: touch-sensitive display screen 403, radio frequency circuit 404, audio circuit 405, input unit 406 and power 407. The processor 401 is electrically connected to the touch display 403, the rf circuit 404, the audio circuit 405, the input unit 406, and the power source 407 respectively. Those skilled in the art will appreciate that the computer device configuration illustrated in FIG. 9 does not constitute a limitation of computer devices, and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components.
The touch display screen 403 may be used for displaying a graphical user interface and receiving operation instructions generated by a user acting on the graphical user interface. The touch display screen 403 may include a display panel and a touch panel. The display panel may be used, among other things, to display information entered by or provided to a user and the various graphical user interfaces of the computer device, which may be made up of graphics, text, icons, video, and any combination thereof. Alternatively, the display panel may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. The touch panel may be used to collect touch operations of a user on or near the touch panel (for example, operations of the user on or near the touch panel using any suitable object or accessory such as a finger or a stylus pen) and generate corresponding operation instructions that drive the corresponding programs. Alternatively, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of a user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 401, and can receive and execute commands sent by the processor 401. The touch panel may overlay the display panel; when the touch panel detects a touch operation on or near it, it transmits the operation to the processor 401 to determine the type of the touch event, and the processor 401 then provides a corresponding visual output on the display panel according to the type of the touch event.
In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display screen 403 to realize the input and output functions. However, in some embodiments, the touch panel and the display panel can be implemented as two separate components to perform the input and output functions. That is, the touch display screen 403 may also be used as a part of the input unit 406 to implement an input function.
In the embodiment of the present application, a game application is executed by the processor 401 to generate a graphical user interface on the touch display screen 403, where a virtual scene on the graphical user interface includes at least one skill control area, and the skill control area includes at least one skill control. The touch display screen 403 is used for presenting a graphical user interface and receiving an operation instruction generated by a user acting on the graphical user interface.
The rf circuit 404 may be used for transceiving rf signals to establish wireless communication with a network device or other computer device via wireless communication, and for transceiving signals with the network device or other computer device.
The audio circuit 405 may be used to provide an audio interface between a user and the computer device through a speaker and a microphone. On one hand, the audio circuit 405 may transmit the electrical signal converted from received audio data to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts a collected sound signal into an electrical signal, which is received by the audio circuit 405 and converted into audio data; the audio data is then output to the processor 401 for processing, after which it is sent, for example, to another computer device via the radio frequency circuit 404, or output to the memory 402 for further processing. The audio circuit 405 may also include an earbud jack to provide communication between peripheral headphones and the computer device.
The input unit 406 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 407 is used to power the various components of the computer device 400. Optionally, the power source 407 may be logically connected to the processor 401 through a power management system, so as to implement functions of managing charging, discharging, power consumption management, and the like through the power management system. The power supply 407 may also include one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, or any other component.
Although not shown in fig. 9, the computer device 400 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., which are not described in detail herein.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
As can be seen from the above, the computer device provided in this embodiment provides a graphical user interface, where the content displayed by the graphical user interface at least includes a part of a game scene, a controlled virtual character located in the game scene, and at least one attack control, and the attack control is configured to control the controlled virtual character to release a first attack behavior in the game scene in response to a first operation. The computer device can also respond to a second operation on the attack control by generating, based on the attack object specified by the second operation, a first virtual character for simulating the controlled virtual character and a second virtual character for simulating the attack object in the graphical user interface; the first operation is different from the second operation, and the attack object is a virtual object that attacks according to a preset attack rule. Finally, the computer device controls displaying, on the graphical user interface, a first animation of the first virtual character releasing the first attack behavior to the second virtual character, and displaying a second animation of the second virtual character counterattacking based on the preset attack rule, so as to simulate the controlled virtual character releasing the first attack behavior to the attack object through the attack control. The skills of different characters in the game are thereby displayed, helping the user understand the skill release mechanisms of different characters.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer-readable storage medium in which a plurality of computer programs are stored, where the computer programs can be loaded by a processor to execute the steps in any skill display method provided by the embodiments of the present application. For example, the computer program may perform the following steps:
A terminal device provides a graphical user interface, the content displayed by the graphical user interface at least comprises a part of a game scene, a controlled virtual character located in the game scene, and at least one attack control, and the attack control is configured to respond to a first operation and control the controlled virtual character to release a first attack behavior in the game scene.
In response to a second operation on the attack control, a first virtual character for simulating the controlled virtual character and a second virtual character for simulating the attack object are generated in the graphical user interface based on the attack object specified by the second operation; the first operation is different from the second operation, and the attack object is a virtual object that attacks according to a preset attack rule.
A first animation of the first virtual character releasing the first attack behavior to the second virtual character is displayed on the graphical user interface, and a second animation of the second virtual character counterattacking based on the preset attack rule is displayed, so as to simulate the controlled virtual character releasing the first attack behavior to the attack object through the attack control.
The computer program is further for performing:
in response to the end of the second operation, determining whether the first attack behavior takes effect based on the first animation and the second animation;
if the first attack behavior takes effect, controlling the controlled virtual character to release the first attack behavior to the attack object;
and if the first attack behavior does not take effect, controlling the controlled virtual character not to release the first attack behavior to the attack object.
The computer program is also for performing:
determining the pre-attack preparation action of the first virtual character before the first attack behavior is released to the second virtual character;
if the first virtual character is not counterattacked by the second virtual character during execution of the pre-attack preparation action, determining that the first attack behavior takes effect;
and if the first virtual character is counterattacked by the second virtual character during execution of the pre-attack preparation action, determining that the first attack behavior does not take effect.
The computer program is further for performing:
after the step of controlling the controlled virtual character to release the first attack behavior to the attack object, determining the counterattack action with which the attack object counterattacks the controlled virtual character;
and, in response to the counterattack action, generating first prompt information in the graphical user interface, the first prompt information being used to prompt the controlled virtual character to avoid the counterattack action.
The computer program is further for performing:
during the display of the first animation and the second animation, canceling display of the first animation and the second animation in response to the end of the second operation or in response to the controlled virtual character being attacked.
The computer program is further for performing:
during the display of the first animation and the second animation, determining the attackable time period of the controlled virtual character while the attack object attacks according to the preset attack rule;
and generating second prompt information in the graphical user interface within the attackable time period, the second prompt information being used to prompt the controlled virtual character to release the first attack behavior.
The computer program is also for performing:
providing a display frame in a blank area of the graphical user interface;
and controlling displaying, in the display frame, a first animation of the first virtual character releasing the first attack behavior to the second virtual character, and displaying a second animation of the second virtual character counterattacking based on the preset attack rule.
The computer program is further for performing:
determining a first display parameter corresponding to the first virtual character, and displaying the first virtual character based on the first display parameter, wherein the first display parameter is different from the display parameter corresponding to the controlled virtual character;
and determining a second display parameter corresponding to the second virtual character, and displaying the second virtual character based on the second display parameter, wherein the second display parameter is different from the display parameter corresponding to the attack object.
In the embodiment of the application, a terminal device provides a graphical user interface whose displayed content at least comprises a part of a game scene, a controlled virtual character located in the game scene, and at least one attack control, where the attack control is configured to, in response to a first operation, control the controlled virtual character to release a first attack behavior in the game scene. The terminal device can also, in response to a second operation on the attack control, generate in the graphical user interface a first virtual character for simulating the controlled virtual character and a second virtual character for simulating an attack object, based on the attack object specified by the second operation; the first operation is different from the second operation, and the attack object is a virtual object that attacks according to a preset move rule. Finally, the terminal device controls the graphical user interface to display a first animation of the first virtual character releasing the first attack behavior to the second virtual character and a second animation of the second virtual character counterattacking based on the preset move rule, so as to simulate the controlled virtual character releasing the first attack behavior to the attack object through the attack control. The skills of different characters in the game are thereby demonstrated, helping the user understand the skill release mechanism of different characters.
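The overall flow summarized above can be sketched end to end as follows. This is an illustrative model only: the operation names ("first"/"second", e.g. a tap versus a long press) and the returned dictionaries are hypothetical stand-ins for the claimed behaviors:

```python
# Hypothetical end-to-end sketch of the summarized flow: a first operation
# releases the attack directly, while a distinct second operation spawns
# simulated characters and previews both animations instead.

def handle_attack_control(operation, target):
    """Dispatch on the operation type; names here are illustrative only."""
    if operation == "first":    # e.g. a tap on the attack control
        return {"action": "release", "target": target}
    if operation == "second":   # e.g. a long press on the attack control
        return {
            "action": "preview",
            "first_animation": f"simulated self attacks {target}",
            "second_animation": f"simulated {target} counterattacks by preset move rule",
        }
    raise ValueError(f"unknown operation: {operation}")

print(handle_attack_control("first", "boss"))
print(handle_attack_control("second", "boss")["first_animation"])
```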
The above operations can be implemented with reference to the foregoing embodiments and are not described in detail herein.
Wherein the storage medium may include: a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disc, and the like.
Since the computer program stored in the storage medium can execute the steps of any skill display method provided in the embodiments of the present application, it can achieve the beneficial effects achievable by any skill display method provided in the embodiments of the present application; for details, see the foregoing embodiments, which are not repeated herein.
The skill display method, apparatus, storage medium, and computer device provided in the embodiments of the present application are described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, those skilled in the art may, according to the idea of the present application, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present application.

Claims (11)

1. A skill display method, characterized in that a graphical user interface is provided through a terminal device, the content displayed by the graphical user interface at least comprises a part of a game scene, a controlled virtual character located in the game scene, and at least one attack control, and the attack control is configured to, in response to a first operation, control the controlled virtual character to release a first attack behavior in the game scene; the method comprises the following steps:
in response to a second operation on the attack control, generating, in the graphical user interface, a first virtual character for simulating the controlled virtual character and a second virtual character for simulating an attack object, based on the attack object specified by the second operation; the first operation is different from the second operation, and the attack object is a virtual object that attacks according to a preset move rule;
and controlling to display, on the graphical user interface, a first animation of the first virtual character releasing the first attack behavior to the second virtual character, and a second animation of the second virtual character performing a counterattack based on the preset move rule, so as to simulate the controlled virtual character releasing the first attack behavior to the attack object through the attack control.
2. The skill display method according to claim 1, further comprising:
in response to the end of the second operation, determining whether the first attack behavior takes effect based on the first animation and the second animation;
if the first attack behavior takes effect, controlling the controlled virtual character to release the first attack behavior to the attack object;
and if the first attack behavior does not take effect, controlling the controlled virtual character not to release the first attack behavior to the attack object.
3. The skill display method according to claim 2, wherein the step of determining whether the first attack behavior takes effect based on the first animation and the second animation comprises:
determining a pre-attack preparation action of the first virtual character before the first attack behavior is released to the second virtual character;
if the first virtual character is not counterattacked by the second virtual character during execution of the pre-attack preparation action, determining that the first attack behavior takes effect;
and if the first virtual character is counterattacked by the second virtual character during execution of the pre-attack preparation action, determining that the first attack behavior does not take effect.
4. The skill display method according to claim 2, wherein after the step of controlling the controlled virtual character to release the first attack behavior to the attack object, the method further comprises:
determining a counterattack action by which the attack object counterattacks the controlled virtual character;
and in response to the counterattack action, generating first prompt information in the graphical user interface, wherein the first prompt information is used for prompting the controlled virtual character to dodge the counterattack action.
5. The skill display method according to claim 1, further comprising:
in the process of displaying the first animation and the second animation, in response to the end of the second operation or to the controlled virtual character being attacked, canceling the displayed first animation and second animation.
6. The skill display method according to claim 1, further comprising:
in the process of displaying the first animation and the second animation, determining an attack time window of the controlled virtual character while the attack object makes moves according to the preset move rule;
and generating second prompt information in the graphical user interface within the attack time window, wherein the second prompt information is used for prompting the controlled virtual character to release the first attack behavior.
7. The skill display method according to claim 1, wherein the step of controlling to display, on the graphical user interface, the first animation of the first virtual character releasing the first attack behavior to the second virtual character and the second animation of the second virtual character performing a counterattack based on the preset move rule comprises:
providing a display frame in a blank area of the graphical user interface;
and controlling to display, in the display frame, the first animation of the first virtual character releasing the first attack behavior to the second virtual character, and the second animation of the second virtual character performing a counterattack based on the preset move rule.
8. The skill display method according to claim 1, further comprising:
determining a first display parameter corresponding to the first virtual character, and displaying the first virtual character based on the first display parameter, wherein the first display parameter is different from a display parameter corresponding to the controlled virtual character;
and determining a second display parameter corresponding to the second virtual character, and displaying the second virtual character based on the second display parameter, wherein the second display parameter is different from a display parameter corresponding to the attack object.
9. A skill display device, characterized in that a graphical user interface is provided through a terminal device, the content displayed by the graphical user interface at least comprises a part of a game scene, a controlled virtual character located in the game scene, and at least one attack control, and the attack control is configured to, in response to a first operation, control the controlled virtual character to release a first attack behavior in the game scene; the device comprises:
a generating module, configured to generate, in response to a second operation on the attack control, a first virtual character for simulating the controlled virtual character and a second virtual character for simulating an attack object in the graphical user interface, based on the attack object specified by the second operation; the first operation is different from the second operation, and the attack object is a virtual object that attacks according to a preset move rule;
and a control module, configured to control to display, on the graphical user interface, a first animation of the first virtual character releasing the first attack behavior to the second virtual character, and a second animation of the second virtual character performing a counterattack based on the preset move rule, so as to simulate the controlled virtual character releasing the first attack behavior to the attack object through the attack control.
10. A computer-readable storage medium storing instructions adapted to be loaded by a processor to perform the steps of the skill display method according to any one of claims 1 to 8.
11. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the skill display method according to any one of claims 1 to 8.
CN202210975578.3A 2022-08-15 2022-08-15 Skill display method and device, storage medium and computer equipment Pending CN115317908A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210975578.3A CN115317908A (en) 2022-08-15 2022-08-15 Skill display method and device, storage medium and computer equipment


Publications (1)

Publication Number Publication Date
CN115317908A 2022-11-11

Family

ID=83922829

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210975578.3A Pending CN115317908A (en) 2022-08-15 2022-08-15 Skill display method and device, storage medium and computer equipment

Country Status (1)

Country Link
CN (1) CN115317908A (en)

Similar Documents

Publication Publication Date Title
CN113101652A (en) Information display method and device, computer equipment and storage medium
CN111760274A (en) Skill control method and device, storage medium and computer equipment
CN113082688A (en) Method and device for controlling virtual character in game, storage medium and equipment
CN113426124A (en) Display control method and device in game, storage medium and computer equipment
WO2023005234A1 (en) Virtual resource delivery control method and apparatus, computer device, and storage medium
CN115193049A (en) Virtual role control method, device, storage medium and computer equipment
CN113398566A (en) Game display control method and device, storage medium and computer equipment
CN113181632A (en) Information prompting method and device, storage medium and computer equipment
CN112843716A (en) Virtual object prompting and viewing method and device, computer equipment and storage medium
CN114225412A (en) Information processing method, information processing device, computer equipment and storage medium
CN115193064A (en) Virtual object control method and device, storage medium and computer equipment
CN115999153A (en) Virtual character control method and device, storage medium and terminal equipment
CN114159789A (en) Game interaction method and device, computer equipment and storage medium
CN115317908A (en) Skill display method and device, storage medium and computer equipment
CN113867873A (en) Page display method and device, computer equipment and storage medium
CN113426115A (en) Game role display method and device and terminal
CN113413600A (en) Information processing method, information processing device, computer equipment and storage medium
CN113398590B (en) Sound processing method, device, computer equipment and storage medium
CN113398564B (en) Virtual character control method, device, storage medium and computer equipment
US20240131434A1 (en) Method and apparatus for controlling put of virtual resource, computer device, and storage medium
CN113398590A (en) Sound processing method, sound processing device, computer equipment and storage medium
CN115212566A (en) Virtual object display method and device, computer equipment and storage medium
CN115193035A (en) Game display control method and device, computer equipment and storage medium
CN117654028A (en) Game display control method and device, computer equipment and storage medium
CN116850594A (en) Game interaction method, game interaction device, computer equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination