CN115707499A - Skill effect display method and device - Google Patents


Publication number
CN115707499A
Authority
CN
China
Prior art keywords: skill, game, target, animation, black box
Legal status: Pending (assumed; not a legal conclusion — Google has not performed a legal analysis)
Application number
CN202110956065.3A
Other languages
Chinese (zh)
Inventor
梁秋实
Current Assignee (as listed; not verified by legal analysis)
Perfect World Chongqing Interactive Technology Co., Ltd.
Original Assignee
Perfect World Chongqing Interactive Technology Co., Ltd.
Application filed by Perfect World Chongqing Interactive Technology Co., Ltd.
Priority to CN202110956065.3A
Publication of CN115707499A

Abstract

The application provides a skill effect display method and device. The method includes: in response to a preset instruction for a target skill in a game, creating a black box space in the game scene; selecting a target animation script matching the preset instruction from animation scripts preconfigured for the game's skills; invoking game resources in the game scene based on the target animation script to render a target animation containing the target skill in the black box space; and displaying the target animation in the game's interactive interface to present the target skill effect. Because the skill effect is rendered directly into the game's interactive interface, the target skill can be shown in real time without additional resources, greatly improving display efficiency and the player experience. The method also avoids the oversized game installation files that result from bundling skill-effect videos, substantially reducing the distribution burden of those files.

Description

Skill effect display method and device
Technical Field
The application belongs to the field of data processing, and particularly relates to a skill effect display method and device.
Background
Currently, games usually introduce their various skill effects to players through text descriptions so that players can choose among them. However, text descriptions often fail to convey the actual behavior of a skill intuitively and can easily mislead the player.
To avoid the misunderstandings caused by the conventional scheme, the related art introduces skill effects through pre-recorded videos. But video files occupy considerable storage space, which greatly inflates the data volume of the game installation file and puts pressure on its distribution.
Disclosure of Invention
The application provides a skill effect display method and device for reducing server processing pressure and improving game smoothness.
In a first aspect, the present application provides a skill effect display method, including:
creating, in response to a preset instruction for a target skill in a game, a black box space in the game scene, where the black box space lies in an area of the game scene invisible to the player;
selecting, from animation scripts preconfigured for the game's skills, a target animation script matching the preset instruction;
invoking game resources in the game scene based on the target animation script to render, in the black box space, a target animation containing the target skill;
and displaying the target animation in the game's interactive interface to show the player the skill effect corresponding to the target skill.
In a second aspect, an embodiment of the present application provides a skill effect display apparatus, including:
a creating module, configured to create a black box space in the game scene in response to a preset instruction for a target skill in the game, where the black box space lies in an area of the game scene invisible to the player;
a selection module, configured to select a target animation script matching the preset instruction from animation scripts preconfigured for the game's skills;
a calling module, configured to invoke game resources in the game scene based on the target animation script, so as to render a target animation containing the target skill in the black box space;
and a display module, configured to display the target animation in the game's interactive interface to show the player the skill effect corresponding to the target skill.
In a third aspect, the present application provides an electronic device, including a memory and a processor, where the memory stores a computer program operable on the processor, and the processor executes the computer program to implement the skill effect presentation method according to any one of the first aspect.
In a fourth aspect, the present application provides a computer-readable medium having non-volatile program code executable by a processor, the program code causing the processor to perform the skill effect display method of any one of the first aspect.
In the technical scheme provided by the embodiments of the application, a black box space is created in the game scene in response to a preset instruction for a target skill, the black box space lying in an area of the scene invisible to the player. A target animation script matching the preset instruction is then selected from animation scripts preconfigured for the game's skills, game resources in the game scene are invoked based on that script, and a target animation containing the target skill is rendered in the black box space. Displaying that animation in the game's interactive interface presents the target skill effect.
In this scheme, the black box space lies within the game scene and the resources the target animation invokes are also in-scene, so the skill effect is rendered directly into the game's interactive interface: the target skill can be shown in real time without additional resources, which greatly improves display efficiency and the player experience. Moreover, the storage an animation script occupies is far smaller than a video file, so the scheme avoids the oversized installation files caused by storing skill-effect introduction videos and greatly reduces the distribution burden of the game installation files.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic flowchart of a skill effect display method according to an embodiment of the present application;
FIG. 2 is a schematic view of an interactive interface according to an embodiment of the present application;
FIG. 3 is a schematic view of another interactive interface of an embodiment of the present application;
FIG. 4 is a schematic illustration of yet another interactive interface according to an embodiment of the present application;
FIG. 5 is a schematic view of a skill effect demonstration apparatus according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
In games, to enrich the player's experience, many skills are provided for different characters or play modes. Each skill has its own trigger mode and skill effects, such as the acting subject, action range, attack damage value, defense value, and visual characteristics.
To help the player choose a target skill suited to their situation, the skill effects of the various skills are usually introduced first. Two introduction methods are common at present: text description and video.
On the one hand, the related art uses text descriptions to introduce the various skill effects for selection. However, text struggles to convey the actual behavior of a skill intuitively and can easily mislead the player.
On the other hand, to avoid the misunderstandings caused by textual descriptions, the related art introduces skill effects through pre-recorded videos. Because video files occupy considerable storage, they greatly increase the data volume of the game installation file and put pressure on its distribution.
In addition, on the game client, performance limits of the playback device mean that a pre-recorded video may load slowly or even stutter during playback.
In order to solve at least one technical problem, a skill effect display scheme is provided in the embodiments of the present application.
The skill effect display scheme provided by the embodiment of the application can be executed by an electronic device, and the electronic device can be a terminal device such as a smart phone, a tablet computer, a PC, a notebook computer, and the like. In an alternative embodiment, the electronic device may have a service program installed thereon for executing the skill effect presentation scheme.
In the embodiment of the present application, the execution subject of the skill effect display scheme may be a game client or a server, or a game editor, or may be other electronic devices or application programs loaded on the electronic devices, which is not limited in the present application.
Fig. 1 is a flowchart of a skill effect demonstration method provided in an embodiment of the present application, and as shown in fig. 1, the method includes:
101. in response to a preset instruction for a target skill in the game, create a black box space in the game scene;
102. select a target animation script matching the preset instruction from the animation scripts preconfigured for the game's skills;
103. invoke game resources in the game scene based on the target animation script to render a target animation containing the target skill in the black box space;
104. display the target animation in the game's interactive interface to present the target skill effect.
In the embodiments of the application, the game scene is the scene the virtual character currently occupies; the primary one considered here is the first virtual character controlled by the player.
In practice, the game scene varies with the game type. It may be a high-freedom scene in which all players are visible to one another, such as an open-world scene where the player can move a virtual character freely and chat with other players. It may be a single-terrain scene, such as a battle room in a third-person shooter. Or it may be a scene that switches with the game's storyline.
In the embodiments of the application, the preset instruction for a skill mainly expresses the player's intent to view a target skill. Optionally, controls or instructions that trigger the introduction procedures of the various skills are preconfigured in the game. For clarity, the skill the player intends to view is called the target skill.
Note that the target skill may be a single skill or a combination of skills. For example, the player may enter a character's introduction page and view a single skill the character can use, such as drawing a bow to shoot an arrow, or a skill combination, such as a sword attack combined with shield defense.
In an alternative embodiment, the preset instruction may be player-triggered, such as a preview instruction for the target skill. For example, FIG. 2 shows a skill introduction interface in which selecting the preview control of target skill a triggers display of its effect. In practice, the preview instruction may also be a skill viewing instruction, a professional skill browsing instruction, a character introduction instruction, and so on, which the application does not limit.
In another embodiment, the preset instruction may be generated automatically by the game client or server. For example, it may be a game-level trigger instruction. Specifically, suppose that in a certain game level, or within a certain area of the game, special skills bound to that level or area can be used. In that case, if the player is detected entering the level or area, the game client or server automatically generates a preview instruction for the special skill, introducing its effect to the player.
Based on the preset instruction introduced above, the creation of the black box space in the game scene is triggered. The black box space is in a player-invisible area in the game scene.
Optionally, the black box space lies within a preset range centered on the origin of the game scene, where the origin generally refers to the spawn location of the first virtual character. Because the player usually moves the first virtual character away from the origin, and the first camera (i.e., the virtual camera that shoots the virtual character) follows it away, placing the black box space around the origin effectively keeps the space out of the first camera's shots and thus invisible to the player.
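The origin-centred placement can be sketched as a simple distance check. This is an illustrative sketch only, not the patent's implementation; the function name, coordinate convention, and all numeric values are assumptions.

```python
import math

def black_box_hidden(camera_pos, box_center=(0.0, 0.0),
                     box_radius=10.0, camera_draw_distance=50.0):
    """Illustrative check: the black box space around the origin stays
    invisible while the player's first camera (which follows the first
    virtual character away from the origin) is farther from the box
    centre than its draw distance plus the box radius.
    All parameter values here are assumed for illustration."""
    dist = math.hypot(camera_pos[0] - box_center[0],
                      camera_pos[1] - box_center[1])
    return dist > camera_draw_distance + box_radius
```

Under these assumed values, a first camera 100 units from the origin cannot see the box, while one only a few units away still could — which is why the scheme relies on the player having moved away from the spawn point.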
Further optionally, in order to acquire an image in the black box space, a virtual camera (for distinction, the virtual camera is also referred to as a second camera) for shooting the space is further provided in the black box space.
In practice, optionally, the shooting angle of the second camera can be specified directly by the target animation script. Specifically, control parameters of the second camera (such as camera parameters and spring arm parameters) are set in the target animation script, thereby setting its shooting angle toward the virtual character the script schedules. The camera parameters set the second camera's shooting view; the spring arm parameters set the camera's position relative to the virtual object.
The shooting view of the second camera can also be set by the user through the control parameters. In an optional embodiment, assume the second camera comprises a camera assembly and a spring arm assembly. For a second camera placed in the black box space, in response to control parameters input by the user, the camera's image-capture view is adjusted so that its optical axis points at the virtual character in the black box space, and the pitch angle between the optical axis and the virtual character in the same plane is taken as the target pitch angle, realizing a third-person view of the character.
Alternatively, the shooting angle of the second camera may be adjusted automatically according to the virtual character the target animation script schedules, for example to follow the character in the black box space and capture it releasing the target skill. In an optional embodiment, still assuming the second camera comprises a camera assembly and a spring arm assembly, the position in the black box space of the virtual character scheduled by the script is obtained; control parameters of the second camera are determined from that position, and the camera's image-capture view is adjusted through them so that the captured image always contains the virtual character, realizing a follow view of its movement.
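The follow-view adjustment can be illustrated by computing the yaw and pitch that point the second camera's optical axis at the character. This is a hedged sketch: the patent gives no formulas, and the coordinate convention (x/y ground plane, z up) is an assumption.

```python
import math

def follow_params(camera_pos, character_pos):
    """Return (yaw_deg, pitch_deg) that aim the second camera's optical
    axis at the virtual character in the black box space.
    Assumes x/y is the ground plane and z is up (illustrative only)."""
    dx = character_pos[0] - camera_pos[0]
    dy = character_pos[1] - camera_pos[1]
    dz = character_pos[2] - camera_pos[2]
    yaw = math.degrees(math.atan2(dy, dx))                     # heading on the ground plane
    pitch = math.degrees(math.atan2(dz, math.hypot(dx, dy)))   # target pitch angle
    return yaw, pitch
```

A spring-arm style follow would recompute these parameters each frame from the character's current position, so the captured image always contains the character.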
In addition to the above examples, the present application may also set the shooting angle of the second camera by other methods, which are not limited to this application, and are not exhaustive here.
It should be noted that the black box space contains game resources preset in the game scene, including but not limited to various game items, scenery, monster NPCs, storyline NPCs, and the like.
During creation of the black box space, optionally, game resources unrelated to the target skill are deleted from it, so that the scene displaying the target skill effect is visually distinguished from the ordinary game scene and the effect is highlighted. For example, storyline NPCs, scenery, and play items unrelated to the target skill are masked out of the black box space.
In another embodiment, part of the original scene's game resources can be retained in the black box space to avoid too abrupt a visual break between the game scene and the black box space. For example, the brightness of some game resources is reduced, or their transparency increased, so that these resources make the transition from the game scene to the black box space more natural and keep the skill-introduction space from feeling severed from the original scene.
Based on the preset instruction introduced above, a target animation script matched with the preset instruction is selected from animation scripts configured for various skills in the game in advance. Furthermore, after the black box space is created, game resources in the game scene are called based on the target animation script, so that the target animation containing the target skill is rendered in the black box space.
The animation scripts configured for the game's skills are optionally shipped in the game installation file or a game update file. In practice, in an optional embodiment, the animation script for any skill can be obtained by creating a skill presentation animation (i.e., the target animation of this application) in the game engine in advance through a scenario editor, then outputting the corresponding animation script file. The file may be in json format, typically on the order of tens of KB — obviously far smaller than a skill introduction video.
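As an illustration of how compact such a script can be, here is a hypothetical json-style animation script. Every field name is invented for this sketch — the patent does not publish a schema — but the configurable items mirror those the description lists (executing object, interaction count, action range, effect timeline, duration).

```python
import json

# Hypothetical script structure; every field name is an assumption.
skill_script = {
    "skill_tag": "attack_a",               # identifies the skill
    "actor": "first_virtual_character",    # virtual object executing the skill
    "interaction_count": 1,                # number of interacting objects
    "action_range": 3.5,                   # skill action range
    "effects": [{"t": 0.0, "type": "cast"},
                {"t": 0.8, "type": "hit"}],
    "camera": {"spring_arm_length": 4.0, "fov": 60},
    "duration_s": 2.5,
}

encoded = json.dumps(skill_script)
```

Even with many more timeline entries, a file like this stays within the tens-of-KB range the text cites, orders of magnitude smaller than a recorded video.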
Taking the game client as an example, animation scripts of various skills can be stored in the game client through the game installation package, so that after the preset instruction is detected, the game client calls a target animation script matched with the preset instruction.
It should be noted that matching here means the skill corresponding to the target animation script is the skill the preset instruction intends to invoke. Optionally, each skill's animation script is marked in advance with a skill tag; the system then checks whether the tag on a script is consistent with the skill tag carried by the preset instruction, and if so, determines that script to be the target animation script matching the instruction.
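The tag-matching step can be sketched as a lookup over the preconfigured scripts. Function and field names are illustrative assumptions.

```python
def select_target_script(scripts, instruction_skill_tag):
    """Return the preconfigured animation script whose skill tag matches
    the tag carried by the preset instruction, or None if none matches.
    (Illustrative sketch; the patent only describes the comparison.)"""
    for script in scripts:
        if script.get("skill_tag") == instruction_skill_tag:
            return script
    return None
```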
Then, after the target animation script matching the preset instruction is determined, a scenario animation system in the game executes the script to invoke game resources and restore the target animation in the black box space. In an optional embodiment, invoking game resources in the game scene based on the target animation script to render a target animation containing the target skill in the black box space may be implemented as:
obtaining, from the target animation script, the game resources each frame needs to invoke, the resources including virtual objects in the game scene; rendering the invoked resources into the black box space in sequence to obtain an interaction model comprising the virtual objects; and capturing, with the virtual camera in the black box space, images of a virtual object executing the target skill, to serve as the target animation.
In the steps, the game resources existing in the game are called through the target animation script, so that the computing resources consumed in the target animation playing process can be greatly reduced, and the target animation playing efficiency is further improved.
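The steps above — fetch per-frame resources, render them into the black box, capture with the virtual camera — can be sketched as a timeline walk. The callables stand in for engine facilities and are assumptions, not the patent's API.

```python
def render_target_animation(script, resource_pool, capture_frame):
    """For each timeline step, pull the named game resources from the
    existing scene pool and let the black-box camera capture a frame.
    `capture_frame` stands in for the second camera (an assumption)."""
    frames = []
    for step in script["timeline"]:
        resources = [resource_pool[name] for name in step["resources"]]
        frames.append(capture_frame(step["t"], resources))
    return frames
```

Because the resources come from the pool already loaded for the game scene, the loop adds no asset loading of its own — which is the source of the playback-efficiency gain the text describes.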
The virtual objects in the game scene include, but are not limited to, a first virtual character controlled by the player, and/or a second virtual character that interacts with the first. For example, the second virtual character may be an opponent NPC the first character encounters in the game, such as a monster NPC bound to the black box space or bound to the target skill; it may also be another virtual character the first character has interacted with, such as the image models of other players or storyline NPCs it has met. Optionally, the number of virtual objects varies with the skill type: if the target skill is a one-on-one combat skill, there is a single second virtual character; if it is a group skill, there are several.
The interaction model containing the virtual objects can be set according to the target skill type. For example, the interaction model may be a battle-process model of a first virtual character a and a second virtual character b, as shown in FIG. 2; there, character a uses an attack skill (i.e., the target skill), so the model exhibits the attack skill effect, including its effect object. As another example, the interaction model may be a skill-delivery model in which character a heals character b, as shown in FIG. 3; character a uses a healing skill, so the model exhibits the healing skill effect. As yet another example, the interaction model may be a team-battle model in which character a chants a spell against character b, as shown in FIG. 4; character a uses a chanted skill, so the model exhibits that skill's effect, including its action range and acting subject. The action ranges of these skills are indicated by the circles in FIGS. 2 to 4.
Next, in 104, the second camera shoots the target animation in the black box space, and the shooting result is rendered in the game's interactive interface. Through this process, as the animation timeline advances, the images shot by the second camera are rendered one by one in the interactive interface, forming an animation effect containing the target skill effect.
In an optional embodiment, at least one frame is captured from the target animation, the frame(s) containing the effect of the virtual object executing the target skill; the frame(s) are then converted into at least one map in a preset format and rendered to the interactive interface.
Assume the preset-format map is a Render Texture map. The conversion and rendering step can then be implemented as: converting, in real time, a sequence of consecutive frames captured from the target animation into Render Texture maps; and, following the animation timeline, rendering the maps one by one as UI pictures to the game client's interactive interface, forming the skill effect animation corresponding to the target skill. Maps in other formats can also be used to render the interactive interface.
Through these steps, passing frames as Render Texture maps makes the skill effect easy to display in the UI while adding only the single memory buffer the Render Texture map occupies, greatly reducing the storage footprint of the game installation package.
In this way, the game client calls game resources, and the generated Render Texture map is rendered in real time to the interactive interface of the game client, so as to form an animation effect containing the target skill effect.
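The single-buffer point can be illustrated as follows: each captured frame is written into one reused texture buffer and drawn to the UI in timeline order, so memory cost stays constant regardless of animation length. `ReusableTexture`, `write`, and `ui_draw` are invented stand-ins for the Render Texture mechanism, not a real engine API.

```python
class ReusableTexture:
    """Stand-in for a single Render Texture buffer (an assumption)."""
    def __init__(self):
        self.pixels = None
        self.writes = 0

    def write(self, frame):
        self.pixels = frame      # overwrite in place: one buffer, reused
        self.writes += 1
        return self.pixels

def play_on_ui(frames, texture, ui_draw):
    """Convert frames one by one into the shared texture and draw each
    to the interactive interface along the animation timeline."""
    for frame in frames:
        ui_draw(texture.write(frame))
```

However long the animation, only one texture's worth of memory is held, matching the claim that a single Render Texture cache is added.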
In this embodiment, because the black box space is inside the game scene and the resources the target animation invokes are also in-scene, displaying the target animation in the game's interactive interface reflects the target skill effect intuitively, realizes real-time display of the effect, greatly improves display efficiency, and improves the player experience. In addition, the storage an animation script occupies is far smaller than a video file, so the scheme avoids the oversized installation files caused by storing skill-effect introduction videos and greatly reduces the distribution burden of the game installation files.
In the above or the following embodiments, optionally, a scenario editor configures a corresponding skill presentation animation for each skill in the game, and each skill's presentation animation is converted into an animation script.
Each skill's animation script drives the virtual objects in the game scene to execute that skill according to preset skill logic. In practice, the script matched to a skill includes, but is not limited to, any one or combination of: the virtual object executing the skill, the number of interacting objects for the skill, the skill action range, the skill effect presentation, the hit effect presentation, and the animation duration.
Specifically, for each skill in the game, a corresponding skill presentation animation is created in advance within the game engine through the scenario editor, then output as a json-format animation script file for distribution; in the game, game resources are invoked according to that file to restore the presentation animation (i.e., the target animation) in the black box space. The animation script file is, in essence, a logic script that invokes resources already present in the game rather than extra assets, so carrying only the script files in the installation package suffices to restore the animations in-game, greatly reducing the package's storage footprint. Optionally, the animation script of any skill can be reconfigured through the scenario editor to update that skill's effect in the game scene, achieving hot update of the skill-effect introduction model and improving its update efficiency.
In practice, editing a skill presentation animation resembles editing a conventional scenario animation; optionally, the scenario editor can configure the virtual object that executes the skill, the number of interacting objects for the skill, the skill action range, the skill effect presentation, the hit effect presentation, the animation duration, and the arrangement of the animation timeline.
In the foregoing or the following embodiments, optionally, for multiple different types of skill effects, animation script files realizing different visual effects are created from the animation script modules that make up those effects, according to each skill effect's visual characteristics.
In this embodiment, animation script files for skill effects of the same type can further be generated in batches by skill type. In practice, for batch creation of same-type skills, the configuration information can be packaged into logic data modules (i.e., animation script modules) that the animation scripts of same-type skills invoke from one another, further improving script configuration efficiency.
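The batch-creation idea — shared logic modules reused across same-type skills — can be sketched as composing each per-skill script from a base module plus skill-specific overrides. All names are illustrative assumptions.

```python
def make_script(skill_tag, base_module, overrides):
    """Compose an animation script for one skill from a logic module
    shared by its skill type plus skill-specific overrides.
    (Illustrative; module contents are assumptions.)"""
    script = dict(base_module)   # copy, so the shared module stays untouched
    script.update(overrides)
    script["skill_tag"] = skill_tag
    return script
```

Generating scripts for every skill of one type then reduces to calling `make_script` in a loop over the type's skill list, with only the overrides differing per skill.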
It should be noted that the execution subjects of the steps of the methods provided in the above embodiments may be the same device, or different devices may be used as the execution subjects of the methods. For example, the execution subjects of steps 101 to 104 may be device a; for another example, the execution subject of steps 101 and 102 may be device a, and the execution subject of steps 103 and 104 may be device B; and so on.
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations are included in a specific order, but it should be clearly understood that the operations may be executed out of the order presented herein or in parallel, and the sequence numbers of the operations, such as 101, 102, 103, 104, etc., are merely used to distinguish various operations, and the sequence numbers themselves do not represent any execution order. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel.
It should be noted that descriptions such as "first" and "second" in this document are used to distinguish different messages, devices, modules, and so on; they do not denote a sequential order, nor do they require that the "first" and "second" items be of different types.
The skill effect presentation apparatus of one or more embodiments of the present application will be described in detail below. Those skilled in the art will appreciate that each of these apparatuses can be constructed from commercially available hardware components configured according to the steps taught in the present scheme.
In still another embodiment of the present application, there is also provided a skill effect demonstration apparatus, as shown in fig. 5, comprising:
the creating module 01 is configured to respond to a preset instruction for a target skill in a game and create a black box space in the game scene, where the black box space lies in an area of the game scene that is invisible to the player;
the selection module 02 is used for selecting a target animation script matched with the preset instruction from animation scripts which are configured for various skills in the game in advance;
a calling module 03, configured to call a game resource in a game scene based on the target animation script, so as to render a target animation containing the target skill in the black box space;
and the display module 04 is configured to display the target animation in an interactive interface of a game so as to display a skill effect corresponding to the target skill.
Optionally, the system further comprises a configuration module, configured to configure a corresponding skill display animation for each skill in the game through the scenario editor; and converting the skill display animation corresponding to each skill into an animation script.
The animation scripts of all skills are used for enabling the virtual objects in the game scene to execute the current skills according to the preset skill logic.
Optionally, the animation script for each skill matching includes any one or a combination of the following: virtual objects for executing skills, the number of interactive objects corresponding to the skills, the skill action range, the skill effect expression, the hit effect expression and the animation duration.
Optionally, the virtual object in the game scene includes: a first virtual character controlled by a player in a game, and/or a second virtual character interacting with the first virtual character.
Optionally, the black box space is located within a preset range centered on the origin in the game scene.
Optionally, when invoking game resources in the game scene based on the target animation script to render the target animation containing the target skill in the black box space, the invoking module 03 is specifically configured to:
obtaining game resources required to be called by each frame of image according to the target animation script, wherein the game resources comprise virtual objects in a game scene; rendering the called game resources into the black box space in sequence to obtain an interaction model comprising the virtual object; and acquiring an image of the virtual object executing the target skill from the interaction model through a virtual camera in the black box space as the target animation.
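The per-frame flow just described — resolve the resources each frame needs from the target script, render them into the off-screen black box to build the interaction model, and capture the result with a virtual camera — can be sketched as follows. All classes here are hypothetical stand-ins for engine objects, not a real engine API.

```python
from dataclasses import dataclass, field

@dataclass
class BlackBoxSpace:
    """Player-invisible region, within a preset range centered on the origin."""
    center: tuple = (0.0, 0.0, 0.0)
    objects: list = field(default_factory=list)

    def render(self, resource):
        # Rendering called resources in sequence builds up the interaction model.
        self.objects.append(resource)

@dataclass
class VirtualCamera:
    def capture(self, space):
        # A real engine would rasterize the space here; we just snapshot
        # its current contents to stand in for one captured image.
        return list(space.objects)

def render_target_animation(target_script, resource_lookup):
    """Render each frame's resources into the black box and capture them."""
    space, camera, frames = BlackBoxSpace(), VirtualCamera(), []
    for frame in target_script["timeline"]:
        resource = resource_lookup[frame["event"]]  # resource this frame needs
        space.render(resource)                      # render into the black box
        frames.append(camera.capture(space))        # captured image -> animation
    return frames

frames = render_target_animation(
    {"timeline": [{"event": "cast_start"}, {"event": "hit"}]},
    {"cast_start": "caster_model", "hit": "hit_vfx"},
)
assert frames[-1] == ["caster_model", "hit_vfx"]
```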
Optionally, when displaying the target animation in the interactive interface of the game to present the target skill effect, the display module 04 is specifically configured to:
intercepting at least one frame of image from the target animation, wherein the at least one frame of image comprises the effect of the virtual object on executing the target skill; and converting the at least one frame of image into at least one preset format map, and rendering the map to the interactive interface.
Optionally, when the preset format map is a Render Texture map, the display module 04, in converting the at least one frame of image into at least one preset format map and rendering it to the interactive interface, is specifically configured to:
converting, in real time, the multiple consecutive frames captured from the target animation into a plurality of Render Texture maps; and, following the animation timeline, sequentially rendering the Render Texture maps as UI pictures to the interactive interface of the game client, so as to form the skill effect animation corresponding to the target skill.
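The capture-to-UI step can be sketched as pairing each captured frame with its timeline timestamp and presenting the textures in time-axis order. The `RenderTexture` class below is a plain placeholder, not the Unity engine type of the same name, and the function is an assumption about how the pairing might work.

```python
class RenderTexture:
    """Placeholder for an engine render-texture object."""
    def __init__(self, pixels):
        self.pixels = pixels

def frames_to_ui_sequence(captured_frames, timeline):
    """Turn captured frames into UI maps ordered by the animation timeline."""
    maps = [RenderTexture(f) for f in captured_frames]
    # Sorting by timestamp guarantees the UI plays the maps back in
    # time-axis order, forming the skill effect animation.
    return sorted(zip(timeline, maps), key=lambda item: item[0])

sequence = frames_to_ui_sequence(["f0", "f1", "f2"], [0.0, 0.8, 2.0])
assert [t for t, _ in sequence] == [0.0, 0.8, 2.0]
```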
Optionally, the display module 04 is further configured to: and responding to control parameters input by a user, and controlling the shooting visual angle of the virtual camera in the black box space.
Optionally, the system further comprises a configuration module, further configured to reconfigure the animation script of any one skill through the scenario editor to update the skill effect of any one skill in the game scene.
In yet another embodiment of the present application, there is also provided an electronic device, including: a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with one another via the communication bus;
a memory for storing a computer program;
and the processor is used for realizing the skill effect display method in the embodiment of the method when executing the program stored in the memory.
The communication bus 1140 mentioned in the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus 1140 may be divided into an address bus, a data bus, a control bus, and the like.
For ease of illustration, only one thick line is shown in FIG. 6, but that does not indicate only one bus or one type of bus.
The communication interface 1120 is used for communication between the electronic device and other devices.
The memory 1130 may include random access memory (RAM) or non-volatile memory, such as at least one disk storage device. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor 1110 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
Accordingly, the present application further provides a computer-readable storage medium storing a computer program, where the computer program is capable of implementing the steps that can be executed by the electronic device in the foregoing method embodiments when executed.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement or the like made within the spirit and principle of the present application shall be included in the scope of the claims of the present application.

Claims (11)

1. A skill effect presentation method, comprising:
responding to a preset instruction aiming at target skills in a game, and creating a black box space in a game scene, wherein the black box space is in a player invisible area in the game scene;
selecting a target animation script matched with the preset instruction from animation scripts configured for various skills in the game in advance;
invoking a game resource in a game scene based on the target animation script to render a target animation containing the target skill in the black box space;
and displaying the target animation in an interactive interface of the game to realize the display of the target skill effect.
2. The method of claim 1, further comprising:
configuring corresponding skill display animations for each skill in the game through a scenario editor;
converting the skill display animation corresponding to each skill into an animation script;
the animation scripts of all skills are used for enabling the virtual objects in the game scene to execute the current skills according to the preset skill logic.
3. The method of claim 2, wherein each skill-matching animation script comprises any one or a combination of the following information:
virtual objects for executing skills, the number of interactive objects corresponding to the skills, the skill action range, the skill effect expression, the hit effect expression and the animation duration.
4. The method of claim 2, wherein the virtual objects in the game scene comprise: a first virtual character controlled by a player in a game, and/or a second virtual character interacting with the first virtual character.
5. The method of claim 1, wherein the black box space is located within a preset range centered at an origin in the game scene.
6. The method of claim 1, wherein invoking a game resource in a game scene based on the target animation script to render a target animation containing the target skill in the black box space comprises:
obtaining game resources required to be called by each frame of image according to the target animation script, wherein the game resources comprise virtual objects in a game scene;
rendering the called game resources into the black box space in sequence to obtain an interaction model comprising the virtual object;
and acquiring an image of the virtual object executing the target skill from the interaction model through a virtual camera in the black box space as the target animation.
7. The method of claim 6, wherein displaying the goal animation in an interactive interface of a game to achieve the presentation of the goal skill effect comprises:
intercepting at least one frame of image from the target animation, wherein the at least one frame of image comprises the effect of the virtual object on executing the target skill;
and converting the at least one frame of image into at least one preset format map, and rendering the map to the interactive interface.
8. The method of claim 7, wherein, if the preset format map is a Render Texture map,
The converting the at least one frame of image into at least one preset format map and rendering the at least one preset format map into the interactive interface comprises:
converting the multi-frame continuous images captured from the target animation in real time to obtain a plurality of Render Texture maps;
and according to an animation time axis, sequentially using a plurality of Render Texture maps as UI pictures and rendering the UI pictures to an interactive interface of the game client so as to form a skill effect animation corresponding to the target skill.
9. The method of claim 6, further comprising:
and responding to control parameters input by a user, and controlling the shooting visual angle of the virtual camera in the black box space.
10. The method of claim 1, further comprising:
reconfiguring the animation script of any one skill through the plot editor to update the skill effect of any one skill in the game scene.
11. A skill effect display apparatus, the apparatus comprising:
the creating module is used for responding to a preset instruction aiming at target skills in the game and creating a black box space in the game scene, wherein the black box space is in a player invisible area in the game scene;
the selection module is used for selecting a target animation script matched with the preset instruction from animation scripts configured for various skills in the game in advance;
the calling module is used for calling game resources in a game scene based on the target animation script so as to render a target animation containing the target skill in the black box space;
and the display module is used for displaying the target animation in an interactive interface of a game so as to display the skill effect corresponding to the target skill.
CN202110956065.3A 2021-08-19 2021-08-19 Skill effect display method and device Pending CN115707499A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110956065.3A CN115707499A (en) 2021-08-19 2021-08-19 Skill effect display method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110956065.3A CN115707499A (en) 2021-08-19 2021-08-19 Skill effect display method and device

Publications (1)

Publication Number Publication Date
CN115707499A true CN115707499A (en) 2023-02-21

Family

ID=85212671

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110956065.3A Pending CN115707499A (en) 2021-08-19 2021-08-19 Skill effect display method and device

Country Status (1)

Country Link
CN (1) CN115707499A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination