WO2023231557A1 - Virtual object interaction method and apparatus, device, storage medium, and program product - Google Patents

Virtual object interaction method and apparatus, device, storage medium, and program product

Info

Publication number: WO2023231557A1
Application number: PCT/CN2023/085788
Authority: WO (WIPO (PCT))
Prior art keywords: virtual object, animation, designated, prop, virtual
Other languages: English (en), French (fr)
Other versions: WO2023231557A9 (zh)
Inventor: 邹卓城
Original assignee: 腾讯科技(深圳)有限公司
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2023231557A1
Publication of WO2023231557A9


Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment, by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 - Controlling the output signals based on the game progress
    • A63F13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 - Controlling the output signals based on the game progress
    • A63F13/53 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 - Controlling game characters or game objects based on the game progress
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 - Special adaptations for executing a specific game genre or game mode
    • A63F13/833 - Hand-to-hand fighting, e.g. martial arts competition
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 - Special adaptations for executing a specific game genre or game mode
    • A63F13/837 - Shooting of targets

Definitions

  • Embodiments of the present application relate to the field of virtual environment technology, and in particular to a virtual object interaction method, device, equipment, storage medium and program product.
  • Fighting games are a relatively popular game genre. A terminal device displays a virtual scene, and a user controls a virtual object to play a virtual game against other virtual objects in the virtual scene in order to win.
  • The virtual environment screen corresponding to the virtual scene displays information about the virtual objects controlled by both players, as well as the attribute values of those virtual objects.
  • The virtual object controlled by the player plays a game against an enemy virtual object controlled by another player.
  • When an attack hits, the attribute value of the enemy virtual object is reduced, indicating that the enemy virtual object was hit by the attack and is affected by the reduction in its attribute value.
  • Embodiments of the present application provide a virtual object interaction method, device, equipment, storage medium and program product, which are used to improve the diversity of interactive display methods and the interactivity between virtual objects.
  • the technical solutions are as follows:
  • a virtual object interaction method includes:
  • control the first virtual object and the second virtual object to perform interactive activities in the virtual scene
  • the conversion and drop animation refers to the animation of the special effect text element being converted into a designated prop and falling into the virtual scene.
  • a virtual object control device includes:
  • a display module used to display the first virtual object and the second virtual object in the virtual scene
  • a receiving module configured to control the first virtual object and the second virtual object to perform interactive activities in the virtual scene in response to the interactive operation;
  • the display module is further configured to display special effect text elements in the virtual scene, where the special effect text elements correspond to the interaction results between the first virtual object and the second virtual object;
  • the display module is also used to display the conversion and drop animation of the special effect text element.
  • the conversion and drop animation refers to the animation in which the special effect text element is converted into a designated prop and dropped into the virtual scene.
  • a terminal device includes a processor and a memory.
  • a computer program is stored in the memory.
  • the computer program is loaded and executed by the processor to implement the above interaction method for virtual objects.
  • a computer-readable storage medium is provided, and a computer program is stored in the computer-readable storage medium.
  • the computer program is loaded and executed by a processor to implement the above interactive method of virtual objects.
  • a computer program product including a computer program stored in a computer-readable storage medium.
  • the processor of the terminal device reads the computer program from the computer-readable storage medium, and the processor executes the computer program, so that the terminal device executes the above-mentioned interaction method of the virtual object.
  • The special effect text element is displayed according to the interaction result, and the special effect text element is converted into a designated prop in the form of a conversion and drop animation, so that the interaction result and the feedback benefit of the interaction result are visualized, which increases the diversity of interaction methods between virtual objects.
  • By converting special effect text elements into designated props and providing them to virtual objects, the efficiency of conveying the information displayed on the interface can be improved.
  • By converting the interaction results into designated props, it helps to stimulate interaction between virtual objects, thereby improving the interactivity between virtual objects. This also helps to shorten the duration of interactive activities (such as game play), thereby reducing the cost of gaming.
  • Figure 1 is a schematic diagram of an interaction method for virtual objects provided by related technologies
  • Figure 2 is a schematic diagram of an interaction method for virtual objects provided by an exemplary embodiment of the present application
  • Figure 3 is a structural block diagram of an electronic device provided by an exemplary embodiment of the present application.
  • Figure 4 is a schematic diagram of the solution implementation environment provided by an exemplary embodiment of the present application.
  • Figure 5 is a flow chart of an interaction method for virtual objects provided by an exemplary embodiment of the present application.
  • Figure 6 is a flow chart of an interaction method for virtual objects provided by another exemplary embodiment of the present application.
  • Figure 7 is a schematic diagram of a method for displaying special effect text element content provided by an exemplary embodiment of the present application.
  • Figure 8 is a schematic diagram of a method for displaying special effect text element content provided by another exemplary embodiment of the present application.
  • Figure 9 is a schematic diagram of a designated prop generation process provided by another exemplary embodiment of the present application.
  • Figure 10 is a schematic diagram of a designated prop generation process provided by another exemplary embodiment of the present application.
  • Figure 11 is a schematic diagram of the designated prop picking process provided by an exemplary embodiment of the present application.
  • Figure 12 is a flow chart of an interaction method for virtual objects provided by another exemplary embodiment of the present application.
  • Figure 13 is a schematic diagram of a gain effect display method provided by an exemplary embodiment of the present application.
  • Figure 14 is a schematic diagram of attribute value-added animation provided by an exemplary embodiment of the present application.
  • Figure 15 is a flow chart of an interaction method for virtual objects provided by another exemplary embodiment of the present application.
  • Figure 16 is a schematic diagram of a first movement animation provided by an exemplary embodiment of the present application.
  • Figure 17 is a schematic diagram of a gain selection interface provided by another exemplary embodiment of the present application.
  • Figure 18 is a schematic diagram of the props integration process provided by an exemplary embodiment of the present application.
  • Figure 19 is a flow chart of an interaction method for virtual objects provided by another exemplary embodiment of the present application.
  • Figure 20 is a flow chart of an interaction method for virtual objects provided by another exemplary embodiment of the present application.
  • Figure 21 is a structural block diagram of a virtual object interaction device provided by an exemplary embodiment of the present application.
  • Figure 22 is a structural block diagram of a virtual object interaction device provided by another exemplary embodiment of the present application.
  • Figure 23 is a structural block diagram of a terminal device provided by an exemplary embodiment of the present application.
  • FIG. 1 is a schematic diagram of an interaction method for virtual objects provided by related technologies.
  • a virtual scene is implemented as a game scene 100 as an example.
  • the game scene 100 includes a first virtual object 110 controlled by a player, and a second virtual object 120 controlled by other players.
  • The first virtual object 110 and the second virtual object 120 engage in a virtual game.
  • When the first virtual object 110 uses an attack skill to continuously hit the second virtual object 120, a combo label 130 corresponding to the consecutive hits of the attack skill is displayed in the game scene 100, where the combo label 130 describes the number of consecutive hits that the attack skill currently used by the first virtual object 110 has scored on the second virtual object 120 (for example, 5 consecutive hits), and is used to display the result of the game between the first virtual object 110 and the second virtual object 120.
  • Figure 2 shows a schematic diagram of a virtual object interaction method provided by an exemplary embodiment of the present application.
  • The virtual scene 200 includes a first virtual object 210 and a second virtual object 220. In response to the player's interactive operation, the process of the first virtual object 210 and the second virtual object 220 performing a specified interactive activity is displayed, where the specified interactive activity can be implemented as the first virtual object 210 and the second virtual object 220 using skills to play a game against each other.
  • The interaction result between the first virtual object 210 and the second virtual object 220 can be realized as follows: when the first virtual object 210 attacks the second virtual object 220 by using a skill and hits the second virtual object 220, the special effect text element 230 is displayed at the designated position corresponding to the second virtual object 220.
  • The current special effect text element 230 is implemented as "Single Press", which is used to indicate that the attack of the first virtual object 210 hits the second virtual object 220 for the first time.
  • The virtual scene 200 can also display the conversion and drop animation of the special effect text element 230.
  • The conversion and drop animation is: the words "Single Press" are converted into the designated prop 240 and fall into the virtual scene 200.
  • The player can control the first virtual object 210 through a picking operation to pick up the designated prop 240 in the virtual scene 200.
  • The above virtual object interaction method displays special effect text elements according to the interaction results during the interaction between the first virtual object and the second virtual object, and converts the special effect text elements into designated props in the form of a conversion and drop animation, so that the interaction results and the feedback benefits of the interaction results are visualized, thereby improving the user interaction experience and increasing the diversity of interaction methods between virtual objects.
  • By converting special effect text elements into designated props and providing them to virtual objects, the efficiency of conveying the information displayed on the interface can be improved.
  • By converting the interaction results into designated props, it helps to stimulate interaction between virtual objects, thereby improving the interactivity between virtual objects. This also helps to shorten the duration of interactive activities (such as game play), thereby reducing the cost of gaming.
  • The technical solutions provided by the embodiments of the present application can be implemented solely by the terminal device, or solely by the server, or jointly by the terminal device and the server, which is not limited in the embodiments of this application.
  • The embodiments of this application take implementation by a terminal device alone as an example.
  • The terminal device runs a target application program that supports a virtual environment.
  • The target application program may be a stand-alone version of the application program, such as a stand-alone 3D game program, or it may be an online application, a network application, etc.
  • The terminal device displays a virtual scene, and the virtual scene includes the first virtual object and the second virtual object. In the process of causing the first virtual object and the second virtual object to perform the specified interactive activity according to the interactive operation, the client of the target application program displays the special effect text element based on the interaction result between the first virtual object and the second virtual object, and displays the conversion and drop animation in which the special effect text element is converted into the specified prop and falls into the virtual scene.
  • The user can control the first virtual object to pick up the specified prop in the virtual scene through a picking operation on the terminal.
  • The terminal device can be a desktop computer, a laptop computer, a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, or other electronic equipment.
  • FIG. 3 shows a structural block diagram of an electronic device provided by an exemplary embodiment of the present application.
  • the electronic device 300 includes: an operating system 320 and an application program 322.
  • Operating system 320 is the basic software that provides applications 322 with secure access to computer hardware.
  • Application 322 is an application that supports virtual environments.
  • the application 322 is an application that supports a three-dimensional virtual environment.
  • The application 322 may be any one of a virtual reality application, a three-dimensional map program, an auto-chess game, a puzzle game, a fighting game, a third-person shooting game (TPS), a first-person shooting game (FPS), a multiplayer online battle arena game (MOBA), or a multiplayer gun-battle survival game.
  • the application program 322 may be a stand-alone version of an application program, such as a stand-alone version of a three-dimensional game program, or may be a network-connected version of an application program, which is not limited in the embodiments of the present application.
  • FIG. 4 shows a schematic diagram of a solution implementation environment of an embodiment of the present application.
  • the implementation environment includes a terminal device 410 , a server 420 and a communication network 430 , where the terminal device 410 and the server 420 are connected through the communication network 430 .
  • The terminal device 410 runs a target application 411 that supports virtual scenes. Taking a fighting game as an example, as shown in Figure 4, when the current target application is implemented as an online version of the application, the terminal device 410 currently displays a virtual scene 4110 corresponding to the target application 411, and the virtual scene 4110 includes a first virtual object 4111 and a second virtual object 4112 that performs a specified interactive activity with the first virtual object 4111. In response to the interactive operation on the first virtual object 4111 and the second virtual object 4112, the terminal device 410 displays the interaction process between the first virtual object 4111 and the second virtual object 4112. The terminal device 410 generates an interaction result trigger instruction based on the interaction result between the first virtual object 4111 and the second virtual object 4112, and sends it to the server 420.
  • After receiving the interaction result trigger instruction from the terminal device 410, the server 420 determines the text content of the special effect text element 4121 corresponding to the interaction result according to the instruction, and feeds back the element rendering data corresponding to the special effect text element 4121 to the terminal device 410, where the element rendering data includes rendering sub-data of the special effect text element 4121 and animation sub-data corresponding to the conversion and drop animation of the special effect text element 4121.
  • After receiving the element rendering data, the terminal device 410 displays the corresponding special effect text element 4121 according to the rendering sub-data of the special effect text element 4121, and displays the conversion and drop animation corresponding to the special effect text element 4121 according to the animation sub-data, where in the conversion and drop animation the special effect text element 4121 is converted into a designated prop and dropped into the virtual scene 4110.
  • In response to the first virtual object 4111's picking operation for the designated prop, the terminal device 410 displays the animation process of the first virtual object 4111 picking up the designated prop 4122.
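  • The application describes this exchange only at the level of an "interaction result trigger instruction" and "element rendering data"; the Python sketch below is a hypothetical illustration of what the two payloads and the server-side mapping might look like. All class, field, and label names (including the combo labels beyond those quoted in the figures) are assumptions, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class InteractionResultTrigger:
    """Hypothetical payload the terminal device (410) sends to the server (420)."""
    match_id: str
    attacker_id: str   # first virtual object, e.g. 4111
    target_id: str     # second virtual object, e.g. 4112
    hit_streak: int    # staged interaction result, e.g. number of hits this round

@dataclass
class ElementRenderData:
    """Hypothetical response carrying the rendering and animation sub-data."""
    text_content: str         # e.g. "Single Press"
    rendering_sub_data: dict  # parameters for drawing the special effect text element
    animation_sub_data: dict  # parameters of the conversion and drop animation
    prop_count: int           # how many designated props the element converts into

def handle_trigger(trigger: InteractionResultTrigger) -> ElementRenderData:
    """Server-side sketch: map the staged interaction result to element rendering data."""
    labels = {1: "Single Press", 2: "Double Press"}  # labels quoted in the figures
    text = labels.get(trigger.hit_streak, f"{trigger.hit_streak}x Press")  # fallback label is an assumption
    return ElementRenderData(
        text_content=text,
        rendering_sub_data={"effect": "stroke", "fill": "gradient"},
        animation_sub_data={"style": "shrink_then_drop", "duration_s": 1.2},
        prop_count=trigger.hit_streak,
    )
```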
  • the server 420 may be used to provide background services for clients of target applications (such as game applications) in the terminal device 410 .
  • the server 420 may be a backend server of the above-mentioned target application (such as a game application).
  • The above-mentioned server 420 can be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, a Content Delivery Network (CDN), and big data and artificial intelligence platforms.
  • the above-mentioned server 420 can also be implemented as a node in the blockchain system.
  • the embodiment of the present application can display a prompt interface, pop-up window or output voice prompt information before collecting the user's relevant data and during the process of collecting the user's relevant data.
  • The prompt interface, pop-up window, or voice prompt is used to remind the user that their relevant data is currently being collected, so that this application only starts to perform the steps of obtaining the user's relevant data after obtaining the user's confirmation operation on the prompt interface or pop-up window; otherwise (that is, when the user's confirmation operation is not obtained), the steps of obtaining user-related data are ended, that is, the user-related data is not obtained.
  • Figure 5 shows a flow chart of an interaction method for virtual objects provided by an embodiment of the present application.
  • The method is explained by taking its application to the terminal device 410 shown in Figure 4 as an example. The method includes the following steps:
  • Step 510 Display the first virtual object and the second virtual object in the virtual scene.
  • the above-mentioned virtual scene refers to the scene displayed (or provided) when the client of the application (such as a game application) is running on the terminal device.
  • the virtual scene refers to the scene created for virtual objects to carry out activities (such as game play).
  • the scene can be a virtual house, virtual island, virtual sky, virtual land, etc.
  • the virtual scene may be a simulation scene of the real world, a semi-simulation and semi-fictional scene, or a purely fictitious scene, which is not limited in the embodiments of the present application.
  • Virtual objects may refer to virtual objects controlled by a user account in an application (such as a game application). Taking a game application as an example, the virtual object may refer to a virtual character controlled by the user account in the game application.
  • the above-mentioned first virtual object may refer to a virtual character controlled by the currently logged-in user account in the client.
  • The second virtual object may be controlled by the client or by another user account, which is not limited in the embodiments of the present application.
  • the client displays a virtual scene in the user interface.
  • the virtual scene includes a first virtual object.
  • the first virtual object can perform virtual activities in the virtual scene.
  • The virtual activities can include walking, running, jumping, climbing, releasing skills, picking up props, throwing props, and other activities.
  • The virtual scene may also include a second virtual object, and there may be a hostile relationship between the second virtual object and the first virtual object, or there may be a teammate relationship between the second virtual object and the first virtual object.
  • the first virtual object or the second virtual object can be implemented as a virtual character, a virtual object, a virtual animal, a virtual building, etc., which is not limited in the embodiments of the present application.
  • Step 520 In response to the interactive operation, control the first virtual object and the second virtual object to perform interactive activities in the virtual scene.
  • Interactive operations may refer to operations that enable interaction between virtual objects, and this operation can be implemented by the user through a terminal device.
  • the interactive operation may refer to the interactive operation of the user of the current terminal device on the first virtual object.
  • After receiving the interactive operation, the client can control the first virtual object and the second virtual object to perform interactive activities in the virtual scene according to the interactive operation.
  • the client obtains interactive operations based on interactive operation instructions triggered by the user.
  • the user can touch the display screen to generate interactive operation instructions for virtual objects.
  • The user can also generate interactive operation instructions for virtual objects by operating a control device (such as a keyboard, a mouse, a game controller, etc.), which is not limited in the embodiments of the present application.
  • The above-mentioned interactive operation instructions may include an interactive operation instruction triggered by the first user for the first virtual object and an interactive operation instruction triggered by the second user for the second virtual object.
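  • As an illustration of how touch, keyboard, mouse, or controller input might be turned into the interactive operation instructions described above, here is a minimal, hypothetical mapping; the bindings and command names are assumptions, not part of the application.

```python
# Hypothetical bindings from raw device input to interactive operation commands;
# the application does not prescribe any concrete key or button mapping.
RAW_INPUT_TO_COMMAND = {
    ("touch", "attack_button"): "release_skill",
    ("keyboard", "J"): "release_skill",
    ("gamepad", "X"): "use_virtual_prop",
}

def to_operation_instruction(device: str, control: str, actor_id: str) -> dict:
    """Translate a raw input event into an interactive operation instruction for a virtual object."""
    command = RAW_INPUT_TO_COMMAND.get((device, control))
    if command is None:
        raise ValueError(f"unbound input: {device}/{control}")
    return {"actor": actor_id, "command": command}
```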
  • interactive activities may refer to activities that require interaction between virtual objects.
  • The interactive activity can be implemented as a virtual game (such as a game match) between the first virtual object and the second virtual object, or as the first virtual object and the second virtual object jointly completing a designated task, etc., which is not limited in the embodiments of the present application.
  • a virtual game may refer to a game in which virtual objects compete.
  • When the interactive activity is implemented as a virtual game between the first virtual object and the second virtual object, the interactive operation may be implemented as the first virtual object releasing a skill at the second virtual object, or using a virtual prop to attack the second virtual object.
  • Alternatively, the interactive operation may be implemented as the first virtual object sending a task invitation to the second virtual object, so that the first virtual object and the second virtual object work together to perform the specified task.
  • the activity content of the interactive activity is preset in advance; or, the user can freely set the specific activity content of the interactive activity, which is not limited in the embodiments of the present application.
  • Step 530 Display a special effect text element in the virtual scene, where the special effect text element corresponds to the interaction result between the first virtual object and the second virtual object.
  • Interaction results refer to the results of the above-mentioned interactive activities.
  • For example, the interaction result may be that the first virtual object hits the second virtual object multiple times in succession.
  • The interaction result can be obtained by the client in real time, that is, the special effect text element corresponding to the interaction result is also updated and displayed in real time.
  • Special effect text elements refer to view elements obtained by applying special effects to text elements, such as text fill effects (such as solid color fill, gradient fill, etc.), stroked text effects (such as text overlay, neon light effects, etc.), fade text effects, dynamic text effects, etc.; that is, the display mode of a special effect text element can be determined based on the special effects applied.
  • The display content refers to the text content of the special effect text element; for example, the text content of the special effect text element can be determined based on the text content of the current interaction result. The display mode refers to the manner in which the element corresponding to the special effect text element is displayed, such as highlighted display or flashing display. The display quantity refers to the number of elements of the special effect text element, such as displaying one special effect text element at a time according to the interaction result. The display position refers to the position of the special effect text element when it is displayed in the virtual scene, such as a designated position corresponding to the first virtual object (for example, above its head) or a designated position corresponding to the second virtual object (for example, above its head). The display duration refers to how long the special effect text element is displayed; for example, a single special effect text element is displayed in the virtual scene for 3 seconds.
  • The interaction result between the first virtual object and the second virtual object may correspond to a single fixed special effect text element; or, the interaction result may correspond to multiple different types of special effect text elements, which is not limited in the embodiments of the present application.
  • The special effect text element can be implemented as a fixed display, that is, the same special effect text element is displayed every time; or, the display of the special effect text element corresponds to the interaction result, that is, different interaction results correspond to different special effect text elements.
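  • The display content, mode, quantity, position, and duration listed above can be thought of as attributes of a single record; the following sketch is only an illustrative data structure under assumed field names and values, not a structure defined by the application.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class SpecialEffectTextElement:
    """Illustrative container for the display attributes listed above;
    field names and example values are assumptions."""
    text_content: str                              # e.g. determined from the staged interaction result
    display_mode: str                              # e.g. "highlight" or "flashing"
    display_quantity: int                          # number of elements shown per interaction result
    display_position: Tuple[float, float, float]   # e.g. above the second virtual object's head
    display_duration_s: float                      # e.g. 3 seconds in the virtual scene

element = SpecialEffectTextElement(
    text_content="Single Press",
    display_mode="highlight",
    display_quantity=1,
    display_position=(12.0, 3.5, 7.0),
    display_duration_s=3.0,
)
```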
  • Step 540 Display the conversion and drop animation of the special effect text element.
  • the conversion and drop animation refers to the animation in which the special effect text element is converted into a designated prop and dropped into the virtual scene.
  • The client generates the conversion and drop animation based on the special effect text element, and displays the conversion and drop animation in the user interface.
  • When the special effect text element changes, the conversion and drop animation of the special effect text element can start to be displayed.
  • The conversion and drop animation can be used to describe the conversion process between the special effect text element and the specified prop, as well as the process of the specified prop dropping into the virtual scene. That is, in the current virtual scene, the generation of the specified prop relies on the special effect text element.
  • The designated props may refer to any virtual props, such as attack virtual props, defense virtual props, energy value acquisition props, skill virtual props, gain virtual props (such as restoring health points), etc., which are not limited in the embodiments of the present application.
  • The conversion and drop animation may include at least one of the following animation display methods:
  • the client starts to display the animation of converting the special effect text element into the specified prop and dropping it into the virtual scene;
  • or, the client only displays the conversion and drop animation of the special effect text element after receiving a conversion trigger operation for the special effect text element.
  • The conversion trigger operation is used to trigger the conversion and drop animation of the special effect text element. For example, after receiving the conversion trigger operation for the special effect text element, the client starts to generate the corresponding conversion and drop animation based on the special effect text element and displays the conversion and drop animation.
  • Converting the special effect text element into the specified prop may mean directly replacing the special effect text element with the specified prop; it may also refer to canceling the display of the special effect text element and displaying the specified prop at a set position, such as at the top of the virtual scene or in the middle of the virtual scene; or the display of the special effect text element may be canceled and an animation of the specified prop entering the virtual scene displayed, for example, the virtual scene cracks open and the specified prop enters the virtual scene through the crack. This is not limited in the embodiments of the present application.
  • The conversion method of the specified props may include at least one of the following forms:
  • the client determines a specified number of designated props based on the special effect text element, that is, different special effect text elements are converted into different numbers of designated props;
  • the client determines a specified type of designated prop based on the special effect text element, that is, different special effect text elements are converted into different types of designated props;
  • the client determines the conversion effect of the designated prop based on the special effect text element, that is, different special effect text elements have different conversion forms; for example, special effect text element A is converted character by character into the corresponding designated props and dropped into the virtual scene as its conversion and drop animation.
  • The process of converting a special effect text element into designated props can be implemented as follows: the special effect text element is converted into a designated number of designated props in sequence, and the designated props are displayed one by one and dropped into the virtual scene in turn, that is, the designated props are converted and displayed one by one; or, the client converts the special effect text element into a preset number of designated props at the same time and causes the preset number of designated props to drop into the virtual scene at the same time, that is, the conversion of the special effect text element into designated props is completed at one time and a preset number of designated props are displayed simultaneously, which is not limited in the embodiments of the present application.
  • the way in which the specified props are dropped into the virtual scene may include at least one of the following ways:
  • the specified props are preset with a fixed drop position. After the specified props are generated, they will fall in the direction of the fixed drop position, and finally fall at the fixed drop position.
  • When there are multiple designated props, the multiple designated props may fall to the same fixed position in the virtual scene, or the multiple designated props may fall to different locations in the virtual scene, which is not limited in the embodiments of this application.
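  • A minimal sketch of the two drop behaviours and the two conversion processes described above (a shared fixed drop position versus scattered positions, and one-by-one versus simultaneous spawning); the positions, scatter radius, and delay value are assumptions for illustration only.

```python
import random
import time
from typing import List, Optional, Tuple

Vec3 = Tuple[float, float, float]

def landing_positions(prop_count: int,
                      fixed_point: Optional[Vec3] = None,
                      scatter_radius: float = 1.5) -> List[Vec3]:
    """Return one landing spot per designated prop: either every prop shares the
    preset fixed drop position, or each prop is scattered to its own spot."""
    if fixed_point is not None:
        return [fixed_point] * prop_count
    return [(random.uniform(-scatter_radius, scatter_radius),
             0.0,  # assumed ground height in the virtual scene
             random.uniform(-scatter_radius, scatter_radius))
            for _ in range(prop_count)]

def spawn_props(prop_count: int, one_by_one: bool, interval_s: float = 0.2) -> None:
    """Spawn the converted props either sequentially (one by one) or all at once,
    mirroring the two conversion processes described above."""
    targets = landing_positions(prop_count)
    for i, pos in enumerate(targets):
        print(f"spawn designated prop {i} falling towards {pos}")
        if one_by_one:
            time.sleep(interval_s)  # stagger the drops; the delay is illustrative
```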
  • the user can control the virtual object to pick up the specified prop.
  • the client controls the first virtual object to pick up the specified prop in the virtual scene.
  • When the first virtual object actively completes the interaction with the second virtual object (for example, the first virtual object hits the second virtual object), the designated prop produces a designated gain effect on the first virtual object, and the first virtual object can obtain the corresponding designated gain effect by picking up the designated prop. This can stimulate interaction between virtual objects, thereby improving the interactivity between virtual objects.
  • the specified gain effect can be set and adjusted according to actual usage requirements, such as health recovery, energy value increase, attack damage increase, etc., which are not limited in the embodiments of the present application.
  • the designated props can be implemented as usable props. After the first virtual object picks up the designated props, the designated props can be used to interact with the second virtual object.
  • the specified props can also be implemented as special effect props. After the first virtual object picks up the specified props, the client displays the special effects corresponding to the specified props.
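  • As a hedged illustration of how picking up a designated prop might apply the gain effects mentioned above (health recovery, energy value increase), keep a usable prop for later interaction with the second virtual object, or trigger a special effect, here is a small sketch; the prop kinds, attribute fields, and the play_special_effect stub are assumptions, not part of the application.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DesignatedProp:
    kind: str          # "gain", "energy", "usable", or "special_effect" (assumed labels)
    value: float = 0.0
    effect_id: str = ""

@dataclass
class VirtualObject:
    health: float
    max_health: float
    energy: float = 0.0
    inventory: List[DesignatedProp] = field(default_factory=list)

def play_special_effect(effect_id: str) -> None:
    # Stand-in for the client displaying the special effect corresponding to the prop.
    print(f"playing special effect {effect_id}")

def on_pickup(first_object: VirtualObject, prop: DesignatedProp) -> None:
    """Apply the designated prop when the first virtual object picks it up."""
    if prop.kind == "gain":            # e.g. health recovery
        first_object.health = min(first_object.max_health, first_object.health + prop.value)
    elif prop.kind == "energy":        # e.g. energy value increase
        first_object.energy += prop.value
    elif prop.kind == "usable":        # kept to interact with the second virtual object later
        first_object.inventory.append(prop)
    elif prop.kind == "special_effect":
        play_special_effect(prop.effect_id)
```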
  • The above virtual object interaction method displays special effect text elements according to the interaction results during the interaction between the first virtual object and the second virtual object, and converts the special effect text elements into designated props in the form of a conversion and drop animation, allowing the interaction results and the feedback benefits of the interaction results to be visualized, thus improving the user interaction experience and increasing the diversity of interaction methods between virtual objects.
  • By converting special effect text elements into designated props and providing them to virtual objects, the efficiency of conveying the information displayed on the interface can be improved.
  • By converting the interaction results into designated props, it helps to stimulate interaction between virtual objects, thereby improving the interactivity between virtual objects. This also helps to shorten the duration of interactive activities (such as game play), thereby reducing the cost of gaming.
  • When the interaction result of the interactive activity corresponds to a variety of different special effect text elements, the interactive activity may include multiple activity stages, and each activity stage corresponds to a staged interaction result. A single staged interaction result corresponds to a single special effect text element.
  • Figure 6 shows a flow chart of an interaction method for virtual objects provided by another exemplary embodiment of the present application. That is, step 540 in the above embodiment further includes step 541, and step 530 further includes step 531. As shown in Figure 6, the method includes the following steps:
  • Step 510 Display the first virtual object and the second virtual object in the virtual scene.
  • the first virtual object is a virtual object controlled by the current terminal device.
  • The first virtual object and the second virtual object may belong to the same type of virtual object, for example, the first virtual object and the second virtual object are both virtual characters; or the first virtual object and the second virtual object may belong to different types of virtual objects, for example, the first virtual object is implemented as a virtual character and the second virtual object is implemented as a virtual beast or virtual item, which is not limited in the embodiments of the present application.
  • Step 520 In response to the interactive operation, control the first virtual object and the second virtual object to perform interactive activities in the virtual scene.
  • the operation mode of the interactive operation may include at least one of the following operation modes:
  • the interactive operation is implemented by controlling the first virtual object to perform activities in the virtual scene through the current terminal device:
  • For example, when the interactive activity is implemented as a virtual game between the first virtual object and the second virtual object, the interactive operation may be implemented as the client, in response to an attack trigger operation for the first virtual object, controlling the first virtual object to attack the second virtual object by fighting or by releasing skills; or, when the interactive activity is implemented as the first virtual object and the second virtual object jointly completing a designated task, the interactive operation can be implemented as instructing the first virtual object to send a task invitation to the second virtual object.
  • An interactive activity list is displayed in the user interface.
  • the interactive operation is implemented by selecting a specified interactive activity in the interactive activity list, and the client displays the animation of the first virtual object and the second virtual object performing the specified interactive activity.
  • Step 531 Display a special effect text element corresponding to the staged interaction result at a designated position corresponding to the second virtual object in the virtual scene.
  • the text content of the special effect text element corresponding to the staged interaction result corresponds to the staged interaction result.
  • the staged interaction result refers to the interaction result under the activity stage, such as the interaction result under the current activity stage.
  • a special effect text element corresponding to the staged interaction result can also be displayed at a designated position corresponding to the first virtual object in the virtual scene. This can make the user's eyes always focus on the virtual object, making it easier to increase the user's concentration on interactive activities, thus improving the user's interactive experience.
  • The interactive activity includes multiple activity stages, and the interaction result of each activity stage is regarded as a staged interaction result. That is, the staged interaction result is used to represent the interaction result corresponding to the current activity stage between the first virtual object and the second virtual object during the interactive activity. For example, when the interactive activity is implemented as a virtual game between the first virtual object and the second virtual object in the current round, each process in which the first virtual object attacks the second virtual object corresponds to an activity stage, so a single hit on the second virtual object in the current round is a staged interaction result.
  • different staged interaction results correspond to the text content of different special effect text elements.
  • the display method of the special effect text element corresponding to the m+1th staged interaction result includes at least one of overlay display, replacement display and other display methods.
  • In the case of overlay display, the special effect text element corresponding to the mth staged interaction result is not canceled, and the two special effect text elements are displayed at different positions, where m is a positive integer.
  • For example, the interactive activity is implemented as a virtual game.
  • The client receives the interactive operation and, according to the interactive operation, controls the first virtual object to release a skill at the second virtual object to attack it.
  • When the skill hits, the client displays the special effect text element corresponding to the hit result.
  • Figure 7 shows a schematic diagram of a method for displaying special effects text element content provided by an exemplary embodiment of the present application.
  • the user interface displays a virtual scene 700.
  • The special effect text element 730 "Single Press" is displayed above the first virtual object 710, which means that the first skill hit of the first virtual object 710 is achieved in the current round.
  • When the skill is released at the second virtual object 720 again and the skill also hits the second virtual object 720, that is, in the current round the first virtual object 710 hits the second virtual object 720 twice in succession by releasing skills, the current special effect text element 730 "Single Press" is replaced and displayed as the special effect text element 740 "Double Press".
  • the text content of the special effect text elements corresponding to different staged interaction results is different, which can enrich the interactive display mode and give the user a sense of accomplishment of a successful attack, thereby improving the user interaction experience.
  • After the first virtual object 710 hits the second virtual object 720 for the first time, it releases skills at the second virtual object 720 twice more. If, of the two skills released, the second skill hits the second virtual object 720 again, that is, in the current round the first virtual object 710 hits the second virtual object 720 twice with skills (but not with consecutive hits), then when the second skill is released and hits the second virtual object 720, the current special effect text element 730 "Single Press" is replaced and displayed as the special effect text element 740 "Double Press".
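  • The "Single Press"/"Double Press" progression in Figure 7 can be summarised as counting hits on the second virtual object within the current round, whether or not the hits are consecutive. The tracker below is only a sketch of that rule; labels beyond the two quoted in the figures are assumptions.

```python
from typing import Optional

class ComboTracker:
    """Count skill hits on the second virtual object in the current round and return
    the label to display; a miss shows nothing but does not reset the count, matching
    the non-consecutive-hit case described above."""
    LABELS = {1: "Single Press", 2: "Double Press"}  # labels quoted in Figure 7

    def __init__(self) -> None:
        self.hits_this_round = 0

    def register_skill(self, hit: bool) -> Optional[str]:
        if not hit:
            return None
        self.hits_this_round += 1
        # Labels beyond "Double Press" are assumptions, not taken from the application.
        return self.LABELS.get(self.hits_this_round, f"{self.hits_this_round}x Press")
```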
  • the designated task includes multiple phased tasks, and the first virtual object and the second virtual object are implemented as teammates.
  • In the process of the first virtual object and the second virtual object jointly completing the designated task, after the first phased task is completed, the client displays the special effect text element corresponding to the first phased task; after the second phased task is completed, the client displays the special effect text element corresponding to the second phased task. That is, the special effect text elements are used to indicate the completion of the current phased task.
  • the designated task is for the first virtual object and the second virtual object to jointly defeat multiple different types of virtual monsters.
  • Figure 8 shows a schematic diagram of a method for displaying special effect text element content provided by an exemplary embodiment of the present application.
  • The current virtual scene 800 includes a first virtual object 810 and a second virtual object 820. The designated task is implemented as the first virtual object 810 and the second virtual object 820 jointly attacking a first object 830 and a second object 840.
  • When either the first virtual object 810 or the second virtual object 820 defeats the first object 830, a special effect text element 850 "Monster 1 is successfully defeated!" is displayed above the first virtual object 810 or the second virtual object 820, to indicate that the phased task of defeating the first object 830 by the first virtual object 810 and the second virtual object 820 has been completed.
  • Similarly, when the second object 840 is defeated, a special effect text element 860 "Monster 2 is successfully defeated!" is displayed above the first virtual object 810 or the second virtual object 820.
  • the text content of the special effect text element corresponds to the defeated object.
  • the text content of the special effect text element is constructed based on the name of the defeated object.
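  • Since the text content is constructed from the defeated object's name, a one-line sketch of that rule could look as follows; the wording mirrors the translated example in Figure 8 and is otherwise an assumption.

```python
def defeat_label(defeated_object_name: str) -> str:
    """Build the special effect text from the defeated object's name,
    mirroring the translated wording shown in Figure 8."""
    return f"{defeated_object_name} is successfully defeated!"

print(defeat_label("Monster 1"))  # -> "Monster 1 is successfully defeated!"
```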
  • Step 541 Display the conversion and falling animation of the special effect text element based on the specified number of specified props.
  • the conversion and drop animation refers to the animation in which special effects text elements are converted into a specified number of specified props and dropped into the virtual scene.
  • The specified number corresponds to the text content of the special effect text element.
  • The specified number refers to the quantity of the specified props into which the special effect text element is converted.
  • The specified quantity of the specified props corresponds to the text content of the special effect text element, and the text content of the special effect text element corresponds to the staged interaction result; that is, the specified quantity corresponds to the staged interaction result that has been obtained.
  • different staged interaction results correspond to special effect text elements with different text contents, so the specified number of designated props converted by each special effect text element is also different. This can enrich the display methods of transition and drop animations, stimulate users' interest in obtaining different transition and drop animations, and thus help improve user stickiness.
  • Figure 9 shows a schematic diagram of the specified prop generation process provided by an exemplary embodiment of the present application.
  • the user interface displays a virtual scene 900.
  • While the first virtual object 910 is attacking the second virtual object 920, when the second virtual object 920 is hit for the first time, the client displays the special effect text element 930 "Single Press", where the first hit result is regarded as the staged interaction result of the current round.
  • The first hit result is realized by converting the special effect text element 930 "Single Press" into a designated prop 940, that is, the client displays the conversion and drop animation in which the special effect text element 930 "Single Press" is converted into the designated prop 940 and falls into the virtual scene.
  • Figure 10 shows a schematic diagram of the specified prop generation process provided by another exemplary embodiment of the present application.
  • the user interface displays a virtual scene 1000.
  • While the first virtual object 1010 is attacking the second virtual object 1020, when the second virtual object 1020 is hit twice in succession (the first hit process is not shown in Figure 10; please refer to Figure 9 for the first hit process),
  • the special effect text element 1030 "Double Press" is displayed on the terminal. The result of two consecutive hits is regarded as the staged interaction result of the current round, and the second hit result is realized as converting the special effect text element 1030 "Double Press" into two designated props.
  • As shown in Figure 10, the client displays the conversion and drop animation in which the special effect text element 1030 "Double Press" is converted into two designated props 1040, which then fall into the virtual scene.
  • Figure 10 also includes a designated prop 1050 that was converted and dropped from the special effect text element "Single Press" displayed after the first virtual object 1010 hit the second virtual object 1020 for the first time.
  • the conversion and falling animation corresponding to the special effect text element is displayed, and a specified number of specified props are dropped in the virtual environment.
  • When the client displays the special effect text element according to the kth staged interaction result, the designated prop corresponding to the (k-1)th staged interaction result remains displayed or its display is canceled; or, when the client displays the special effect text element according to the kth staged interaction result, the special effect text element corresponding to the (k-1)th staged interaction result is converted into a designated prop. This is not limited in the embodiments of the present application, where k is a positive integer.
  • The conversion and drop animation of the special effect text element may refer to an animation in which a two-dimensional special effect text element is converted into a two-dimensional specified prop and then dropped; or, it may refer to an animation in which a three-dimensional special effect text element is converted into a three-dimensional designated prop and then dropped.
  • When the conversion and drop animation is implemented as converting a two-dimensional special effect text element into the specified prop, the process may include the following: the client displays a shrinking and disappearing animation of the special effect text element, where the shrinking and disappearing animation refers to an animation in which the display of the special effect text element is canceled after it shrinks at the designated position corresponding to the second virtual object; the client obtains a first coordinate of the designated position in the world coordinate system of the virtual scene as the starting coordinate of the specified prop; obtains a second coordinate corresponding to the first coordinate in the world coordinate system as the landing coordinate of the specified prop; obtains the falling path data of the specified prop based on the first coordinate and the second coordinate; and displays the conversion and drop animation of the specified prop falling based on the falling path data.
  • The shrinking and disappearing animation of the special effect text element is displayed first, and then the conversion and drop animation is displayed.
  • When the special effect text element starts to shrink, the shrinking and disappearing animation begins.
  • the second coordinate is different from the first coordinate.
  • the second coordinate may refer to a certain coordinate located on the virtual ground in the virtual scene, such as a coordinate close to the first virtual object or the second virtual object.
  • the first coordinate can be implemented as a two-dimensional coordinate; or, the first coordinate can be implemented as a three-dimensional coordinate, which is not limited in the embodiments of the present application.
  • the first coordinate can be used to implement the starting coordinate for the specified prop to fall, that is, at the first coordinate, the special effect text element is converted into the specified prop and starts to fall, and the second coordinate is implemented to be the end position for the specified prop to fall.
  • the client determines it as the final landing position of the specified prop in the virtual scene.
  • the client obtains the drop path data corresponding to the specified prop according to the first coordinate and the second coordinate, so as to describe the drop path of the specified prop falling into the virtual scene.
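  • The application does not specify how the falling path data is computed from the first and second coordinates; purely as an illustration, the sketch below samples a simple parabolic arc between the starting coordinate and the landing coordinate, with the arc height and sample count being assumed parameters.

```python
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

def falling_path(first_coord: Vec3, second_coord: Vec3,
                 arc_height: float = 2.0, samples: int = 24) -> List[Vec3]:
    """Sample a parabolic trajectory from the starting coordinate (where the
    special effect text element turns into the prop) to the landing coordinate."""
    path = []
    for i in range(samples + 1):
        t = i / samples                                    # 0.0 at the start, 1.0 at landing
        x = first_coord[0] + (second_coord[0] - first_coord[0]) * t
        y = first_coord[1] + (second_coord[1] - first_coord[1]) * t \
            + arc_height * 4.0 * t * (1.0 - t)             # parabolic bulge, zero at both ends
        z = first_coord[2] + (second_coord[2] - first_coord[2]) * t
        path.append((x, y, z))
    return path

# Example: the prop starts above the second virtual object's head and lands on the ground nearby.
drop_path_data = falling_path(first_coord=(12.0, 3.5, 7.0), second_coord=(11.0, 0.0, 6.0))
```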
  • the client can also determine the texture material set corresponding to the specified prop.
  • the texture material set includes a variety of texture materials corresponding to the specified prop.
  • The texture material is used to describe the material images obtained by shooting the specified prop with a camera from different angles.
  • The client can obtain the texture material image corresponding to the observation angle from the texture material set according to the observation angle corresponding to the specified prop in the drop path data, where the observation angle refers to the first-person perspective or the third-person perspective corresponding to the current terminal device. Different texture material images of the specified prop can be obtained according to different observation angles; for example, if the observation angle is 45 degrees northwest, then the texture material image of the specified prop corresponding to the 45-degree northwest angle is obtained from the texture material set.
  • The client displays the texture material image of the specified prop along the falling trajectory corresponding to the falling path data. That is, based on the observation angle of the specified prop, the client obtains the texture material image corresponding to the observation angle from the texture material set and displays the texture material image along the drop trajectory corresponding to the drop path data as the conversion and drop animation of the specified prop falling.
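  • One way to realise the texture-material selection described above is to pre-render the designated prop at a fixed set of angles and pick the image whose angle is closest to the current observation angle; the sketch below assumes 45-degree angle buckets and hypothetical file names, neither of which comes from the application.

```python
import math
from typing import Dict, Tuple

# Hypothetical texture material set: one pre-rendered image of the designated prop
# per observation-angle bucket (every 45 degrees around the prop).
TEXTURE_MATERIAL_SET: Dict[int, str] = {angle: f"prop_view_{angle:03d}.png"
                                        for angle in range(0, 360, 45)}

def texture_for_observation(camera_pos: Tuple[float, float], prop_pos: Tuple[float, float]) -> str:
    """Pick the texture material image whose pre-rendered angle is closest to the
    current observation angle between the camera and the falling prop."""
    dx = prop_pos[0] - camera_pos[0]
    dz = prop_pos[1] - camera_pos[1]
    observation_angle = math.degrees(math.atan2(dz, dx)) % 360
    nearest_bucket = min(TEXTURE_MATERIAL_SET,
                         key=lambda a: min(abs(a - observation_angle), 360 - abs(a - observation_angle)))
    return TEXTURE_MATERIAL_SET[nearest_bucket]
```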
  • the user can control the virtual object to pick up the specified prop.
  • the client controls the first virtual object to pick up the specified prop in the virtual scene.
  • the operation mode of the picking operation may include at least one of the following modes:
  • the picking operation can be implemented by controlling the first virtual object to pick up at least one designated prop in the virtual scene;
•   the picking operation can be implemented as a triggering operation on a designated prop dropped in the virtual scene; the client displays the first virtual object automatically picking up the triggered designated prop and treats the triggering operation as the picking operation.
  • Figure 11 shows a schematic diagram of the designated prop picking process provided by an exemplary embodiment of the present application.
•   the user interface displays a virtual scene 1100, and the virtual scene 1100 includes designated props 1110 that were converted and dropped.
  • the client controls the first virtual object 1120 to pick up one of the designated props 1110.
•   the virtual object interaction method displays special effect text elements according to the interaction results during the interaction between the first virtual object and the second virtual object, and converts the special effect text elements into designated props in the form of a conversion and drop animation, allowing the interaction results and the feedback benefits of the interaction results to be visualized, thus improving the user interaction experience and increasing the diversity of interaction methods between virtual objects.
•   by converting special effect text elements into designated props and providing them to virtual objects, the efficiency of transmitting information displayed on the interface can be improved.
•   by converting the interaction results into designated props, interaction between virtual objects is stimulated, thereby improving the interactivity between virtual objects. This also helps to shorten the time of interactive activities (such as game play), thereby reducing the cost of gaming.
•   by converting the special effect text element into a specified number of designated props, where the specified number corresponds to the text content of the special effect text element, the user can perceive the number of designated props after obtaining the interaction result, which helps improve the user's sense of accomplishment and experience.
•   the corresponding special effect text elements are displayed according to the staged interaction results of each activity stage, and the text content of the special effect text elements corresponds to the staged interaction results, which increases the users' enthusiasm for participating in interactive activities.
•   an interaction method in which the growth of the specified number is related to the staged interaction results is realized, enriching the diversity of interaction methods between virtual objects.
•   the client displays the gain animation corresponding to the first virtual object; that is, the designated prop produces a designated gain effect on the first virtual object.
  • Figure 12 shows a flow chart of a virtual object interaction method provided by another exemplary embodiment of the present application. As shown in Figure 12, the method may include the following steps:
  • Step 1210 Display the first virtual object and the second virtual object in the virtual scene.
  • Step 1220 in response to the interactive operation, control the first virtual object and the second virtual object to perform interactive activities in the virtual scene.
•   the introduction of step 1210 is the same as that of step 510, and the introduction of step 1220 is the same as that of step 520.
  • Step 1230 Display a special effect text element in the virtual scene, where the special effect text element corresponds to the interaction result between the first virtual object and the second virtual object.
  • the interactive activity includes multiple activity stages, the i-th activity stage corresponds to the i-th special effect text element, and i is a positive integer.
  • the interactive activity includes multiple different activity stages, and there is a progressive relationship between each activity stage.
•   the multiple activity stages correspond to the number of times the second virtual object is hit while the first virtual object attacks the second virtual object. For example: in the current round, when the second virtual object is hit for the first time, the interaction process from the start of the virtual game to the first hit is regarded as the first activity stage. When the second virtual object is hit for the second time, the interaction process between the first hit and the second hit is regarded as the second activity stage. Therefore, there is a progressive relationship between the second activity stage and the first activity stage in terms of the number of hits.
  • each activity stage corresponds to a staged interaction result.
  • the first staged interaction result corresponding to the above-mentioned first activity stage is "the second virtual object is hit for the first time.”
•   the second staged interaction result corresponding to the second activity stage is "the second virtual object is hit for the second time".
•   the client can display special effect text elements according to the staged interaction results, where the text content of the special effect text elements corresponds to the staged interaction results. For example: the special effect text element displayed corresponding to the above-mentioned first staged interaction result is implemented as "single press" (that is, one hit in a row), and the special effect text element displayed corresponding to the above-mentioned second staged interaction result is implemented as "double press" (that is, hit twice in a row). The text content of the special effect text element is used to describe the corresponding staged interaction result.
•   Step 1240 In response to the display duration of the special effect text element reaching a specified duration threshold, display the conversion and drop animation of the special effect text element.
  • the display duration of a special effect text element refers to the length of time the special effect text element is displayed in the virtual environment.
  • the specified duration threshold can be a preset fixed value; or the user can freely adjust the specified duration threshold, which is not limited in the embodiments of the present application.
•   the display method of the conversion and drop animation includes at least one of the following methods:
•   Interactive activities include multiple activity stages, each activity stage corresponds to a single special effect text element, and each special effect text element is displayed independently. For example: during the display process of the first special effect text element corresponding to the first activity stage (the specified duration threshold has not been reached), in response to the end of the second activity stage, the client displays the second special effect text element based on the staged interaction result corresponding to the second activity stage. At this time, the first special effect text element and the second special effect text element are displayed independently in the virtual environment, and the display of the second special effect text element does not affect the display duration of the first special effect text element. When the display duration of the first special effect text element reaches its corresponding specified duration threshold, the client displays the conversion and drop animation of the first special effect text element; the display method of the conversion and drop animation corresponding to the second special effect text element is the same as above. Therefore, designated props converted from the first special effect text element and from the second special effect text element may both exist in the current virtual environment.
•   the interactive activity includes multiple activity stages, each activity stage corresponds to a single special effect text element, and the special effect text elements are displayed by replacement. For example: during the display process of the first special effect text element corresponding to the first activity stage (the specified duration threshold has not been reached), in response to the second activity stage generating a staged activity result, the client replaces the first special effect text element with the second special effect text element corresponding to the second activity stage and cancels the display of the first special effect text element.
•   when the display duration of the second special effect text element reaches the specified duration threshold, the client displays the conversion and drop animation of the second special effect text element. Therefore, only the designated props converted and dropped from the second special effect text element exist in the virtual environment. That is, in response to the display duration of the i-th special effect text element reaching the specified duration threshold without the staged interaction result of the (i+1)-th activity stage being received within the specified duration threshold, the client displays the conversion and drop animation of the i-th special effect text element. According to the progressive relationship between activity stages, the corresponding conversion and drop animations are updated and displayed sequentially, which helps motivate users to trigger different activity stages, thereby increasing user stickiness.
•   the above specified duration threshold applies to a single special effect text element; that is, when the first special effect text element is replaced by the second special effect text element, the display duration of the second special effect text element is counted again from zero. Alternatively, the specified duration threshold applies to the entire interactive activity and is counted starting from the first special effect text element. If the second special effect text element replaces the first special effect text element, and no third special effect text element replaces the second special effect text element, then the specified duration threshold refers to the sum of the display durations of the first special effect text element and the second special effect text element. This is not limited in the embodiments of the present application.
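•   A minimal sketch of the replacement-display branch above: a newly arrived staged interaction result replaces the current special effect text element and restarts its timer, and the element is converted into designated props only when its display duration reaches the specified duration threshold without a newer result. The class name, threshold value, and timer API are assumptions for illustration.

```python
import time

class SpecialEffectText:
    def __init__(self, threshold_s: float = 3.0):
        self.threshold_s = threshold_s   # specified duration threshold (assumed value)
        self.current: str | None = None
        self.shown_at: float = 0.0

    def on_staged_result(self, text: str) -> None:
        # A new activity stage replaces the currently displayed element
        # and restarts the per-element duration counter.
        self.current = text
        self.shown_at = time.monotonic()

    def tick(self) -> str | None:
        # Returns the element whose conversion and drop animation should play, if any.
        if self.current and time.monotonic() - self.shown_at >= self.threshold_s:
            expired, self.current = self.current, None
            return expired
        return None
```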
•   the designated props converted from the special effect text elements corresponding to each activity stage belong to the same type of props; or, the designated props converted from the special effect text elements corresponding to each activity stage belong to different types of props. This is not limited in the embodiments of the present application.
  • Step 1250 in response to the picking operation, control the first virtual object to pick up the specified prop in the virtual scene.
  • step 1250 is the same as that introduced in the above embodiment, and will not be described again here.
  • Step 1260 Display the gain animation corresponding to the first virtual object.
  • the gain animation refers to the animation in which the first virtual object picks up the designated prop and generates the designated gain effect corresponding to the designated prop.
  • the designated gain effect is related to the number of designated props picked up by the first virtual object; or, the designated gain effect is related to the type of designated props picked up by the first virtual object.
  • the client when the client starts to display the specified gain effect for the first virtual object, it starts to display the corresponding gain animation.
•   the designated gain effect can be used to increase an attribute of the first virtual object, where the attribute includes at least one of the first virtual object's health value, energy value, mana value, defense value, attack ability, character level, etc.
•   after the first virtual object picks up the designated prop, it can use the designated prop.
  • the specified prop is converted into an interactive prop for the first virtual object to use.
•   the designated gain effect is preset with an effect duration threshold. When the designated gain effect reaches the effect duration threshold, the designated gain effect disappears; or, the designated gain effect is implemented as a continuous gain effect, that is, the designated gain effect does not automatically disappear. This is not limited in the embodiments of this application.
  • the picking operation is used to cause the first virtual object to pick up one designated prop at a time; or, the picking operation is used to cause the first virtual object to pick up multiple designated props at a time.
•   the designated prop has a preset display duration threshold in the virtual scene. When the display duration of the designated prop in the virtual scene reaches the display duration threshold, the designated prop is automatically cancelled, making it impossible for the first virtual object to pick it up; or, the designated gain effect of the designated prop has a preset effect threshold in the virtual scene. When the display duration of the designated prop in the virtual scene reaches the effect threshold, the designated prop is not cancelled, but the designated prop no longer has the designated gain effect, or the effect type of the gain effect changes. This is not limited in the embodiments of this application.
•   a single designated prop corresponds to a single designated gain effect, that is, after the first virtual object picks up the designated prop, a corresponding designated gain effect is generated; or, a single designated prop corresponds to multiple candidate gain effects, and after the first virtual object picks up the designated prop, at least one gain effect is selected from the multiple candidate gain effects; or, after the first virtual object continuously picks up at least two designated props, the two designated props produce a combined gain effect on the first virtual object. That is, each of the two designated props has its own designated gain effect on the first virtual object, but after both designated props are picked up, a combined gain effect is generated, and the combined gain effect is different from the designated gain effects corresponding to the two designated props. The embodiments of this application are not limited to this.
  • the gain animation is related to the designated gain effect corresponding to the designated prop picked up by the first virtual object.
•   after the first virtual object picks up a single designated prop, the client displays the gain animation corresponding to the picked-up designated prop; or, after the first virtual object continuously picks up multiple designated props, the client displays the gain animations corresponding to the multiple designated props respectively. This is not limited in the embodiments of the present application.
  • the expression form in which the specified gain effect is related to the number of specified props includes at least one of the following forms:
  • the virtual scene includes multiple designated props, and the multiple designated props correspond to the same type of gain effect, then the more designated props picked up by the first virtual object, the greater the designated gain effect produced by the picked up designated props.
•   for example, the virtual scene includes prop a (the gain effect is health value +10), prop b (the gain effect is health value +5) and prop c (the gain effect is health value +15). If the first virtual object picks up prop a and prop b, the designated gain effect is health value +15; if the first virtual object picks up prop a, prop b and prop c, the designated gain effect is health value +30;
•   the gain effect of the designated gain effect corresponds to the number of designated props. That is, if the number of designated props picked up by the first virtual object reaches a quantity threshold, a gain effect corresponding to that quantity threshold is generated for the first virtual object. For example: by default, picking up 2 designated props increases the mana value by 5 points, and picking up 15 designated props increases the mana value by 15 points. When the virtual scene includes 20 designated props and the first virtual object has picked up 3 designated props, the first virtual object has a designated gain effect that increases the mana value by 5 points; when the number of designated props picked up by the first virtual object reaches 15, the first virtual object has a designated gain effect that increases the mana value by 15 points (an additional 10 mana points added on top of the 5 mana points);
•   the generation time of the designated gain effect is related to the number of designated props picked up. According to different numbers of designated props, the corresponding gain effect generation times are preset; that is, the more designated props the first virtual object continuously picks up, the faster the corresponding designated gain effect is generated. For example: when the first virtual object continuously picks up 3 designated props, the health value is increased by 30 points within 0.5 seconds; when the first virtual object continuously picks up 5 designated props, the health value is increased by 30 points within 0.2 seconds.
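•   A minimal sketch of the quantity-related forms above: a per-prop sum and a threshold-based gain. The values mirror the worked examples in the text; the function names are assumptions.

```python
def health_gain(picked_props: list[int]) -> int:
    """Sum the health gains of the individual designated props picked up."""
    return sum(picked_props)

def mana_gain(picked_count: int) -> int:
    """Threshold form: 2 props -> +5 mana, 15 props -> +15 mana in total."""
    if picked_count >= 15:
        return 15
    if picked_count >= 2:
        return 5
    return 0

print(health_gain([10, 5, 15]))     # props a, b and c from the example -> 30
print(mana_gain(3), mana_gain(15))  # 5, then 15 once the higher threshold is met
```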
  • the expression form of the specified gain effect related to the type of the specified prop includes at least one of the following forms:
•   if the virtual scene includes multiple designated props, and each of the multiple designated props corresponds to a different type of gain effect, the first virtual object generates different types of designated gain effects by picking up different types of designated props. For example: the virtual scene includes prop A (the gain effect is magic value +10) and prop B (the gain effect is defense value +5). After the first virtual object picks up prop A, the magic value can be increased by 10 points; or, after the first virtual object picks up prop B, the defense value can be increased by 5 points;
•   the virtual scene includes multiple designated props, and each of the multiple designated props corresponds to a different type of gain effect. However, by default, there is a composite relationship between at least two designated props; that is, after the first virtual object continuously picks up the at least two designated props, the combined gain effect corresponding to the at least two designated props is generated.
•   for example, the virtual scene includes prop 1 (the gain effect is force value +10), prop 2 (the gain effect is defense value +10) and prop 3 (the gain effect is health value +10). When the first virtual object continuously picks up prop 1, prop 2 and prop 3, the designated gain effect is that the character level of the first virtual object increases by 1 level; but if the first virtual object only picks up prop 1, the designated gain effect is only force value +10.
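•   A minimal sketch of the type-related forms above: each prop type carries its own gain effect, and a preset combination of prop types yields a combined gain effect instead. The rule table and names are illustrative assumptions.

```python
INDIVIDUAL_EFFECTS = {"prop1": ("force", 10), "prop2": ("defense", 10), "prop3": ("health", 10)}
COMBINATIONS = {frozenset({"prop1", "prop2", "prop3"}): ("character_level", 1)}

def resolve_gain(picked: set[str]) -> list[tuple[str, int]]:
    # A matched combination replaces the individual designated gain effects.
    for combo, effect in COMBINATIONS.items():
        if combo <= picked:
            return [effect]
    return [INDIVIDUAL_EFFECTS[p] for p in picked if p in INDIVIDUAL_EFFECTS]

print(resolve_gain({"prop1"}))                    # [('force', 10)]
print(resolve_gain({"prop1", "prop2", "prop3"}))  # [('character_level', 1)]
```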
  • the gain animation is implemented by highlighting the specified gain effect in the peripheral range of the first virtual object.
  • the designated gain effect corresponding to the designated prop is implemented by increasing the attribute value of the first virtual object (such as at least one of health value, force value, defense value, etc.).
  • the specified gain effect is highlighted in the peripheral range of the first virtual object to express the specified gain effect produced by the current first virtual object by picking up the specified prop.
  • This process is used as a gain animation.
  • Figure 13 shows a schematic diagram of a gain effect display method provided by an exemplary embodiment of the present application. As shown in Figure 13(a), the user interface currently displays a virtual scene 1300.
•   the first virtual object 1310 picks up the designated prop 1320 in the virtual scene 1300, and the designated gain effect corresponding to the designated prop 1320 is to resist all attacks within 5 seconds.
  • the client displays the gain animation of the first virtual object 1310 in the virtual scene 1300, where the gain The animation is implemented by highlighting the defense effect in the peripheral range of the first virtual object 1310 (the defense effect is represented by a dotted line in Figure 13(a)), and the duration is 5 seconds.
  • the gain animation is implemented by displaying text content specifying the gain effect at a set position of the first virtual object.
•   the client displays, at the set position of the first virtual object, the text content corresponding to the increased attribute value as the gain animation.
  • the user interface currently displays the virtual scene 1300.
•   the client displays the gain animation of the first virtual object 1310 in the virtual scene 1300, where the gain animation is implemented by displaying, at the center of the torso of the first virtual object 1310, the text content corresponding to the designated gain effect "Health +10".
  • the virtual scene also includes an attribute slot corresponding to the first virtual object.
  • the attribute slot includes an attribute value, and the attribute value is used to describe a situation in which the first virtual object possesses attributes.
  • the attribute slot corresponding to the first virtual object is used to represent the real-time attribute value possessed by the first virtual object during the interactive activity with the second virtual object.
•   the attribute slot of the first virtual object is the health value slot (the full value of the health value slot is 100 points).
•   the above attribute value is the real-time health value of the first virtual object in the current activity stage during the interaction between the first virtual object and the second virtual object (the current interactive activity is a virtual game; during the game, if the first virtual object is hit by a normal attack from the second virtual object, the real-time health value of the first virtual object is 90 points, where the attack result of being hit by a normal attack is a reduction of the health value by 10 points).
  • the client also displays the attribute value-added animation corresponding to the first virtual object.
•   the attribute value-added animation refers to the animation in which the attribute value increases from the initial attribute value to the target attribute value along with the gain animation.
•   the amount of attribute value increase between the initial attribute value and the target attribute value is related to the designated gain effect.
  • Property increment animations can also be displayed as part of gain animations.
  • the attribute value-added animation is used to describe the attribute increase amount that produces a specified gain effect corresponding to the specified prop after the first virtual object picks up the specified prop.
  • the attribute slot of the first virtual object is implemented as a health slot, and its initial attribute value is 50 health points (full health value is 100 points).
  • the client while displaying the gain animation of the first virtual object, also displays the animation of the health value in the health value slot of the first virtual object increasing from 50 points to 70 points.
  • the health value increase of 20 points is the designated gain effect corresponding to the designated props.
  • Figure 14 shows a schematic diagram of an attribute value-added animation provided by an exemplary embodiment of the present application.
•   the user interface displays an attribute slot 1410 corresponding to the first virtual object, and the attribute slot 1410 is implemented as the health value slot, where the attribute slot 1410 corresponds to a health value of 50 points as the initial attribute value.
•   when the first virtual object picks up the designated prop, and the designated gain effect of the designated prop is "health value +20 points", then during the display of the gain animation of the first virtual object (not shown in Figure 14), the client displays the attribute value-added animation of the attribute slot 1410, where the attribute value-added animation is represented by the initial health value "50 points" in the attribute slot 1410 growing to the target health value of "70 points".
•   the attribute value in the attribute slot is used to represent the energy value obtained by the first virtual object through the designated prop. This energy value can be used to obtain gain effects, such as obtaining an additional skill, increasing attack power, increasing defense power, etc.
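•   A minimal sketch of the attribute value-added animation above: the attribute slot is interpolated from the initial attribute value to the target attribute value over a short duration. The duration and frame rate are assumptions.

```python
def attribute_increment_frames(initial: int, target: int, duration_s: float = 0.5, fps: int = 30) -> list[int]:
    """Per-frame attribute values for the value-added animation."""
    frames = max(1, int(duration_s * fps))
    return [round(initial + (target - initial) * i / frames) for i in range(frames + 1)]

# Health slot grows from 50 to 70 alongside the gain animation (Figure 14 example).
frames = attribute_increment_frames(50, 70)
print(frames[:5], "...", frames[-1])
```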
•   the virtual object interaction method displays special effect text elements according to the interaction results during the interaction between the first virtual object and the second virtual object, and converts the special effect text elements into designated props in the form of a conversion and drop animation, allowing the interaction results and the feedback benefits of the interaction results to be visualized, thus improving the user interaction experience and increasing the diversity of interaction methods between virtual objects.
•   by converting special effect text elements into designated props and providing them to virtual objects, the efficiency of transmitting information displayed on the interface can be improved.
•   by converting the interaction results into designated props, interaction between virtual objects is stimulated, thereby improving the interactivity between virtual objects. This also helps to shorten the time of interactive activities (such as game play), thereby reducing the cost of gaming.
•   a specified duration threshold is set for the display duration of the special effect text elements, so that the special effect text elements corresponding to different activity stages can be displayed together or replace one another, which enriches the diversity of display methods of special effect text elements.
  • the designated gain effect is determined according to the number and type of the designated props, so that the designated gain effect has multiple different types of effects, which improves the diversity of the gain effect display content.
  • the specified prop not only produces a designated gain effect on the first virtual object, but can also produce a debuff effect on the second virtual object.
  • Figure 15 shows a flow chart of a virtual object interaction method provided by another exemplary embodiment of the present application. As shown in Figure 15, the method includes the following steps:
  • Step 1510 In response to the display duration of the designated prop reaching the prop display threshold, display a first movement animation.
  • the first movement animation is an animation in which the designated prop automatically moves to the first virtual object.
  • the prop display threshold is used to indicate the length of display time after the specified prop is dropped into the virtual scene.
  • the prop display threshold can be a preset fixed value, or the user can freely adjust the range of the prop display threshold according to actual needs, which is not limited in the embodiments of the present application.
•   the prop display threshold corresponds to the activity stage. If the current activity stage ends and the next activity stage begins, the client displays the designated props dropped in the current activity stage automatically moving to the first virtual object, thus ensuring that, in the next activity stage, the first virtual object obtains the designated gain effect corresponding to the designated props dropped in the previous activity stage.
  • the designated prop automatically moves toward the first virtual object, it indicates that the first movement animation has started to be displayed.
  • the animation expression form of the first movement animation includes at least one of the following forms:
  • the first moving animation is implemented as an animation in which multiple designated props automatically move toward the first virtual object one by one;
  • the first moving animation is implemented as an animation in which multiple designated props automatically move to the first virtual object at the same time.
  • the multiple designated props automatically move to the same designated position corresponding to the first virtual object, such as: the multiple designated props move to the torso of the first virtual object.
•   or, the multiple designated props automatically move to different designated positions corresponding to the first virtual object, such as: prop 2 automatically moves to the torso of the first virtual object, and prop 3 automatically moves to the legs of the first virtual object.
  • Figure 16 shows a schematic diagram of the first movement animation provided by an exemplary embodiment of the present application.
  • multiple designated props 1610 are dropped in the current virtual scene 1600.
  • the designated prop 1610 automatically moves to the first virtual object 1620 as the first movement animation.
•   the client displays a gain selection interface at the target position of the designated prop, and the gain selection interface includes at least two candidate gain effects corresponding to the designated prop; in response to a trigger operation on a designated gain effect among the at least two candidate gain effects, the client displays the gain animation corresponding to the first virtual object, and the gain animation corresponds to the designated gain effect.
  • a single designated prop corresponds to multiple different types of candidate gain effects.
•   the gain selection interface corresponding to the designated prop is displayed in the virtual scene.
  • the gain selection interface includes at least two candidate gain effects. The user can select a specified gain effect from the candidate gain effects, and the client displays a gain animation corresponding to the first virtual object according to the specified gain effect.
•   in response to a trigger operation on one designated gain effect among the at least two candidate gain effects, the client displays the gain animation corresponding to that designated gain effect; or, in response to trigger operations on multiple designated gain effects among the at least two candidate gain effects, the client displays the gain animations corresponding to the multiple designated gain effects respectively; or, in response to a continuous trigger operation on multiple designated gain effects among the at least two candidate gain effects, the client combines the multiple designated gain effects to generate a combined gain effect and displays the gain animation corresponding to the combined gain effect. This is not limited in the embodiments of the present application.
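•   A minimal sketch of resolving the trigger operation on the gain selection interface described above: a single selection yields its own gain animation, while a continuous trigger on several candidates yields a combined gain effect. The combination rule (summing values) and names are illustrative assumptions.

```python
def resolve_selection(candidates: dict[str, int], selected: list[str], continuous: bool = False):
    chosen = {name: candidates[name] for name in selected if name in candidates}
    if continuous and len(chosen) > 1:
        # the combined gain effect differs from the individual designated gain effects
        return {"combined": sum(chosen.values())}
    return chosen

candidates = {"health+10": 10, "force+5": 5, "defense+20": 20}
print(resolve_selection(candidates, ["defense+20"]))                  # single selection
print(resolve_selection(candidates, ["health+10", "force+5"], True))  # combined gain effect
```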
  • Figure 17 shows a schematic diagram of a gain selection interface provided by an exemplary embodiment of the present application.
•   the virtual scene 1700 includes a first virtual object 1710 and dropped designated props 1720.
•   when the display duration of the designated props 1720 reaches the prop display threshold, the designated props 1720 automatically move to the first virtual object (not shown in Figure 17).
•   the client displays a gain selection interface 1730, where the gain selection interface 1730 includes at least two candidate gain effects (Figure 17 shows three candidate gain effects, namely "health +10", "force value +5" and "defense value +20"). In response to the trigger operation on the designated gain effect "defense value +20" among the at least two candidate gain effects, the client displays the gain animation 1740 corresponding to the designated gain effect "defense value +20".
•   in response to the designated prop being dropped, the client displays an automatic gain animation corresponding to the first virtual object when the designated prop contacts the first virtual object.
•   the automatic gain animation refers to the animation of generating the designated gain effect corresponding to the designated prop after the first virtual object contacts the designated prop.
•   for example: during the process of the designated prop falling, if the designated prop comes into contact with the head of the first virtual object, the client displays the automatic gain animation of the first virtual object.
  • the automatic gain animation refers to an animation that produces a designated gain effect corresponding to the designated prop on the first virtual object after the designated prop comes into contact with the first virtual object.
  • Step 1520 in response to the display duration of the designated prop reaching the prop display threshold, display a second movement animation.
•   the second movement animation is an animation in which the designated prop automatically moves to the second virtual object, and the designated prop produces a debuff effect on the second virtual object.
  • the prop display threshold in step 1520 may be the same as or different from the prop display threshold in step 1510. Whether the specified prop automatically moves to the first virtual object or the second virtual object may be fixed or may occur randomly, and this is not limited in the embodiments of the present application. Optionally, when the specified prop starts to automatically move toward the second virtual object, it indicates that the second movement animation has started to be displayed.
  • the animation expression form of the second movement animation includes at least one of the following forms:
  • the second movement animation is implemented as an animation in which multiple designated props automatically move toward the second virtual object one by one;
  • the second movement animation is implemented as an animation in which multiple designated props automatically move to the second virtual object at the same time.
  • the debuff effect is opposite to the gain effect.
•   for example, if the gain effect is health value +10, the debuff effect can be realized as health value -10.
•   when the designated prop produces the first movement animation, the designated gain effect that the designated prop produces on the first virtual object corresponds to the debuff effect that the designated prop produces on the second virtual object when it produces the second movement animation. For example: if the designated prop is implemented to increase the health value of the first virtual object by 10, the designated prop can be implemented to reduce the health value of the second virtual object by 10. Optionally, the gain effect and the debuff effect corresponding to the designated prop may not correspond, and they can be set and adjusted according to actual use requirements, which is not limited in the embodiments of the present application.
•   when there are multiple designated props in the virtual scene, the multiple designated props produce a debuff effect on the second virtual object as a whole. For example: prop 1 makes the attack power of the second virtual object -10, and prop 2 makes the defense power of the second virtual object -20; or, when there are multiple designated props in the virtual scene, the multiple designated props can produce different debuff effects on different parts of the second virtual object. This is not limited in the embodiments of the present application.
•   in response to a prop integration operation, the client also displays integrated props in the virtual scene, where the prop integration operation is used to indicate the selection of at least two designated props for integration; that is, in response to the prop integration operation, the client integrates the at least two designated props to generate an integrated prop and displays the integrated prop in the virtual scene.
•   before the first virtual object picks up the designated props (or during the picking process), the client responds to the prop integration operation and integrates at least two designated props in the virtual scene, so that the at least two designated props are displayed as a single integrated prop; that is, the current picking operation may instruct the first virtual object to pick up the integrated prop.
  • the volume of the integrated props can be realized as the sum of the volumes of all specified props used for integration.
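•   A minimal sketch of the prop integration operation above: at least two selected designated props are merged into one integrated prop whose volume is the sum of their volumes. The data class fields are assumptions.

```python
from dataclasses import dataclass

@dataclass
class DesignatedProp:
    prop_id: str
    volume: float

def integrate(selection: list[DesignatedProp]) -> DesignatedProp:
    if len(selection) < 2:
        raise ValueError("prop integration needs at least two designated props")
    return DesignatedProp(prop_id="+".join(p.prop_id for p in selection),
                          volume=sum(p.volume for p in selection))

merged = integrate([DesignatedProp("1822", 1.0), DesignatedProp("1823", 1.0)])
print(merged)  # the integrated prop shown in place of props 1822 and 1823
```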
  • Figure 18 shows a schematic diagram of the prop integration process provided by an exemplary embodiment of the present application.
•   the virtual scene 1800 includes a first virtual object 1810 and a plurality of designated props (that is, designated prop 1821, designated prop 1822, and designated prop 1823). In response to the prop integration operation on the designated prop 1822 and the designated prop 1823, the client selects the designated prop 1822 and the designated prop 1823 for integration and obtains the integrated prop 1820.
•   the client displays the integrated prop 1820 in the virtual scene 1800.
  • the client can also receive a prop triggering operation, which is used to trigger a specified prop to release a specified skill effect within the skill range; after receiving the prop triggering operation, the client displays a skill effect animation, and the skill Effect animation refers to the animation in which specified props release specified skill effects within the skill range.
  • the designated props are implemented as props that have the effect of releasing designated skills.
  • the designated skill effects may include at least one of attack skills, defense skills, etc.
  • the prop triggering operation can be implemented by controlling the first virtual object to attack the designated prop.
•   the animation of the designated prop releasing the designated skill effect within the preset skill range is displayed as the skill effect animation.
  • the client can also display an interactive playback animation at a specified position in the virtual scene, where the interactive playback animation refers to the playback animation of the above-mentioned interactive activity.
  • the specified interactive activity includes multiple activity stages
•   a staged interaction result corresponding to the current activity stage is generated between the first virtual object and the second virtual object in each activity stage. If the first virtual object and the second virtual object start the next activity stage, the client displays the playback animation of the first virtual object and the second virtual object corresponding to the previous activity stage at a designated position in the virtual scene.
•   the virtual object interaction method displays special effect text elements according to the interaction results during the interaction between the first virtual object and the second virtual object, and converts the special effect text elements into designated props in the form of a conversion and drop animation, allowing the interaction results and the feedback benefits of the interaction results to be visualized, thereby improving the user interaction experience and increasing the diversity of interaction methods between virtual objects.
•   by converting special effect text elements into designated props and providing them to virtual objects, the efficiency of transmitting information displayed on the interface can be improved.
•   by converting the interaction results into designated props, interaction between virtual objects is stimulated, thereby improving the interactivity between virtual objects. This also helps to shorten the time of interactive activities (such as game play), thereby reducing the cost of gaming.
  • the designated props are implemented to correspond to at least two candidate gain effects.
•   the user can be provided with more options for designated gain effects, improving the user's interactive interest.
  • the designated prop automatically produces a designated gain effect on the first virtual object, thereby improving the efficiency of human-computer interaction.
  • multiple specified props can be integrated into integrated props through the prop integration operation, which facilitates the picking up of the first virtual object and improves the picking efficiency of the first virtual object.
•   the prop effects corresponding to the designated prop are enriched.
  • the user can playback and view the interactive activities in the previous activity stage, thereby improving the efficiency of human-computer interaction.
•   every time the first virtual object attacks and hits the second virtual object, the client (or server) performs a tag judgment; for example, the client determines, based on the existing hit tags, which consecutive hit the current skill hit corresponds to.
  • Step 1910 release the skill.
  • the current virtual environment includes a first virtual object and a second virtual object.
  • the first virtual object and the second virtual object engage in a virtual game in the virtual environment.
•   the player controls the first virtual object to release skills toward the second virtual object, so that the second virtual object is attacked, which constitutes the above-mentioned interactive activity.
•   when the player controls the first virtual object to release a skill toward the second virtual object and hits the second virtual object, the client first determines whether the hit is blocked by the second virtual object, where blocking means that the skill released by the first virtual object hits the second virtual object, but the second virtual object uses a blocking skill to avoid being damaged by it. If the hit skill is not blocked by the second virtual object, the client then determines whether a special effect text element has already been displayed in the current virtual scene, where the special effect text element is implemented as a hit tag. When the first virtual object releases a skill and hits the second virtual object, and the hit is not blocked by the second virtual object, it is determined that an activity stage is completed in the current round, and the staged interaction result corresponding to that activity stage is that the first virtual object releases a skill and hits the second virtual object.
•   Step 1920 When no hit tag exists, the "single press" font is displayed.
•   based on the hit situation in the above step 1910, the client displays the "single press" font in the virtual scene. The "single press" font is used to indicate that the first virtual object hits the second virtual object for the first time in the current round, and the process from the beginning of the current round to the first hit is the first activity stage within the current round.
  • Step 1930 When there is a “Single Press” hit label, the “Double Press” font is displayed.
•   based on the hit situation in the above step 1910, the client displays the "double press" font in the virtual scene.
•   the "double press" font is used to indicate that the first virtual object hits the second virtual object for the second time in the current round, and the game process from the first hit to the second hit is regarded as the second activity stage within the current round.
  • Step 1940 When there is a “Double Press” hit tag, the “Triple Press” font is displayed.
•   based on the hit situation in the above step 1910, the client displays the "triple press" font in the virtual scene.
•   the "triple press" font is used to indicate that the first virtual object hits the second virtual object for the third time in the current round, and the game process from the second hit to the third hit is regarded as the third activity stage within the current round.
  • the completion of the above interaction process means that the first virtual object and the second virtual object complete the game process in the current round.
•   the virtual object interaction method displays special effect text elements according to the interaction results during the interaction between the first virtual object and the second virtual object, and converts the special effect text elements into designated props in the form of a conversion and drop animation, allowing the interaction results and the feedback benefits of the interaction results to be visualized, thus improving the user interaction experience and increasing the diversity of interaction methods between virtual objects.
•   by converting special effect text elements into designated props and providing them to virtual objects, the efficiency of transmitting information displayed on the interface can be improved.
•   by converting the interaction results into designated props, interaction between virtual objects is stimulated, thereby improving the interactivity between virtual objects. This also helps to shorten the time of interactive activities (such as game play), thereby reducing the cost of gaming.
•   each level's hit tag has a corresponding display duration. If a hit occurs again while the hit tag is displayed, the next level's hit tag is displayed; if no further skill hit occurs within the display duration of the hit tag, the special effect text element is cancelled and all hit tags are cleared. Each time the display duration of a special effect text element reaches the display duration threshold, it is converted into energy crystals and dropped; after the first virtual object picks them up, they produce a designated gain effect on the first virtual object, and different special effect text elements correspond to different numbers of designated props.
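•   A minimal sketch of the hit-tag progression summarized above: a hit within the current tag's display duration advances to the next tag, a timeout clears all tags, and each tag maps to its number of energy crystals. The tag names, 3-second duration, and cap at "triple press" are assumptions drawn from the examples.

```python
import time

TAGS = ["single press", "double press", "triple press"]
CRYSTALS = {"single press": 1, "double press": 2, "triple press": 3}

class HitTagTracker:
    def __init__(self, display_duration_s: float = 3.0):
        self.duration = display_duration_s
        self.level = -1          # -1 means no hit tag is currently displayed
        self.last_hit = 0.0

    def on_skill_hit(self) -> str:
        now = time.monotonic()
        if self.level >= 0 and now - self.last_hit > self.duration:
            self.level = -1      # timed out: cancel the element, clear all hit tags
        self.level = min(self.level + 1, len(TAGS) - 1)
        self.last_hit = now
        return TAGS[self.level]  # tag (special effect text) to display

    def crystals_to_drop(self) -> int:
        return CRYSTALS[TAGS[self.level]] if self.level >= 0 else 0
```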
•   Figure 20 shows a flow chart of a virtual object interaction method provided by another exemplary embodiment of the present application. As shown in Figure 20, the method includes the following steps:
  • the virtual scene includes a first virtual object and a second virtual object.
•   the client controls the first virtual object and the second virtual object to perform interactive activities in the virtual scene, where the interactive activities are implemented as a virtual game between the two.
  • the client determines whether a hit tag already exists in the current virtual scene.
•   Step 2020 display the "single press" font.
•   when there is no special effect text element in the virtual scene, that is, when the client does not display any hit tags, the client displays the "single press" font of the special effect text element above the first virtual object.
  • the "single press" font is used to represent the first virtual object releasing a skill that hits the second virtual object for the first time in the current virtual scene.
•   Step 2021 add the "single press" tag, and convert the "single press" into 1 energy crystal to drop.
•   when the "single press" font of the special effect text element is displayed in the virtual scene, the client adds a "single press" tag to the virtual scene and displays the conversion and drop animation.
•   the conversion and drop animation is implemented by converting the "single press" font of the special effect text element into designated props and dropping them into the virtual scene, where the designated props are realized as 1 energy crystal.
  • Step 2030 display the "double press” font.
  • the "double press" font is used to represent the situation in the current virtual scene when the first virtual object releases a skill and hits the second virtual object for the second time.
  • Step 2031 Add the "Double Press” tag and convert the “Double Press” into 2 energy crystals to drop.
  • the client When the "Double Press” font of the special effects text element is displayed in the virtual scene, the client adds a "Double Press” label to the virtual scene and displays the conversion and drop animation.
  • the conversion and drop animation is implemented as the "Double Press” font of the special effects text element.
  • "press” font is converted into designated props and dropped into the virtual scene. Among them, the designated props are realized as 2 energy crystals.
•   Step 2040 Display the "triple press" font.
•   the "triple press" font is used to represent the situation in the current virtual scene in which the first virtual object releases a skill and hits the second virtual object for the third time.
•   Step 2041 Add the "triple press" tag and convert the "triple press" into 3 energy crystals to drop.
•   when the "triple press" font of the special effect text element is displayed in the virtual scene, the client adds the "triple press" tag to the virtual scene and displays the conversion and drop animation.
•   the conversion and drop animation is implemented by converting the "triple press" font of the special effect text element into designated props and dropping them into the virtual scene, where the designated props are realized as 3 energy crystals.
  • Step 2050 pick up the energy crystal.
  • both the first virtual object and the second virtual object can pick up the energy crystal, where the energy crystal produces different effects on the first virtual object and the second virtual object.
  • Step 2060 The second virtual object picks up the energy crystal.
•   when the second virtual object picks up an energy crystal, the energy crystal has a debuff effect on the second virtual object; or, the energy crystal does not produce any effect on the second virtual object.
  • Step 2070 The first virtual object picks up the energy crystal.
•   when the first virtual object picks up the energy crystal, the energy crystal produces a designated gain effect on the first virtual object.
  • Step 2080 end.
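•   A minimal sketch consolidating the pickup branches of the Figure 20 flow (steps 2050 to 2080): the first virtual object receives a designated gain effect from an energy crystal, while the second virtual object receives a debuff effect or, alternatively, no effect. The numeric values are illustrative assumptions.

```python
def on_crystal_picked(picker: str, debuff_second_object: bool = True) -> dict[str, int]:
    if picker == "first_virtual_object":
        return {"gain": +10}                                   # designated gain effect
    if picker == "second_virtual_object":
        return {"debuff": -10} if debuff_second_object else {}  # debuff or no effect
    return {}

print(on_crystal_picked("first_virtual_object"))
print(on_crystal_picked("second_virtual_object"))
```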
•   the virtual object interaction method displays special effect text elements according to the interaction results during the interaction between the first virtual object and the second virtual object, and converts the special effect text elements into designated props in the form of a conversion and drop animation, allowing the interaction results and the feedback benefits of the interaction results to be visualized, thus improving the user interaction experience and increasing the diversity of interaction methods between virtual objects.
•   by converting special effect text elements into designated props and providing them to virtual objects, the efficiency of transmitting information displayed on the interface can be improved.
•   by converting the interaction results into designated props, interaction between virtual objects is stimulated, thereby improving the interactivity between virtual objects. This also helps to shorten the time of interactive activities (such as game play), thereby reducing the cost of gaming.
  • the technical solution provided by the embodiments of the present application can increase the hit feedback when the player hits a skill, and strengthens the player's perception of the gain effect after hitting the skill, thereby improving the user interaction experience.
  • Figure 21 is a structural block diagram of a virtual object interaction device provided by an exemplary embodiment of the present application.
  • the device may include the following parts: a display module 2110 and a receiving module 2120.
  • the display module 2110 is used to display the first virtual object and the second virtual object in the virtual scene;
  • the receiving module 2120 is configured to control the first virtual object and the second virtual object to perform interactive activities in the virtual scene in response to the interactive operation.
  • the display module 2110 is also configured to display special effect text elements in the virtual scene, where the special effect text elements correspond to the interaction results between the first virtual object and the second virtual object.
  • the display module 2110 is also used to display the conversion and drop animation of the special effect text element.
•   the conversion and drop animation refers to the animation in which the special effect text element is converted into a designated prop and dropped into the virtual scene.
  • the display module 2110 includes: a display unit 2111.
•   the display unit 2111 is configured to display the conversion and drop animation of the special effect text element based on the specified number of the designated props.
•   the conversion and drop animation refers to the animation in which the special effect text element is converted into the specified number of designated props and dropped into the virtual scene, wherein the specified number corresponds to the text content of the special effect text element.
  • the interactive activity includes multiple activity stages
  • the display unit 2111 is also configured to display a special effect text element corresponding to the staged interaction result at a designated position corresponding to the second virtual object in the virtual scene.
•   the text content of the special effect text element corresponds to the staged interaction results under the activity stage.
  • the display module 2110 also includes: an acquisition unit 2112.
  • the obtaining unit 2112 is used to obtain the specified number corresponding to the staged interaction result.
  • the display module 2110 is further configured to display the transition and drop animation of the special effect text element in response to the display duration of the special effect text element reaching a specified duration threshold.
  • the interactive activity includes multiple activity stages, the i-th activity stage corresponds to the i-th special effect text element, and i is a positive integer;
•   the display module 2110 is also configured to display the conversion and drop animation of the i-th special effect text element in response to the display duration of the i-th special effect text element reaching the specified duration threshold, when the staged interaction result corresponding to the (i+1)-th activity stage is not received within the specified duration threshold.
  • the specified prop produces a specified gain effect on the first virtual object
•   the receiving module 2120 is also used to control the first virtual object to pick up the designated props in the virtual scene in response to the picking operation.
  • the display module 2110 is also used to display the gain animation corresponding to the first virtual object.
•   the gain animation refers to the animation in which, after the first virtual object picks up the designated prop, the designated gain effect corresponding to the designated prop is generated; wherein the designated gain effect is related to the number of the designated props picked up by the first virtual object, or the designated gain effect is related to the type of the designated props picked up by the first virtual object.
  • the virtual environment includes an attribute slot corresponding to the first virtual object, and the attribute slot includes an attribute value
  • the display module 2110 is also used to display the attribute value-added animation corresponding to the first virtual object.
•   the attribute value-added animation refers to the animation in which the attribute value increases from the initial attribute value to the target attribute value.
•   the amount of attribute value increase between the initial attribute value and the target attribute value is related to the designated gain effect.
  • the display module 2110 is also used to:
•   display a gain selection interface at the target position of the designated prop, where the gain selection interface includes at least two candidate gain effects corresponding to the designated prop;
•   in response to a trigger operation on a designated gain effect among the at least two candidate gain effects, display a gain animation corresponding to the first virtual object, the gain animation corresponding to the designated gain effect.
  • the display module 2110 is further configured to display an automatic gain animation corresponding to the first virtual object in response to the designated prop coming into contact with the first virtual object while the designated prop is falling.
  • the automatic gain animation refers to an animation in which, after the first virtual object contacts the designated prop, the designated gain effect corresponding to the designated prop is produced.
  • the receiving module 2120 is further configured to display integrated props in the virtual scene in response to a prop integration operation, where the prop integration operation is used to instruct the selection of at least two specified props for integration.
  • the receiving module 2120 is also used to:
  • receive a prop trigger operation, the prop trigger operation being used to trigger the designated prop to release a designated skill effect within a skill range;
  • display a skill effect animation, which is an animation of the designated prop releasing the designated skill effect within the skill range.
  • the display module 2110 is also configured to display a first movement animation in response to the display duration of the designated prop reaching a prop display threshold, where the first movement animation is an animation in which the designated prop automatically moves toward the first virtual object.
  • the display module 2110 is also configured to display a second movement animation in response to the display duration of the designated prop reaching the prop display threshold, where the second movement animation is an animation in which the designated prop automatically moves toward the second virtual object, and the designated prop produces a debuff effect on the second virtual object.
  • the display module 2110 is also used to:
  • display a shrink-and-disappear animation of the special effect text element, the shrink-and-disappear animation being an animation in which the special effect text element shrinks at the designated position corresponding to the second virtual object and is then no longer displayed;
  • obtain a first coordinate of the designated position in the world coordinate system corresponding to the virtual scene as the starting coordinate of the designated prop, and obtain a second coordinate corresponding to the first coordinate in the world coordinate system as the landing coordinate of the designated prop;
  • obtain drop path data of the designated prop based on the first coordinate and the second coordinate, and display the conversion-and-drop animation of the dropping designated prop according to the drop path data;
  • the display module 2110 is also used to obtain a texture material set corresponding to the designated prop, obtain from the texture material set a texture material image corresponding to the viewing angle of the designated prop, and display the texture material image along the drop trajectory corresponding to the drop path data.
  • In summary, the virtual object interaction device provided by the embodiments of this application displays special effect text elements according to the interaction results during the interaction between the first virtual object and the second virtual object, and converts the special effect text elements into designated props in the form of a conversion-and-drop animation, so that the interaction results and the feedback benefits of those results are visualized, which improves the user interaction experience and increases the diversity of interaction methods between virtual objects.
  • In addition, by converting special effect text elements into designated props and providing them to virtual objects, the efficiency of conveying the information displayed on the interface can be improved.
  • Moreover, converting the interaction results into designated props helps stimulate interaction between virtual objects, thereby improving the interactivity between them; this also helps shorten the duration of interactive activities (such as game matches), thereby reducing the demand that such matches place on the processing resources of terminal devices and servers.
  • It should be noted that the virtual object interaction device provided in the above embodiments is illustrated only by way of the division into the functional modules described above; in practical applications, the above functions can be allocated to different functional modules as needed, that is, the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above (a minimal code sketch of one possible module wiring is given below).
  • the interactive device for virtual objects and the interactive method for virtual objects provided in the above embodiments belong to the same concept. The specific implementation process can be found in the method embodiments and will not be described again here.
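For illustration only, the following minimal sketch (not the patent's implementation) shows one possible way a display module 2110 and receiving module 2120 of the kind described above could be wired together; all class, method, and field names, as well as the gain rule, are assumptions introduced here.

```python
# A minimal, illustrative module division: the receiving module handles
# interaction and pick-up operations and asks the display module to show
# the text effect, the conversion-and-drop animation, and the gain animation.

class DisplayModule:
    def __init__(self):
        self.scene_log = []          # stands in for actual rendering calls

    def show_text_effect(self, text, position):
        self.scene_log.append(f"text effect '{text}' at {position}")

    def show_conversion_drop(self, text, prop_count):
        # the special effect text element turns into `prop_count` designated props
        self.scene_log.append(f"'{text}' converts into {prop_count} props and drops")

    def show_gain_animation(self, gain):
        self.scene_log.append(f"gain animation: {gain}")


class ReceivingModule:
    def __init__(self, display):
        self.display = display

    def on_interaction(self, staged_result, position):
        # mapping a staged result to label text and prop count is an assumption
        text = staged_result["label"]
        self.display.show_text_effect(text, position)
        self.display.show_conversion_drop(text, staged_result["prop_count"])

    def on_pickup(self, prop_count):
        # one possible rule: more picked-up props yield a larger gain
        self.display.show_gain_animation(f"+{10 * prop_count} HP")


if __name__ == "__main__":
    display = DisplayModule()
    receiver = ReceivingModule(display)
    receiver.on_interaction({"label": "double hit", "prop_count": 2}, position=(3, 0))
    receiver.on_pickup(prop_count=2)
    print("\n".join(display.scene_log))
```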
  • the embodiments of this application may also include the following content:
  • An interaction method for virtual objects, the method being executed by a terminal device, the method including:
  • displaying a first virtual object and a second virtual object in a virtual scene; in response to an interaction operation, controlling the first virtual object and the second virtual object to perform an interactive activity in the virtual scene; and displaying a special effect text element in the virtual scene, the special effect text element corresponding to an interaction result between the first virtual object and the second virtual object;
  • displaying a conversion-and-drop animation of the special effect text element, the conversion-and-drop animation referring to an animation in which the special effect text element is converted into a designated prop and drops into the virtual scene.
  • the conversion and drop animation of the special effect text element is displayed based on the specified number of the designated props.
  • the conversion-and-drop animation means that the special effect text element is converted into a designated number of designated props and dropped into the virtual scene;
  • the specified number corresponds to the text content of the special effect text element.
  • the staged interaction results refer to the interaction results under the activity stage.
  • in response to the display duration of the special effect text element reaching a specified duration threshold, the conversion-and-drop animation of the special effect text element is displayed.
  • the interactive activity includes multiple activity stages, the i-th activity stage corresponds to the i-th special effect text element, and i is a positive integer;
  • displaying the conversion and drop animation of the special effect text element includes:
  • in response to the display duration of the i-th special effect text element reaching the specified duration threshold and the staged interaction result corresponding to the (i+1)-th activity stage not being received within the specified duration threshold, the conversion-and-drop animation of the i-th special effect text element is displayed.
  • the method further includes:
  • the gain animation refers to the animation that generates the designated gain effect corresponding to the designated prop after the first virtual object picks up the designated prop;
  • the designated gain effect is related to the number of the designated props picked up by the first virtual object; or, the designated gain effect is related to the type of the designated props picked up by the first virtual object.
  • the displaying the gain animation corresponding to the first virtual object includes:
  • the attribute value-added animation refers to an animation in which the attribute value increases from an initial attribute value to a target attribute value, and the amount of attribute value increase between the initial attribute value and the target attribute value is related to the designated gain effect.
  • a gain selection interface is displayed at the target position of the designated prop, the gain selection interface including at least two candidate gain effects corresponding to the designated prop;
  • in response to a trigger operation on a designated gain effect among the at least two candidate gain effects, a gain animation corresponding to the first virtual object is displayed, the gain animation corresponding to the designated gain effect.
  • in response to the designated prop coming into contact with the first virtual object while the designated prop is falling, an automatic gain animation corresponding to the first virtual object is displayed.
  • the automatic gain animation refers to an animation in which, after the first virtual object contacts the designated prop, the designated gain effect corresponding to the designated prop is produced.
  • in response to a prop integration operation, an integrated prop is displayed in the virtual scene, the prop integration operation being used to instruct selection of at least two designated props for integration.
  • a prop trigger operation is received, the prop trigger operation being used to trigger the designated prop to release a designated skill effect within a skill range;
  • a skill effect animation is displayed, which is an animation of the designated prop releasing the designated skill effect within the skill range.
  • in response to the display duration of the designated prop reaching a prop display threshold, a first movement animation is displayed, where the first movement animation is an animation in which the designated prop automatically moves toward the first virtual object.
  • in response to the display duration of the designated prop reaching the prop display threshold, a second movement animation is displayed, the second movement animation being an animation in which the designated prop automatically moves toward the second virtual object, and the designated prop produces a debuff effect on the second virtual object.
  • a shrink-and-disappear animation of the special effect text element is displayed, the shrink-and-disappear animation being an animation in which the special effect text element shrinks at the designated position corresponding to the second virtual object and is then no longer displayed; a first coordinate of the designated position in the world coordinate system of the virtual scene is obtained as the starting coordinate of the designated prop, a second coordinate corresponding to the first coordinate is obtained as the landing coordinate, and drop path data of the designated prop is obtained based on the two coordinates;
  • a texture material image corresponding to the viewing angle of the designated prop is obtained from the texture material set of the designated prop, and the texture material image is displayed along the drop trajectory corresponding to the drop path data.
  • In summary, the virtual object interaction device provided by the embodiments of this application displays special effect text elements according to the interaction results during the interaction between the first virtual object and the second virtual object, and converts the special effect text elements into designated props in the form of a conversion-and-drop animation, so that the interaction results and the feedback benefits of those results are visualized, which improves the user interaction experience and increases the diversity of interaction methods between virtual objects.
  • In addition, by converting special effect text elements into designated props and providing them to virtual objects, the efficiency of conveying the information displayed on the interface can be improved.
  • Moreover, converting the interaction results into designated props helps stimulate interaction between virtual objects, thereby improving the interactivity between them; this also helps shorten the duration of interactive activities (such as game matches), thereby reducing the demand that such matches place on the processing resources of terminal devices and servers.
  • Figure 23 shows a structural block diagram of a terminal device 2300 provided by an exemplary embodiment of the present application.
  • the terminal device 2300 can be: a smart phone, a tablet computer, an MP3 player, an MP4 player, a laptop computer or a desktop computer.
  • the terminal device 2300 may also be called a user device, a portable terminal, a laptop terminal, a desktop terminal, and other names.
  • the terminal device 2300 includes: a processor 2301 and a memory 2302.
  • the processor 2301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, etc.
  • the processor 2301 can adopt at least one hardware form among DSP (Digital Signal Processing, digital signal processing), FPGA (Field-Programmable Gate Array, field programmable gate array), and PLA (Programmable Logic Array, programmable logic array).
  • the processor 2301 can also include a main processor and a co-processor.
  • the main processor is a processor used to process data in the awake state, also called a CPU (Central Processing Unit); the co-processor is a low-power processor used to process data in the standby state.
  • the processor 2301 may be integrated with a GPU (Graphics Processing Unit, image processor), and the GPU is responsible for rendering and drawing the content that needs to be displayed on the display screen.
  • the processor 2301 may also include an AI (Artificial Intelligence, artificial intelligence) processor, which is used to process computing operations related to machine learning.
  • Memory 2302 may include one or more computer-readable storage media, which may be non-transitory. Memory 2302 may also include high-speed random access memory and non-volatile memory, such as one or more disk storage devices or flash memory storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 2302 is used to store a computer program, and the computer program is executed by the processor 2301 to implement the virtual object interaction method provided by the method embodiments of this application.
  • the terminal device 2300 also includes other components. Those skilled in the art can understand that the structure shown in Figure 23 does not constitute a limitation on the terminal device 2300, which may include more or fewer components than shown in the figure, combine certain components, or use a different component arrangement.
  • all or part of the steps of the above embodiments can be completed by a program instructing the relevant hardware, and the program can be stored in a computer-readable storage medium.
  • the computer-readable storage medium may be a computer-readable storage medium included in the memory of the above embodiment; it may also be a computer-readable storage medium that exists independently and is not assembled into the terminal device.
  • a computer program is stored in the computer-readable storage medium, and the computer program is loaded and executed by the processor to implement the virtual object interaction method in any of the above embodiments.
  • the computer-readable storage medium may include: Read Only Memory (ROM, Read Only Memory), Random Access Memory (RAM, Random Access Memory), Solid State Drives (SSD, Solid State Drives) or optical disks, etc.
  • random access memory can include resistive random access memory (ReRAM, Resistance Random Access Memory) and dynamic random access memory (DRAM, Dynamic Random Access Memory).
  • a computer program product includes a computer program, and the computer program is stored in a computer-readable storage medium.
  • the processor of the terminal device reads the computer program from the computer-readable storage medium, and the processor executes the computer program, so that the terminal device performs the virtual object interaction method described in any of the above embodiments.

Abstract

一种虚拟对象的互动方法、装置、设备、存储介质及程序产品,涉及虚拟环境技术领域。该方法包括:显示处于虚拟场景中的第一虚拟对象和第二虚拟对象(510);响应于互动操作,控制第一虚拟对象与第二虚拟对象在虚拟场景中进行互动活动(520);在虚拟场景中显示特效文本元素,该特效文本元素与第一虚拟对象和第二虚拟对象之间的互动结果相对应(530);显示特效文本元素的转换掉落动画,该转换掉落动画是指特效文本元素转化为指定道具,并掉落至虚拟场景中的动画(540)。本申请实施例通过采用基于互动结果显示特效文本元素,并显示特效文本元素转化为指定道具的动画方式,提高了虚拟对象之间的互动性,同时也提高了用户的互动体验。

Description

虚拟对象的互动方法、装置、设备、存储介质及程序产品
本申请要求于2022年05月31日提交的申请号为202210611101.7、发明名称为“虚拟对象的互动方法、装置、设备、存储介质及程序产品”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请实施例涉及虚拟环境技术领域,特别涉及一种虚拟对象的互动方法、装置、设备、存储介质及程序产品。
背景技术
随着计算机技术的快速发展和终端设备的多样化,电子游戏的应用逐渐广泛,其中,格斗类游戏是一种较为流行的游戏,其可通过终端设备显示虚拟场景,用户则可以控制虚拟对象在虚拟场景中与其他虚拟对象进行虚拟对局,来获取对局胜利。
在相关技术中,虚拟场景对应的虚拟环境画面中会显示双方玩家所控制的虚拟对象的资料,以及虚拟对象的属性值,在玩家控制虚拟对象与其他玩家控制的敌方虚拟对象进行游戏对局的过程中,若玩家控制的虚拟对象对敌方虚拟对象进行攻击并命中后,敌方虚拟对象的属性值则将显示减少,以用于表示敌方虚拟对象命中本次攻击,并受到属性值减少影响。
然而相关技术中,仅通过判断是否命中攻击操作,来判断敌方虚拟对象属性值是否显示减少,这种游戏对局中的互动显示方式较为单一。
发明内容
本申请实施例提供了一种虚拟对象的互动方法、装置、设备、存储介质及程序产品,用于提高互动显示方式的多样性,以及虚拟对象之间的交互性。所述技术方案如下:
一方面,提供了一种虚拟对象的交互方法,所述方法包括:
显示处于虚拟场景中的第一虚拟对象和第二虚拟对象;
响应于互动操作,控制所述第一虚拟对象与所述第二虚拟对象在所述虚拟场景中进行互动活动;
在所述虚拟场景中显示特效文本元素,所述特效文本元素与所述第一虚拟对象和所述第二虚拟对象之间的互动结果相对应;
显示所述特效文本元素的转换掉落动画,所述转换掉落动画是指所述特效文本元素转化为指定道具,并掉落至所述虚拟场景中的动画。
另一方面,提供了一种虚拟对象的控制装置,所述装置包括:
显示模块,用于显示处于虚拟场景中的第一虚拟对象和第二虚拟对象;
接收模块,用于响应于互动操作,控制所述第一虚拟对象与所述第二虚拟对象在所述虚拟场景中进行互动活动;
所述显示模块,还用于在所述虚拟场景中显示特效文本元素,所述特效文本元素与所述第一虚拟对象和所述第二虚拟对象之间的互动结果相对应;
所述显示模块,还用于显示所述特效文本元素的转换掉落动画,所述转换掉落动画是指所述特效文本元素转化为指定道具,并掉落至所述虚拟场景中的动画。
另一方面,提供了一种终端设备,所述终端设备包括处理器和存储器,所述存储器中存储有计算机程序,所述计算机程序由所述处理器加载并执行以实现上述虚拟对象的互动方法。
另一方面,提供了一种计算机可读存储介质,所述计算机可读存储介质中存储有计算机程序,所述计算机程序由处理器加载并执行以实现上述虚拟对象的互动方法。
另一方面,提供了一种计算机程序产品,该计算机程序产品包括计算机程序,该计算机程序存储在计算机可读存储介质中。终端设备的处理器从计算机可读存储介质读取该计算机程序,处理器执行该计算机程序,使得该终端设备执行上述虚拟对象的互动方法。
本申请的提供的技术方案至少包括以下有益效果:
通过在第一虚拟对象和第二虚拟对象进行互动活动的过程中,根据互动结果显示特效文本元素,并以转换掉落动画的形式将特效文本元素转换为指定道具,使得互动结果、以及互动结果的反哺收益可视化,增加了虚拟对象之间的互动方式的多样性。此外,通过将特效文本元素转换成指定道具提供给虚拟对象的方式,能够提高界面显示信息的传递效率。另外,通过将互动结果转换为指定道具,有利于激发虚拟对象之间进行交互,从而提高虚拟对象之间的互动性,这也有助于缩短交互活动(如游戏对局)的时间,进而降低游戏对局对终端设备和服务器的处理资源的占用需求。
附图说明
图1是相关技术提供的虚拟对象的互动方法的示意图;
图2是本申请一个示例性实施例提供的虚拟对象的互动方法的示意图;
图3是本申请一个示例性实施例提供的电子设备的结构框图;
图4是本申请一个示例性实施例提供的方案实施环境的示意图;
图5是本申请一个示例性实施例提供的虚拟对象的互动方法的流程图;
图6是本申请另一个示例性实施例提供的虚拟对象的互动方法的流程图;
图7是本申请一个示例性实施例提供的特效文本元素内容显示方法的示意图;
图8是本申请另一个示例性实施例提供的特效文本元素内容显示方法的示意图;
图9是本申请另一个示例性实施例提供的指定道具生成过程的示意图;
图10是本申请另一个示例性实施例提供的指定道具生成过程的示意图;
图11是本申请一个示例性实施例提供的指定道具拾取过程的示意图;
图12是本申请另一个示例性实施例提供的虚拟对象的互动方法的流程图;
图13是本申请一个示例性实施例提供的增益效果显示方式的示意图;
图14是本申请一个示例性实施例提供的属性增值动画的示意图;
图15是本申请另一个示例性实施例提供的虚拟对象的互动方法的流程图;
图16是本申请一个示例性实施例提供的第一移动动画的示意图;
图17是本申请另一个示例性实施例提供的增益选择界面的示意图;
图18是本申请一个示例性实施例提供的道具整合过程的示意图;
图19是本申请另一个示例性实施例提供的虚拟对象的互动方法的流程图;
图20是本申请另一个示例性实施例提供的虚拟对象的互动方法的流程图;
图21是本申请一个示例性实施例提供的虚拟对象的互动装置的结构框图;
图22是本申请另一个示例性实施例提供的虚拟对象的互动装置的结构框图;
图23是本申请一个示例性实施例提供的终端设备的结构框图。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请实施方式作进一步地详细描述。
请参考图1,图1是相关技术提供的虚拟对象的互动方法的示意图。如图1所示,在相关技术中,以虚拟场景实现为对局场景100为例,对局场景100中包括玩家控制的第一虚拟对象110,以及其他玩家控制的第二虚拟对象120,第一虚拟对象110和第二虚拟对象120进行虚拟对局。第一虚拟对象110使用攻击技能连续命中第二虚拟对象120后,对局场景100中显示攻击技能连续命中对应的连招标签130,其中,连招标签130实现为描述第一虚拟对象110当前使用攻击技能连续命中第二虚拟对象120的连续命中次数(如连续命中次数为5次),以用于展示第一虚拟对象110和第二虚拟对象120之间的对局结果。
然而在上述相关技术中,在两个虚拟对象进行游戏对局过程中,仅针对虚拟对象的连续命中次数进行特别显示的方式,仅能够让玩家了解到使用攻击技能进行攻击后对应的当前攻击结果,其并不能实质性地使玩家对于命中效果有更好的感知,使得高端玩家的游戏成就感 较低,且虚拟对象之间的当前互动方式仅单一地以特效形式进行显示,这导致玩家之间的互动性较低。
请参考图2,其示出了本申请一个示例性实施例提供的虚拟对象的互动方法的示意图,在本申请实施例提供的虚拟对象的互动方法中,虚拟场景200中包括第一虚拟对象210和第二虚拟对象220,响应于玩家的互动操作,显示第一虚拟对象210和第二虚拟对象220之间进行指定互动活动的过程,其中,指定互动活动可以实现为第一虚拟对象210和第二虚拟对象220之间使用技能进行游戏对局。
第一虚拟对象210和第二虚拟对象220之间的互动结果可实现为:在第一虚拟对象210通过使用技能对第二虚拟对象220进行攻击,并命中第二虚拟对象220的情况下,在第二虚拟对象220对应的指定位置处显示特效文本元素230,当前特效文本元素230实现为“单押”,用于表示第一虚拟对象210的攻击第一次命中第二虚拟对象220。
然后,虚拟场景200中还可以显示特效文本元素230的转换掉落动画,该转换掉落动画为:“单押”字样转化成指定道具240掉落至虚拟场景200中,此时,玩家可以通过拾取操作,控制第一虚拟对象210在虚拟场景200中对指定道具240进行拾取。
综上所述,相比于相关技术,本申请实施例提供的虚拟对象的互动方法,通过在第一虚拟对象和第二虚拟对象进行互动活动的过程中,根据互动结果显示特效文本元素,并以转换掉落动画的形式将特效文本元素转换为指定道具,使得互动结果、以及互动结果的反哺收益可视化,从而提高了用户交互体验,以及增加了虚拟对象之间的互动方式的多样性。此外,通过将特效文本元素转换成指定道具提供给虚拟对象的方式,能够提高界面显示信息的传递效率。另外,通过将互动结果转换为指定道具,有利于激发虚拟对象之间进行交互,从而提高虚拟对象之间的互动性,这也有助于缩短交互活动(如游戏对局)的时间,进而降低游戏对局对终端设备和服务器的处理资源的占用需求。
在一些实施例中,本申请实施例提供的技术方案可以通过终端设备单独实现,或者,本申请实施例提供的技术方案可通过服务器单独实现,或者本申请实施例提供的技术方案可通过终端设备和服务器共同实现,本申请实施例对此不加以限定。
由于通过终端设备或者服务器单独实现的方式相同,本申请实施例以终端设备单独实现为例,终端设备运行有支持虚拟环境的目标应用程序,该目标应用程序可以是单机版的应用程序,比如单机版的3D游戏程序,也可以是联机应用程序、联网应用程序等。
本申请实施例中,以终端设备中安装的目标应用程序为单机版的应用程序为例,则目标应用程序在终端设备中运行时,终端设备显示虚拟场景,虚拟场景中包含第一虚拟对象和第二虚拟对象,在根据互动操作使第一虚拟对象和第二虚拟对象进行指定互动活动的过程中,目标应用程序的客户端根据第一虚拟对象和第二虚拟对象之间的互动结果,显示特效文本元素,并显示特效文本元素转换为指定道具掉落至虚拟场景中的转换掉落动画,用户可以在终端上通过拾取操作,控制第一虚拟对象在虚拟场景中拾取指定道具。
可选的,终端设备可以是台式计算机、膝上型便携计算机、手机、平板电脑、电子书阅读器、MP3(Moving Picture Experts Group Audio Layer III,动态影像专家压缩标准音频层面3)播放器、MP4(Moving Picture Experts Group Audio Layer IV,动态影像专家压缩标准音频层4)播放器等电子设备。
示意性地,图3示出了本申请一个示例性实施例提供的电子设备的结构框图。该电子设备300包括:操作系统320和应用程序322。操作系统320是为应用程序322提供对计算机硬件的安全访问的基础软件。应用程序322是支持虚拟环境的应用程序。可选地,应用程序322是支持三维虚拟环境的应用程序。该应用程序322可以是虚拟现实应用程序、三维地图程序、自走棋游戏、益智类游戏、格斗类游戏、第三人称射击游戏(Third-Person Shooting Game,TPS)、第一人称射击游戏(First-Person Shooting Game,FPS)、多人在线战术竞技游戏 (Multiplayer Online Battle Arena Games,MOBA)、多人枪战类生存游戏中的任意一种。该应用程序322可以是单机版的应用程序,比如单机版的三维游戏程序,也可以是网络联机版的应用程序,本申请实施例对此不加以限定。
可选地,本申请实施例提供的技术方案可通过终端设备和服务器共同实现。示意性的,请参考图4,其示出了一个本申请实施例的方案实施环境的示意图。如图4所示,该实施环境中包括终端设备410、服务器420和通信网络430,其中,终端设备410和服务器420通过通信网络430进行连接。
终端设备410运行有支持虚拟场景的目标应用程序411。以格斗类游戏为例,如图4所示,当前目标应用程序实现为网络联机版的应用程序时,终端设备410当前显示目标应用程序411对应的虚拟场景4110,虚拟场景4110中包括第一虚拟对象4111,以及与第一虚拟对象4111进行指定互动活动的第二虚拟对象4112,响应于针对第一虚拟对象4111和第二虚拟对象4112的互动操作,终端设备410显示第一虚拟对象4111和第二虚拟对象4112的互动过程。终端设备410根据第一虚拟对象4111和第二虚拟对象4112的互动结果,生成互动结果触发指令,并向服务器420进行发送。
服务器420在接收到来自终端设备410的互动结果触发指令后,根据互动结果触发指令,确定互动结果对应的特效文本元素4121的文本内容,并将特效文本元素4121对应的元素渲染数据反馈至终端设备410,其中,元素渲染数据中包括特效文本元素4121的渲染子数据以及特效文本元素4121对应的掉落转换动画对应的动画子数据。
终端设备410接收到元素渲染数据后,根据特效文本元素4121的渲染子数据显示对应的特效文本元素4121,并根据动画子数据显示特效文本元素4121对应的转换掉落动画,其中,转换掉落动画实现为特效文本元素4121转换为指定道具掉落至虚拟场景4110中。
响应于第一虚拟对象4111针对指定道具的拾取操作,终端设备410显示第一虚拟对象4111拾取指定道具4122的动画过程。
服务器420可以用于为终端设备410中的目标应用程序(如游戏应用程序)的客户端提供后台服务。例如,服务器420可以是上述目标应用程序(如游戏应用程序)的后台服务器。值得注意的是,上述服务器420可以是独立的物理服务器,也可以是多个物理服务器构成的服务器集群或者分布式系统,还可以是提供云服务、云数据库、云计算、云函数、云存储、网络服务、云通信、中间件服务、域名服务、安全服务、内容分发网络(Content Delivery Network,CDN)、以及大数据和人工智能平台等基础云计算服务的云服务器。
在一些实施例中,上述服务器420还可以实现为区块链系统中的节点。
需要说明的是,本申请实施例在收集用户的相关数据之前以及在收集用户的相关数据的过程中,都可以显示提示界面、弹窗或输出语音提示信息,该提示界面、弹窗或语音提示信息用于提示用户当前正在搜集其相关数据,使得本申请仅仅在获取到用户对该提示界面或者弹窗发出的确认操作后,才开始执行获取用户相关数据的相关步骤,否则(即未获取到用户对该提示界面或者弹窗发出的确认操作时),结束获取用户相关数据的相关步骤,即不获取用户的相关数据。换句话说,本申请所采集的所有用户数据,处理严格根据相关国家法律法规的要求,获取个人信息主体的知情同意或单独同意都是在用户同意并授权的情况下进行采集的,并在法律法规及个人信息主体的授权范围内,开展后续数据使用及处理行为且相关用户数据的收集、使用和处理需要遵守相关国家和地区的相关法律法规和标准。例如,本申请中涉及到的虚拟场景、互动操作、拾取操作等都是在充分授权的情况下获取的。
请参考图5,其示出了本申请一个实施例提供的虚拟对象的互动方法流程图,在本申请实施例中,以该方法应用于如图4所示的终端设备410中为例进行说明,该方法包括如下步骤:
步骤510,显示处于虚拟场景中的第一虚拟对象和第二虚拟对象。
上述虚拟场景是指应用程序(如游戏应用程序)的客户端在终端设备上运行时显示(或提供)的场景,该虚拟场景是指营造出的供虚拟对象进行活动(如游戏对局)的场景,如可以是虚拟房屋、虚拟岛屿、虚拟天空、虚拟陆地等。该虚拟场景可以是对真实世界的仿真场景,也可以是半仿真半虚构的场景,还可以是纯虚构的场景,本申请实施例对此不加以限定。
虚拟对象可以是指用户账号在应用程序(如游戏应用程序)中控制的虚拟对象。以游戏应用程序为例,该虚拟对象可以是指用户账号在游戏应用程序中控制的虚拟角色。例如,上述第一虚拟对象可以是指客户端中当前登录的用户账号所控制的虚拟角色,第二虚拟对象可以由客户端控制,也可以由其他用户帐号控制,本申请实施例对此不加以限定。
示意性的,客户端在用户界面中显示虚拟场景,该虚拟场景中包括第一虚拟对象,第一虚拟对象可在虚拟场景中进行虚拟活动,该虚拟活动可以包括行走、跑步、跳跃、攀爬、释放技能、拾取道具、投掷道具等活动中的至少一种。
可选地,虚拟场景中还可以包括第二虚拟对象,该第二虚拟对象与第一虚拟对象之间存在敌对关系,或者该第二虚拟对象与第一虚拟对象之间存在队友关系,或者该第二虚拟对象与第一虚拟对象之间不存在任何关系,本申请实施例对此不加以限定。
可选地,第一虚拟对象或第二虚拟对象可实现为虚拟人物、虚拟物体、虚拟动物、虚拟建筑等,本申请实施例对此不加以限定。
步骤520,响应于互动操作,控制第一虚拟对象与第二虚拟对象在虚拟场景中进行互动活动。
互动操作可以是指使得虚拟对象之间进行交互的操作,该操作可由用户通过终端设备实现。在本申请实施例中,该互动操作可以是指当前终端设备的使用者针对第一虚拟对象的互动操作。客户端在接收到互动操作之后,可根据互动操作控制第一虚拟对象与第二虚拟对象在虚拟场景中进行互动活动。
可选地,客户端根据用户所触发的互动操作指令,来获取互动操作,如用户可通过触摸显示屏来生成针对虚拟对象的互动操作指令,用户也可以通过操作控制设备(如键盘、鼠标、游戏手柄等)来生成针对虚拟对象的互动操作指令,本申请实施例对此不作限定。示例性地,在具有第一虚拟对象和第二虚拟对象的情况下,上述互动操作指令可以包括第一用户针对第一虚拟对象所触发的互动操作指令和第二用户针对第二虚拟对象所触发的互动操作指令,以实现第一虚拟对象和第二虚拟对象之间的互动。
其中,互动活动可以是指虚拟对象之间需要产生交互的活动。示意性的,该互动活动可以实现为第一虚拟对象和第二虚拟对象之间进行虚拟对局(如游戏对局),或者,第一虚拟对象和第二虚拟对象共同完成指定任务等,本申请实施例对此不加以限定。虚拟对局可以是指虚拟对象之间进行竞争的对局。
可选地,在互动活动实现为第一虚拟对象和第二虚拟对象之间进行虚拟对局的情况下,互动操作可实现为第一虚拟对象向第二虚拟对象释放技能,或者使用虚拟道具向第二虚拟对象进行攻击。
在互动活动实现为第一虚拟对象和第二虚拟对象之间共同完成指定任务的情况下,互动操作可实现为第一虚拟对象向第二虚拟对象发送任务邀请,使第一虚拟对象和第二虚拟对象共同进行指定任务。
在一些实施例中,互动活动的活动内容是提前预设的;或者,用户可以自由设定互动活动的具体活动内容,本申请实施例对此不加以限定。
步骤530,在虚拟场景中显示特效文本元素,该特效文本元素与第一虚拟对象和第二虚拟对象之间的互动结果相对应。
互动结果是指上述互动活动的结果，例如在虚拟对局中，第一虚拟对象连续多次击中了第二虚拟对象。可选地，该互动结果可以是客户端实时获取的，也即互动结果对应的特效文本元素也是实时更新显示的。
特效文本元素是指在文字元素上施加特殊效果得到的视图元素,诸如文字填充特效(如纯色填充、渐变填充等)、描边文字特效(如文字叠加、霓虹灯特效等)、渐隐文字特效、动态文字特效等,也即可以根据被施加的特殊效果确定特效文本元素的显示方式。上述特效文本元素用于描述第一虚拟对象和第二虚拟对象之间的互动结果。示意性的,互动结果可用于确定特效文本元素对应的显示内容、显示方式、显示数量、显示位置、显示时长等至少一种。
其中,显示内容是指特效文本元素的文本内容,如:可以根据当前互动结果的文本内容,确定特效文本元素的文本内容;显示方式是指特效文本元素对应的元素显示方式,如:高亮显示、闪烁显示等;显示数量是指特效文本元素的元素个数,如:根据互动结果单次显示一个特效文本元素;显示位置是指特效文本元素在虚拟场景中进行显示时的位置,如:第一虚拟对象对应的指定位置(如头顶上方)、第二虚拟对象对应的指定位置(如头顶上方)等;显示时长是指特效文本元素的显示时间,如:单个特效文本元素在虚拟场景中的显示时长为3秒。
可选地,第一虚拟对象和第二虚拟对象之间的互动结果对应单个固定的特效文本元素;或者,该互动结果可对应多种不同类型的特效文本元素,本申请实施例对此不加以限定。
可选地,特效文本元素可实现为固定显示,也即,每次显示相同的特效文本元素;或者,特效文本元素的显示与互动结果对应,也即,不同的互动结果对应不同的特效文本元素。
步骤540,显示特效文本元素的转换掉落动画,该转换掉落动画是指特效文本元素转化为指定道具,并掉落至虚拟场景中的动画。
可选地,客户端基于特效文本元素生成转换掉落动画,并在用户界面中展示该转换掉落动画。在一些实施例中,当特效文本元素发生变化时,即可将特效文本元素转换为转换掉落动画,以开始显示。
示意性的，转换掉落动画可用于描述特效文本元素和指定道具之间的转化过程，以及指定道具掉落至虚拟场景中的过程，也即，当前虚拟场景中，指定道具的生成方式依托于特效文本元素。其中，指定道具可以是指任一虚拟道具，诸如攻击虚拟道具、防御虚拟道具、能量值获取道具、技能虚拟道具、增益虚拟道具(如回复生命值)等，本申请实施例对此不加以限定。
在一个示例中,转换掉落动画可以包括如下几种动画显示方式中的至少一种:
1.在特效文本元素的文本内容显示完整的情况下,客户端开始显示将特效文本元素转换为指定道具并掉落至虚拟场景中的动画;
2.预设一个时长阈值,在特效文本元素的显示时长达到时长阈值的情况下,客户端开始显示将特效文本元素转换为指定道具并掉落至虚拟场景中的动画;
3.在接收到针对特效文本元素的转换触发操作的情况下,客户端才显示特效文本元素的转换掉落动画。转换触发操作用于触发显示特效文本元素的转换掉落动画,如:在接收到针对特效文本元素的转换触发操作的情况下,客户端开始基于特效文本元素生成对应的转换掉落动画,并显示该转换掉落动画。
可选地,特效文本元素转换为指定道具可以是指将特效文本元素直接替换为指定道具,也可以是指取消显示特效文本元素,在设定位置处增加显示指定道具,如虚拟场景顶部、虚拟场景中间位置等,还可以取消显示特效文本元素,显示指定道具进入虚拟场景的动画,如虚拟场景破裂,指定道具从裂痕中进入虚拟场景,本申请实施例对此不作限定。
值得注意的是,上述关于转换掉落动画的动画显示方式仅为示意性的举例,本申请实施例对此不加以限定。
在一个示例中,指定道具的转换方式可以包括如下几种表现形式中的至少一种:
1.客户端根据特效文本元素确定指定数量的指定道具,也即,不同的特效文本元素对应转化成不同数量的指定道具;
2.客户端根据特效文本元素确定指定类型的指定道具,也即,不同的特效文本元素对应转 化成不同类型的指定道具;
3.客户端根据特效文本元素确定指定道具的转化效果,也即,不同的特效文本元素对应的转化形式不同,如:特效文本元素A以逐个字体依次转化为对应的指定道具,并掉落至虚拟场景中,以作为转换掉落动画。
值得注意的是,上述关于指定道具的道具表现形式仅为示意性的举例,本申请实施例对此不加以限定。
在一些实施例中,特效文本元素转化成指定道具的过程可以实现为:特效文本元素依次对应转换为指定数量的指定道具,指定道具逐个显示,并依次掉落至虚拟场景中,也即,指定道具的转化过程是逐个转化并显示的;或者,客户端将特效文本元素同时转化成预设数量的指定道具,并使预设数量的指定道具同时掉落至虚拟场景中,也即,特效文本元素转化成指定道具的转化过程是一次性完成的,预设数量的指定道具是同时显示的,本申请实施例对此不加以限定。
在一个示例中,指定道具掉落至虚拟场景中的掉落方式可以包括如下方式中的至少一种:
1.指定道具生成后以自由落体的方式掉落至虚拟场景中;
2.指定道具生成后以特效文本元素生成转换道具的位置为起点,向四周呈放射状掉落至虚拟场景中;
3.指定道具预设有固定掉落位置,指定道具生成后朝向固定掉落位置的方向掉落,最终掉落在固定掉落位置。
值得注意的是,上述关于指定道具的掉落方式仅为示意性的举例,本申请实施例对此不加以限定。
可选地,在转换掉落动画实现为生成多个指定道具的情况下,多个指定道具掉落至虚拟场景中的同一个固定位置,或者,多个指定道具掉落至虚拟场景中的不同位置,本申请实施例对此不加以限定。
在一个示例中,在指定道具掉落至虚拟场景中之后,用户可控制虚拟对象对指定道具进行拾取。示意性的,客户端响应于针对指定道具的拾取操作,控制第一虚拟对象在虚拟场景中对指定道具进行拾取。
可选地,在第一虚拟对象主动完成对第二虚拟对象的交互的情况下(如第一虚拟对象击中第二虚拟对象),该指定道具对第一虚拟对象产生指定增益效果,第一虚拟对象通过拾取指定道具可获取对应的指定增益效果。如此可以激发虚拟对象之间进行交互,从而提高虚拟对象之间的互动性。指定增益效果可以根据实际使用需求进行设置与调整,诸如生命值回复、能量值增加、攻击伤害增加等,本申请实施例对此不加以限定。
示意性的,指定道具可实现为可使用的道具,第一虚拟对象拾取指定道具之后,可使用该指定道具与第二虚拟对象进行互动活动。可选地,指定道具也可实现为特效道具,第一虚拟对象拾取指定道具之后,客户端显示指定道具对应的特效效果。
综上所述,本申请实施例提供的虚拟对象的互动方法,通过在第一虚拟对象和第二虚拟对象进行互动活动的过程中,根据互动结果显示特效文本元素,并以转换掉落动画的形式将特效文本元素转换为指定道具,使得互动结果、以及互动结果的反哺收益可视化,从而提高了用户交互体验,以及增加了虚拟对象之间的互动方式的多样性。此外,通过将特效文本元素转换成指定道具提供给虚拟对象的方式,能够提高界面显示信息的传递效率。另外,通过将互动结果转换为指定道具,有利于激发虚拟对象之间进行交互,从而提高虚拟对象之间的互动性,这也有助于缩短交互活动(如游戏对局)的时间,进而降低游戏对局对终端设备和服务器的处理资源的占用需求。
在一些实施例中,在互动活动的互动结果实现为对应多种不同的特效文本元素的情况下,互动活动可以包括多个活动阶段,每个活动阶段对应存在阶段性互动结果,单个阶段性互动 结果对应单个特效文本元素。示意性的,请参考图6,其示出了本申请另一个示例性实施例提供的虚拟对象的互动方法的流程图,也即,上述实施例中步骤540还包括步骤541,步骤530中还包括步骤531,如图6所示,该方法包括如下步骤:
步骤510,显示处于虚拟场景中的第一虚拟对象和第二虚拟对象。
其中,第一虚拟对象是当前终端设备主控的虚拟对象。可选地,第一虚拟对象和第二虚拟对象之间存在敌对关系;或者,第一虚拟对象和第二虚拟对象之间存在队友关系。
可选地,第一虚拟对象和第二虚拟对象属于同一类型的虚拟对象,如:第一虚拟对象和第二虚拟对象均为虚拟人物;或者,第一虚拟对象和第二虚拟对象属于不同类型的虚拟对象,如:第一虚拟对象实现为虚拟人物,第二虚拟对象实现为虚拟野兽或者虚拟物体,本申请实施例对此不加以限定。
步骤520,响应于互动操作,控制第一虚拟对象与第二虚拟对象在虚拟场景中进行互动活动。
可选地,互动操作的操作方式可以包括如下操作方式中的至少一种:
1.互动操作实现为通过当前终端设备控制第一虚拟对象在虚拟场景中进行活动:
其中,在互动活动实现为第一虚拟对象和第二虚拟对象之间进行虚拟对局的情况下,互动操作可实现为响应于针对第一虚拟对象的攻击触发操作,客户端控制第一虚拟对象向第二虚拟对象施展格斗或者释放技能等方式进行攻击操作;或者,在互动活动实现为第一虚拟对象和第二虚拟对象共同完成指定任务的情况下,互动操作可实现为指示第一虚拟对象向第二虚拟对象发送任务邀请。
2.用户界面中显示有互动活动列表,互动操作实现为在互动活动列表中选中指定互动活动,客户端显示第一虚拟对象和第二虚拟对象执行该指定互动活动的动画。
值得注意的是,当前互动操作的操作方式仅为示意性的举例,本申请实施例对此不加以限定。
步骤531,在虚拟场景中与第二虚拟对象对应的指定位置处,显示与阶段性互动结果对应的特效文本元素。
其中,阶段性互动结果对应的特效文本元素的文本内容与阶段性互动结果相对应,该阶段性互动结果是指活动阶段下的互动结果,如当前活动阶段下的互动结果。可选地,还可以在虚拟场景中与第一虚拟对象对应的指定位置处,显示与阶段性互动结果对应的特效文本元素。如此可以使得用户的目光始终集中在虚拟对象上,更便于提高用户对互动活动的专注度,从而提高用户的互动体验。
示意性的,互动活动中包括多个活动阶段,每个活动阶段的互动结果作为阶段性互动结果,也即,阶段性互动结果用于表示第一虚拟对象和第二虚拟对象在进行互动活动的过程中,当前活动阶段对应的互动结果,如:在互动活动实现为第一虚拟对象和第二虚拟对象在当前回合内进行虚拟对局的情况下,第一虚拟对象每次攻击第二虚拟对象时的过程对应为一个活动阶段,因此当前回合内第二虚拟对象的单次命中结果即为阶段性互动结果。
在一些示例中,不同的阶段性互动结果对应不同的特效文本元素的文本内容。
示意性的,在第m个阶段性互动结果对应的特效文本元素的显示过程中,产生第m+1个阶段性互动结果的情况下,第m个阶段性互动结果对应的特效文本元素取消显示,并显示第m+1个阶段性互动结果对应的特效文本元素,两个特效文本元素先后显示的位置相同。其中,第m+1个阶段性互动结果对应的特效文本元素的显示方式包括叠加显示,替换显示等显示方式中的至少一种。
可选地,在第m个阶段性互动结果对应的特效文本元素的显示过程中,产生第m+1个阶段性互动结果的情况下,第m个阶段性互动结果对应的特效文本元素不取消显示,并显示第m+1个阶段性互动结果对应的特效文本元素,两个特效文本元素各自显示的位置不同。其中,m为正整数。
下面,以两种不同的互动活动为例对特效文本元素的显示方式进行说明。
第一种,互动活动实现为虚拟对局。
本申请实施例中,在第一虚拟对象和第二虚拟对象进行虚拟对局的过程中,客户端通过接收互动操作,并根据互动操作来控制第一虚拟对象向第二虚拟对象释放技能进行攻击,在当前回合中,若第一虚拟对象释放的技能命中了第二虚拟对象,客户端则根据命中结果,显示与命中结果对应的特效文本元素。
示意性的,请参考图7,其示出了本申请一个示例性实施例提供的特效文本元素内容显示方法的示意图,如图7所示,用户界面显示虚拟场景700,在当前回合内,在第一虚拟对象710向第二虚拟对象720释放技能的过程中,若第二虚拟对象720第一次被命中,则在第一虚拟对象710的上方显示特效文本元素730“单押”,以用于表示当前回合内实现了第一虚拟对象710的第一次技能命中。
在一种可实现的情况下,若当前回合内,第一虚拟对象710第一次命中第二虚拟对象720后,再次向第二虚拟对象720释放技能,且该技能也命中了第二虚拟对象720,也即,当前回合内第一虚拟对象710通过释放技能连续两次命中了第二虚拟对象720,则当前的特效文本元素730“单押”替换显示为特效文本元素740“双押”。本申请实施例中,不同的阶段性互动结果对应的特效文本元素的文本内容不同,如此可以丰富交互的显示方式,使得用户具有攻击成功的成就感,从而提高了用户交互体验。
在另一种可实现的情况下,当前回合内,第一虚拟对象710第一次命中第二虚拟对象720后,又向第二虚拟对象720先后释放了两次技能。若在释放的两次技能中,第二次技能才再次命中第二虚拟对象720,也即,当前回合中,第一虚拟对象710两次技能命中第二虚拟对象720(但并不是连续命中),则在第二次技能释放完毕并命中第二虚拟对象720时,当前特效文本元素730“单押”替换显示为特效文本元素740“双押”。
值得注意的是,上述两种可实现的情况为并列的两种情况,可选择任意一种进行特效文本元素的显示,对此不加以限定。
第二种,互动活动实现为共同完成指定任务。
本申请实施例中，指定任务包括多个阶段性任务，第一虚拟对象和第二虚拟对象实现为队友关系，在第一虚拟对象和第二虚拟对象共同完成指定任务的过程中，在完成第一个阶段性任务的情况下，客户端显示第一个阶段性任务对应的特效文本元素，在完成第二个阶段性任务的情况下，客户端显示第二个阶段性任务对应的特效文本元素。也即，特效文本元素用于表示当前阶段性任务的完成情况。
以虚拟场景为游戏闯关场景为例,指定任务为第一虚拟对象和第二虚拟对象共同击败多种不同类型的虚拟怪物。示意性的,请参考图8,其示出了本申请一个示例性实施例提供的特效文本元素内容显示方法的示意图,如图8所示,当前虚拟场景800中包括第一虚拟对象810和第二虚拟对象820,指定任务实现为第一虚拟对象810和第二虚拟对象820共同攻击第一对象830和第二对象840,在第一虚拟对象810和第二虚拟对象820中任意一个击败第一对象830的情况下,在第一虚拟对象810或第二虚拟对象820的上方显示特效文本元素850“怪物1被成功击败!”,以用于表明第一虚拟对象810和第二虚拟对象820击败第一对象830的阶段性任务已完成。在第一虚拟对象810和第二虚拟对象820中任意一个击败第二对象840的情况下,在第一虚拟对象810或第二虚拟对象820的上方显示特效文本元素860“怪物2被成功击败!”,以用于表明第一虚拟对象810和第二虚拟对象820击败第二对象840的阶段性任务已完成。其中,特效文本元素的文本内容与击败的对象对应,如特效文本元素的文本内容基于击败的对象的名称来构建。
步骤541,基于指定道具的指定数量显示特效文本元素的转换掉落动画。
其中,转换掉落动画是指特效文本元素转化为指定数量的指定道具,并掉落至虚拟场景中的动画。该指定数量与特效文本元素的文本内容相对应。指定数量是指特效文本元素转换 为指定道具时,指定道具对应的数量。
在一些实施例中,指定道具的指定数量与特效文本元素的文本内容相对应,而特效文本元素的文本内容与阶段性互动结果对应,也即,指定数量与获取阶段性互动结果相对应。其中,不同的阶段性互动结果对应不同文本内容的特效文本元素,因而每个特效文本元素转换的指定道具对应的指定数量也不同。如此可以丰富转换掉落动画的显示方式,激发用户获取不同转换掉落动画的兴趣,从而有利于提高了用户粘度。
示意性的,在游戏对局中,指定数量与连续命中次数呈正相关关系。例如,请参考图9,其示出了本申请一个示例性实施例提供的指定道具生成过程的示意图,如图9所示,用户界面显示虚拟场景900,当前回合中,第一虚拟对象910对第二虚拟对象920进行攻击的过程中,在第二虚拟对象920第一次被命中的情况下,客户端显示特效文本元素930“单押”,其中,第一次命中结果作为当前回合的阶段性互动结果,第一次命中结果将实现为使特效文本元素930“单押”转化为1个指定道具940,也即客户端显示特效文本元素930“单押”转化为1个指定道具940,并掉落地至虚拟场景中的转换掉落动画。
请参考图10,其示出了本申请另一个示例性实施例提供的指定道具生成过程的示意图,如图10所示,用户界面显示虚拟场景1000,当前回合中,第一虚拟对象1010对第二虚拟对象1020进行攻击的过程中,在第二虚拟对象1020连续两次被命中的情况下(第一次命中过程未在图10中进行显示,第一次命中过程可参考图9),客户端显示特效文本元素1030“双押”,其中,连续两次命中结果作为当前回合的阶段性互动结果,第二次命中结果将实现为使特效文本元素1030“双押”转换为2个指定道具1040,也即客户端显示特效文本元素1030“双押”转换为2个指定道具1040,并掉落地至虚拟场景中的转换掉落动画。此外,图10中还包括第一虚拟对象1010第一次命中第二虚拟对象1020后,所显示的特效文本元素“单押”转化并掉落的1个指定道具1050。
在本申请实施例中,每显示一个特效文本元素,都对应显示该特效文本元素对应的转换掉落动画,以及在虚拟环境中掉落指定数量的指定道具。其中,在客户端根据第k次阶段性互动结果显示特效文本元素的情况下,第k-1次阶段性互动结果对应的指定道具保持显示或者取消显示;或者,在客户端根据第k次阶段性互动结果显示特效文本元素的情况下,再将第k-1次阶段性互动结果对应的特效文本元素转化为指定道具,本申请实施例对此不作限定,其中,k为正整数。
可选地,特效文本元素的转换掉落动画可以是指将二维的特效文本元素转化为二维的指定道具并掉落的动画;或者,特效文本元素的转换掉落动画可以是指将二维的特效文本元素转化为三维的指定道具并掉落的动画。
在一个示例中,当转换掉落动画实现为将二维的特效文本元素转化为指定道具时,其可以包括如下内容:客户端显示特效文本元素的收缩消失动画,该收缩消失动画是指特效文本元素在与第二虚拟对象对应的指定位置处缩小后取消显示的动画;获取指定位置在虚拟场景的世界坐标系中的第一坐标,以作为指定道具的起始坐标;获取世界坐标系中与第一坐标对应的第二坐标,以作为指定道具的落地坐标;基于第一坐标和第二坐标获取指定道具的掉落路径数据;根据掉落路径数据显示指定道具掉落的转换掉落动画。
在本申请实施例中,在显示特效文本元素的转换掉落动画的过程中,首先显示特效文本元素的收缩消失动画,再显示转换掉落动画。当特效文本元素开始缩小时,即开始显示收缩消失动画。第二坐标与第一坐标不同,第二坐标可以是指位于虚拟场景中的虚拟地面上的某个坐标,如靠近第一虚拟对象或第二虚拟对象的坐标。
可选地,第一坐标可实现为一个二维坐标;或者,第一坐标可实现为一个三维坐标,本申请实施例对此不做限定。第一坐标可用于实现为指定道具掉落的起始坐标,也即,在第一坐标处,特效文本元素转化为指定道具并开始掉落,第二坐标实现为指定道具掉落的终止位置。
示意性的,客户端在获取虚拟场景对应的世界坐标系中与第一坐标对应的第二坐标之后,将其确定为指定道具最终掉落在虚拟场景中的落地位置。客户端根据第一坐标和第二坐标得到指定道具对应的掉落路径数据,以用于描述指定道具掉落至虚拟场景中的掉落路径。
在一个示例中,在获取掉落路径数据后,客户端还可以确定指定道具对应的纹理素材集,该纹理素材集中包括多种指定道具对应的纹理素材,纹理素材用于描述通过摄像机从不同角度对指定道具进行拍摄后得到的素材图像。
客户端可以根据指定道具在掉落路径数据中对应的观察视角,从纹理素材集中获取与观察视角对应的纹理素材图像,其中,观察视角指当前终端设备对应的第一人称视角或者第三人称视角,根据不同的观察视角,可得到指定道具不同的纹理素材图像,如:观察视角为西北45度,则在纹理素材集中得到与西北45度角对应的指定道具的纹理素材图像。客户端沿掉落路径数据对应的掉落轨迹显示指定道具的纹理素材图像。也即,客户端基于指定道具的观察视角,从纹理素材集中获取与观察视角对应的纹理素材图像,并沿掉落路径数据对应的掉落轨迹显示纹理素材图像,以作为指定道具掉落的转换掉落动画。
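As an illustration of the drop-path step described above, the following is a minimal sketch (not the patent's implementation) that treats the first coordinate (start) and second coordinate (landing) as (x, y, z) tuples, interpolates a simple falling arc between them as the drop path data, and picks a pre-rendered texture material image by the camera's viewing angle; all function and field names are assumptions.

```python
import math

def build_drop_path(start, landing, steps=20, arc_height=1.0):
    """Interpolate a simple arc from `start` to `landing` (both (x, y, z) tuples)."""
    path = []
    for k in range(steps + 1):
        t = k / steps
        x = start[0] + (landing[0] - start[0]) * t
        y = start[1] + (landing[1] - start[1]) * t + arc_height * math.sin(math.pi * t)
        z = start[2] + (landing[2] - start[2]) * t
        path.append((x, y, z))
    return path

def pick_texture(texture_set, view_angle_deg):
    """Pick the pre-rendered texture whose capture angle is closest to the camera's."""
    return min(texture_set, key=lambda tex: abs(tex["angle"] - view_angle_deg))

if __name__ == "__main__":
    textures = [{"angle": a, "image": f"prop_{a}.png"} for a in (0, 45, 90, 135)]
    path = build_drop_path(start=(2.0, 3.0, 0.0), landing=(2.5, 0.0, 0.0))
    tex = pick_texture(textures, view_angle_deg=50)
    print(len(path), "trajectory points, drawing", tex["image"])
```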
在一个示例中,在指定道具掉落至虚拟场景中之后,用户可控制虚拟对象对指定道具进行拾取。示意性的,客户端响应于针对指定道具的拾取操作,控制第一虚拟对象在虚拟场景中对指定道具进行拾取。
可选地,拾取操作的操作方式可以包括如下方式中的至少一种:
1.拾取操作可实现为控制第一虚拟对象在虚拟场景中拾取至少一个指定道具;
2.拾取操作可实现为对虚拟场景中掉落的指定道具进行触发操作,显示第一虚拟对象自动拾取触发的指定道具,将触发操作作为拾取操作。
值得注意的是,上述关于拾取操作的操作方式仅为示意性的举例,本申请实施例对此不加以限定。
示意性的,请参考图11,其示出了本申请一个示例性实施例提供的指定道具拾取过程的示意图,如图11所示,用户界面显示虚拟场景1100,虚拟场景1100中包括转换掉落动画对应的多个指定道具1110,响应于针对指定道具1110的拾取操作,客户端控制第一虚拟对象1120对其中一个指定道具1110进行拾取。
综上所述,本申请实施例提供的虚拟对象的互动方法,通过在第一虚拟对象和第二虚拟对象进行互动活动的过程中,根据互动结果显示特效文本元素,并以转换掉落动画的形式将特效文本元素转换为指定道具,使得互动结果、以及互动结果的反哺收益可视化,从而提高了用户交互体验,以及增加了虚拟对象之间的互动方式的多样性。此外,通过将特效文本元素转换成指定道具提供给虚拟对象的方式,能够提高界面显示信息的传递效率。另外,通过将互动结果转换为指定道具,有利于激发虚拟对象之间进行交互,从而提高虚拟对象之间的互动性,这也有助于缩短交互活动(如游戏对局)的时间,进而降低游戏对局对终端设备和服务器的处理资源的占用需求。
在本申请实施例中,通过将特效文本元素转换为指定数量的指定道具,且指定数量与特效文本元素的文本内容对应的方式,可以使得用户在获取互动结果后,感知到指定道具的数量,从而有利于提高用户的成就感和体验感。
在本申请实施例中,针对互动任务中包括多个活动阶段的情况,根据每个活动阶段对应的阶段性互动结果显示对应的特效文本元素,通过采用将特效文本元素的文本内容与阶段性互动结果对应的方式,使得用户能够提高进行互动活动的参与积极性。
在本申请实施例中,通过采用将阶段性互动结果与指定道具的指定数量进行对应的方式,实现了指定数量的增长与阶段性互动结果相关的互动方式,丰富了虚拟对象之间互动方式的多样性。
在一些实施例中,在第一虚拟对象拾取指定道具后,客户端显示第一虚拟对象对应的增 益动画,也即,指定道具对第一虚拟对象产生指定增益效果。示意性的,请参考图12,其示出了本申请另一个示例性实施例提供的虚拟对象的互动方法的流程图,如图12所示,该方法可以包括如下步骤:
步骤1210,显示处于虚拟场景中的第一虚拟对象和第二虚拟对象。
步骤1220,响应于互动操作,控制第一虚拟对象与第二虚拟对象在虚拟场景中进行互动活动。
步骤1210与上述步骤510介绍相同,步骤1220与上述步骤520介绍相同,本申请实施例未说明的内容,可以参考上述实施例,此处不再赘述。
步骤1230,在虚拟场景中显示特效文本元素,该特效文本元素与第一虚拟对象和第二虚拟对象之间的互动结果相对应。
可选地,互动活动包括多个活动阶段,第i个活动阶段对应第i个特效文本元素,i为正整数。
示意性的,互动活动中包括多个不同的活动阶段,每个活动阶段之间存在递进关系,其中,若互动活动实现为第一虚拟对象和第二虚拟对象进行单个回合的虚拟对局,则多个活动阶段对应为第一虚拟对象攻击第二虚拟对象的过程中第二虚拟对象被命中的次数,如:在当前回合中,在第二虚拟对象第一次被命中的情况下,将当前回合内双方开始进行虚拟对局到第一次被命中之间的互动过程作为第一个活动阶段,在第二虚拟对象第二次被命中的情况下,将当前回合内第一次被命中和第二次被命中之间的互动过程作为第二个活动阶段,因此,第二个活动阶段与第一个活动阶段之间呈命中次数的递进关系。
在本申请实施例中,每个活动阶段都对应一个阶段性互动结果,如:上述第一个活动阶段对应的第一阶段性互动结果为“第二虚拟对象第一次被命中”,上述第二个活动阶段对应的第二阶段性互动结果为“第二虚拟对象第二次被命中”。
在本申请实施例中,客户端可根据阶段性互动结果显示特效文本元素,其中,特效文本元素的文本内容与阶段性互动结果相对应,如:上述第一阶段性互动结果对应显示的特效文本元素实现为“单押”(即连续命中一次),上述第二阶段性互动结果对应显示的特效文本元素实现为“双押”(即连续命中二次),特效文本元素的文本内容用于描述对应的阶段性互动结果。
步骤1240,响应于特效文本元素的显示时长达到指定时长阈值,显示特效文本元素的转换掉落动画。
特效文本元素的显示时长是指该特效文本元素在虚拟环境中的显示时间长度。
可选地,指定时长阈值可以是预设的固定值;或者,用户可以自由调整指定时长阈值,本申请实施例对此不加以限定。
可选地,转换掉落动画的显示方式包括如下几种方式中的至少一种:
1.互动活动中包括多个活动阶段,每个活动阶段对应单个特效文本元素,且每个特效文本元素之间独立显示,如:在第一个活动阶段对应的第一特效文本元素的显示过程中(未达到指定时长阈值),响应于第二个活动阶段结束,客户端根据第二个活动阶段对应的阶段性互动结果,显示第二特效文本元素,此时,第一特效文本元素和第二特效文本元素均在虚拟环境中各自独立显示,且第二特效文本元素显示后并不会影响第一特效文本元素的显示时长,在第一特效文本元素的显示时长达到其对应的指定时长阈值的情况下,客户端显示第一特效文本元素的掉落转化掉落动画,第二特效文本元素对应的转换掉落动画的显示方式同上,因此,当前虚拟环境中将存在第一特效文本元素转化掉落的指定道具,以及第二特效文本元素转化掉落的指定道具;
2.互动活动中包括多个活动阶段,每个活动阶段对应单个特效文本元素,每个特效文本元素之间实现为替换显示,如:在第一个活动阶段对应的第一特效文本元素在的显示过程中(未达到指定时长阈值),响应于第二个活动阶段产生阶段性活动结果,客户端将第一特效文本元 素替换显示为第二个活动阶段对应的第二特效文本元素,并取消显示第一特效文本元素。
其中,在第二特效文本元素的显示时长达到指定时长阈值时,且没有接收到第三个活动阶段对应的阶段性互动结果的情况下,客户端显示第二特效文本元素的转换掉落动画,因此,虚拟环境中只存在第二特效文本转化并掉落的指定道具,也即,客户端响应于第i个特效文本元素的显示时长达到指定时长阈值,且在指定时长阈值范围内未接收到第i+1个活动阶段的阶段性互动结果,显示第i个特效文本元素的转换掉落动画。根据活动阶段之间的递进关系,依次更新显示对应的转换掉落动画,有利于激发用户去触发不同的活动阶段,从而提高用户粘度。
可选地,在上述第二种显示方式中,上述指定时长阈值是针对单个特效文本元素而言的,也即,当第一特效文本元素替换显示为第二特效文本元素时,第二特效文本元素当前的显示时长重新开始计算;或者,上述指定时长阈值是针对整个互动活动而言的,从第一特效文本元素开始显示进行计算,若存在第二特效文本元素替换显示第一特效文本元素的情况,且不存在第三特效文本元素替换显示第二特效文本元素的情况,则指定时长阈值指第一特效文本元素和第二特效文本元素显示时长之和,本申请实施例对此不加以限定。
值得注意的是,上述关于转换掉落动画的显示方式仅为示意性的举例,本申请实施例对此不加以限定。
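A hedged sketch of the timing rule described above: the i-th hit label stays on screen for at most a specified duration threshold; if the staged result of the (i+1)-th activity stage arrives first, the label is replaced, otherwise it converts into the drop animation. The threshold value and field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class LabelState:
    index: int          # i (1 = "single", 2 = "double", ...)
    shown_at: float     # timestamp when this label started displaying

THRESHOLD = 3.0  # assumed display-duration threshold, in seconds

def on_staged_result(state, now):
    """A new staged interaction result arrives: show the first label or replace it with the next one."""
    if state is None:
        return LabelState(index=1, shown_at=now)
    return LabelState(index=state.index + 1, shown_at=now)

def on_tick(state, now, convert_to_drop):
    """Called every frame; converts the label once its display duration expires."""
    if state is not None and now - state.shown_at >= THRESHOLD:
        convert_to_drop(state.index)   # e.g. drop `index` props
        return None                    # the label is no longer displayed
    return state

if __name__ == "__main__":
    dropped = []
    state = on_staged_result(None, now=0.0)        # "single" label
    state = on_staged_result(state, now=1.0)       # replaced by "double" label
    state = on_tick(state, now=4.5, convert_to_drop=dropped.append)
    print(dropped)   # [2] -> the "double" label converted into 2 props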
可选地,每个活动阶段对应的特效文本元素转化的指定道具属于相同类型的道具;或者,每个活动阶段对应的特效文本元素转化的指定道具属于不同类型的道具,本申请实施例对此不加以限定。
步骤1250,响应于拾取操作,控制第一虚拟对象在虚拟场景中对指定道具进行拾取。
上述步骤1250与上述实施例介绍相同,此处不再赘述。
步骤1260,显示第一虚拟对象对应的增益动画,该增益动画是指第一虚拟对象拾取指定道具后,产生与指定道具对应的指定增益效果的动画。
其中,指定增益效果与第一虚拟对象所拾取的指定道具的数量相关;或者,指定增益效果与第一虚拟对象所拾取的指定道具的类型相关。
示意性的,当客户端为第一虚拟对象开始显示指定增益效果时,即为开始显示对应的增益动画。
可选地,指定增益效果可用于增加第一虚拟对象的属性效果,其中,属性效果包括第一虚拟对象的生命值、能量值、法力值、防御值、攻击能力、角色等级等中的至少一种。第一虚拟对象在拾取到指定道具后,可对指定道具进行使用,如:当第一虚拟对象拾取到指定道具后,指定道具转化为互动道具供第一虚拟对象进行使用。
在一些示例中,指定增益效果预设有效果时长阈值,当指定增益效果达到效果时长阈值后,指定增益效果消失;或者,指定增益效果实现为持续性增益效果,也即,指定增益效果不会自动消失,本申请实施例对此不加以限定。
可选地,拾取操作用于使第一虚拟对象单次拾取一个指定道具;或者,拾取操作用于使第一虚拟对象单次拾取多个指定道具。可选地,指定道具在虚拟场景中预设有显示时长阈值,当指定道具在虚拟场景中的显示时长达到显示时长阈值,指定道具将自动取消显示,使第一虚拟对象无法进行拾取;或者,指定道具的指定增益效果在虚拟场景中预设有效果阈值,当指定道具在虚拟场景中的显示时长达到效果阈值,指定道具不会取消显示,但指定道具将不再具备指定增益效果,或者指定增益效果的效果类型发生更改,本申请实施例对此不加以限定。
在一些示例中,单个指定道具对应单个指定增益效果,也即,第一虚拟对象拾取到该指定道具后,产生对应的指定增益效果;或者,单个指定道具对应多种候选增益效果,第一虚拟对象拾取到该指定道具后,通过在多个候选增益效果中选择至少一种增益效果;或者,在第一虚拟对象连续拾取到至少两个指定道具后,这两个指定道具将对第一虚拟对象产生组合 增益效果,也即,两个指定道具分别对第一虚拟对象具有各自的指定增益效果,但两个指定道具都被拾取后,将产生组合增益效果,且组合增益效果与两个指定道具各自对应的指定增益效果不同,本申请实施例对此不加以限定。
可选地,增益动画与第一虚拟对象所拾取的指定道具对应的指定增益效果有关。示意性的,在第一虚拟对象每拾取一个指定道具时,客户端显示所拾取的该指定道具对应的增益动画;或者,在第一虚拟对象连续拾取多个指定道具后,客户端再显示多个指定道具分别对应的增益动画,本申请实施例对此不加以限定。
在一个示例中,指定增益效果与指定道具的数量相关的表现形式包括如下几种形式中的至少一种:
1.若虚拟场景中包括多个指定道具,且多个指定道具对应同种类型的增益效果,则当第一虚拟对象拾取的指定道具数量越多,则拾取的指定道具产生的指定增益效果越好,如:虚拟场景中包括道具a(增益效果为生命值+10)、道具b(增益效果为生命值+5)和道具c(增益效果为生命值+15),若第一虚拟对象拾取道具a和道具b,则产生的指定增益效果为生命值+15,若第一虚拟对象拾取道具a、道具b和道具c,则产生的指定增益效果为生命值+30;
2.指定增益效果的增益效果与指定道具的数量呈对应关系,也即,若第一虚拟对象拾取到的指定道具的数量达到数量阈值,则对第一虚拟对象产生与数量阈值对应的增益效果,如:预设拾取2个指定道具可增加法力值5点,拾取15个指定道具可增加法力值15点,当虚拟场景中包括20个指定道具时,在第一虚拟对象拾取了3个指定道具的情况下,对第一虚拟对象产生增加法力值5点的指定增益效果,在第一虚拟对象拾取的指定道具的数量达到15个的情况下,对第一虚拟对象产生增加法力值15点的指定增益效果(在法力值增加5点的基础上再额外增加10点法力值);
3.指定增益效果的产生时间与拾取的指定道具的数量相关,根据不同数量的指定道具,预设各自对应的增益效果产生时间,也即,当第一虚拟对象连续拾取的指定道具数量越多,产生对应的指定增益效果的速度越快,如:当第一虚拟对象连续拾取3个指定道具,则通过0.5秒的时间增加30点生命值,当第一虚拟对象连续拾取5个指定道具,则通过0.2秒的时间增加30点生命值。
值得注意的是,上述关于指定增益效果与数量相关的表现形式仅为示意性的举例,本申请实施例对此不加以限定。
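A minimal sketch of form 2 above, where the gain is looked up from pickup-count thresholds; the threshold table reproduces the mana-point example in the text (2 props → +5 mana, 15 props → +15 mana) and is illustrative only.

```python
GAIN_TABLE = [(15, 15), (2, 5)]   # (picked-up count threshold, mana bonus), descending

def mana_gain(picked_count):
    """Return the mana bonus for the number of designated props picked up so far."""
    for threshold, bonus in GAIN_TABLE:
        if picked_count >= threshold:
            return bonus
    return 0

if __name__ == "__main__":
    for count in (1, 3, 15, 20):
        print(count, "props ->", mana_gain(count), "mana")
```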
在一些示例中,指定增益效果与指定道具的类型相关的表现形式包括如下几种形式中的至少一种:
1.若虚拟场景中包括多个指定道具,且多个指定道具各自对应不同类型的增益效果,则第一虚拟对象通过拾取不同类型的指定道具,产生不同类型的指定增益效果,如:虚拟场景中包括道具A(增益效果为魔法值+10)和道具B(增益效果为防御值+5),第一虚拟对象拾取道具A后,可增加魔法值10点,或者第一虚拟对象拾取道具B后,可增加防御值5点;
2.虚拟场景中包括多个指定道具,且多个指定道具各自对应不同类型的增益效果,但预设存在至少两个指定道具之间存在合成关系,也即,当第一虚拟对象连续拾取该至少两个指定道具后,将产生该至少两个指定道具对应的合成增益效果,如:虚拟场景中包括道具1(增益效果为武力值+10)、道具2(增益效果为防御值+10)和道具3(增益效果为生命值+10),当第一虚拟对象连续拾取道具1、道具2和道具3后,则产生的指定增益效果为第一虚拟对象的角色等级上升1级,但是如果第一虚拟对象只拾取道具1,则产生的指定增益效果仅为武力值+10。
值得注意的是,上述关于指定增益效果与类型相关的表现形式仅为示意性的举例,本申请实施例对此不加以限定。
可选地,下面提供两种不同的增益动画的显示方式。
第一种,增益动画实现为在第一虚拟对象的周侧范围高亮显示指定增益效果。
在本申请实施例中,指定道具对应的指定增益效果实现为增加第一虚拟对象的属性值(如:生命值、武力值、防御值等中的至少一种),在第一虚拟对象拾取到指定道具后,在第一虚拟对象的周侧范围进行高亮显示指定增益效果,以用于表现当前第一虚拟对象通过拾取指定道具所产生的指定增益效果,这个过程作为增益动画。示意性的,请参考图13,其示出了本申请一个示例性实施例提供的增益效果显示方式的示意图,如图13(a)所示,用户界面当前显示虚拟场景1300,若第一虚拟对象1310在虚拟场景1300中拾取指定道具1320,且指定道具1320对应的指定增益效果为5秒内抵抗所有攻击,客户端则在虚拟场景1300中显示第一虚拟对象1310的增益动画,其中,增益动画实现为在第一虚拟对象1310的周侧范围高亮显示防御效果(图13(a)中以虚线表示防御效果),且持续时间为5秒。
第二种,增益动画实现为在第一虚拟对象的设定位置处显示指定增益效果的文本内容。
在本申请实施例中,在第一虚拟对象拾取到指定道具后,且指定道具对应的指定增益道具效果实现为增加第一虚拟对象的属性值,客户端则在第一虚拟对象的设定位置处显示增加的属性值对应的文本内容,以作为增益动画,如图13(b)所示,用户界面当前显示虚拟场景1300,若第一虚拟对象1310在虚拟场景1300中拾取指定道具1320,且指定道具1320对应的指定增益效果为“生命值+10”,客户端则在虚拟场景1300中显示第一虚拟对象1310的增益动画,其中,增益动画实现为在第一虚拟对象1310的躯干中央显示指定增益效果“生命值+10”对应的文本内容。
在一个示例中,虚拟场景中还包括第一虚拟对象对应的属性槽,该属性槽中包括属性值,属性值用于描述第一虚拟对象拥有属性的情况。在本申请实施例中,第一虚拟对象对应的属性槽用于表示第一虚拟对象在和第二虚拟对象进行互动活动过程中所拥有的实时属性值,如:第一虚对象的属性槽为生命值槽(生命值槽满值为100点),则上述属性值为第一虚拟对象和第二虚拟对象进行互动过程中,第一虚拟对象在当前活动阶段内对应的实时生命值(当前互动活动为虚拟对局,在对局过程中当前第一虚拟对象命中一次第二虚拟对象的普通攻击,则第一虚拟对象的实时生命值为90点,其中,命中一次普通攻击的攻击结果为生命值减少10点)。
可选地,客户端还显示第一虚拟对象对应的属性增值动画,该属性增值动画是指属性值从初始属性值随着增益动画增长至目标属性值的动画,初始属性值和目标属性值之间的属性值增长量与指定增益效果相关。属性增值动画也可作为增益动画的一部分进行显示。
示意性的,属性增值动画用于描述第一虚拟对象拾取指定道具后,产生与指定道具对应指定增益效果的属性增长量。如:第一虚拟对象的属性槽实现为生命值槽,其初始属性值为生命值50点(满格生命值为100点),当第一虚拟对象拾取到指定道具(指定增益效果为生命值+20点),则客户端在显示第一虚拟对象的增益动画的过程中,还显示第一虚拟对象的生命值槽中的生命值从50点增长至70点的动画。其中,生命值增长量20点即为指定道具对应的指定增益效果。
例如,请参考图14,其示出了本申请一个示例性实施例提供的属性增值动画的示意图,如图14所示,用户界面中显示第一虚拟对象对应的属性槽1410,属性槽1410实现为生命值槽,其中,属性槽1410中对应有生命值50点,作为初始属性值,当第一虚拟对象拾取到指定道具,且指定道具的指定增益效果为“生命值+20点”,则客户端在第一虚拟对象的增益动画的显示过程中(图14中未示出),显示属性槽1410的属性增值动画,其中,属性增值动画表现为属性槽1410中初始生命值“50点”增长至目标生命值“70点”。
在一个可行的示例中,属性槽中属性值用于表征第一虚拟对象通过指定道具所获取的能量值,该能量值可用于获取增益效果,如额外获取一次技能、增加攻击力、增加防御力等。
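A minimal sketch of the attribute-slot increment animation described above: the displayed value grows frame by frame from the initial attribute value to the target attribute value, the increase being the designated gain effect; the 0.5 s duration and frame rate are assumptions.

```python
def attribute_increment_frames(initial, gain, duration=0.5, fps=10):
    """Yield the attribute value shown on each frame while the slot fills up."""
    target = initial + gain
    frames = max(1, int(duration * fps))
    for k in range(1, frames + 1):
        yield round(initial + (target - initial) * k / frames, 1)

if __name__ == "__main__":
    # e.g. health slot at 50 points, prop grants +20 -> animate 50 -> 70
    print(list(attribute_increment_frames(initial=50, gain=20)))
```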
综上所述,本申请实施例提供的虚拟对象的互动方法,通过在第一虚拟对象和第二虚拟对象进行互动活动的过程中,根据互动结果显示特效文本元素,并以转换掉落动画的形式将特效文本元素转换为指定道具,使得互动结果、以及互动结果的反哺收益可视化,从而提高 了用户交互体验,以及增加了虚拟对象之间的互动方式的多样性。此外,通过将特效文本元素转换成指定道具提供给虚拟对象的方式,能够提高界面显示信息的传递效率。另外,通过将互动结果转换为指定道具,有利于激发虚拟对象之间进行交互,从而提高虚拟对象之间的互动性,这也有助于缩短交互活动(如游戏对局)的时间,进而降低游戏对局对终端设备和服务器的处理资源的占用需求。
在本申请实施例中,在互动活动实现为包括多个活动阶段,且特效文本元素是与活动阶段对应的情况下,针对特效文本元素的显示时长设定指定时长阈值,使得不同活动阶段对应的特效文本元素可以实现共同显示,或者替换显示,丰富了特效文本元素显示方式的多样性。
在本申请实施例中,根据指定道具的数量和类型确定指定增益效果,使得指定增益效果存在多种不同类型的效果,提高了增益效果展示内容的多样性。
在本申请实施例中,通过将属性槽中的属性值随着增益动画同步显示的方式,使得用户能够更深刻体验到指定增益效果的产生过程,有利于提高用户之间进行交互活动的积极性。
在一些实施例中,指定道具不仅对第一虚拟对象产生指定增益效果,还能够对第二虚拟对象产生减益效果的。示意性的,请参考图15,其示出了本申请另一个示例性实施例提供的虚拟对象的交互方法的流程图,如图15所示,该方法包括如下步骤:
步骤1510,响应于指定道具的显示时长达到道具显示阈值,显示第一移动动画,该第一移动动画是指定道具自动向第一虚拟对象移动的动画。
道具显示阈值用于表示指定道具掉落至虚拟场景中以后的显示时间长度。可选地,道具显示阈值可为预设的固定值,或者,用户可以根据实际需要自由调整道具显示阈值的范围,本申请实施例对此不加以限定。
可选地,在指定互动活动中包括多个活动阶段的情况下,道具显示阈值与活动阶段相对应,若当前活动阶段结束,并开始下一个活动阶段,客户端则显示当前活动阶段内的指定道具自动向第一虚拟对象进行移动,如此保证在下个活动阶段内,第一虚拟对象获取上个活动阶段中掉落的指定道具对应的指定增益效果。可选地,指定道具向第一虚拟对象进行自动移动时,即表明第一移动动画已经开始显示。
在一个示例中,在虚拟场景中包括多个指定道具的情况下,第一移动动画的动画表现形式包括如下几种形式中的至少一种:
1.第一移动动画实现为多个指定道具逐个向第一虚拟对象自动移动的动画;
2.第一移动动画实现为多个指定道具同时向第一虚拟对象自动移动的动画。
值得注意的是,上述关于第一移动动画的动画表现形式仅为示意性的举例,本申请实施例对此不加限定。
可选地,在虚拟场景中包括多个指定道具的情况下,多个指定道具向第一虚拟对象对应的同一指定位置进行自动移动,如:多个指定道具向第一虚拟对象的躯干部位进行自动移动;或者,多个指定道具向第一虚拟对象对应的不同指定位置进行自动移动,如:多个指定道具包括道具1、道具2和道具3,其中,道具1向第一虚拟对象的头部进行自动移动,道具2向第一虚拟对象的躯干部位进行自动移动,道具3向第一虚拟对象的腿部进行自动移动。
示意性的,请参考图16,其示出了本申请一个示例性实施例提供的第一移动动画的示意图,如图16所示,当前虚拟场景1600中掉落多个指定道具1610,在指定道具1610的显示时长达到道具显示阈值的情况下,指定道具1610自动向第一虚拟对象1620进行移动,以作为第一移动动画。
在一些示例中,在显示第一移动动画之后,客户端在指定道具的目标位置显示增益选择界面,该增益选择界面中包括指定道具对应的至少两种候选增益效果;响应于针对至少两种候选增益效果中的指定增益效果的触发操作,客户端显示第一虚拟对象对应的增益动画,该增益动画与指定增益效果相对应。
示意性的,单个指定道具对应多种不同类型的候选增益效果,当指定道具向第一虚拟对象自动移动后(或者第一虚拟对象拾取该指定道具),虚拟场景中显示指定道具对应的增益选择界面,增益选择界面中包括至少两种候选增益效果,用户可从候选增益效果中选择指定增益效果,客户端根据该指定增益效果显示第一虚拟对象对应的增益动画。
可选地,响应于针对至少两种候选增益效果中的一个指定增益效果的触发操作,客户端显示该指定增益效果对应的增益动画;或者,响应于针对至少两种候选增益效果中的多个指定增益效果的触发操作,客户端显示多个指定增益效果各自对应的增益效果动画;或者,响应于针对至少两种候选增益效果中的多个指定增益效果的连续触发操作,客户端将该多个指定增益效果结合产生组合增益效果,并显示组合增益效果对应的增益动画,本申请实施例对此不加以限定。
示意性的,请参考图17,其示出了本申请一个示例性实施例提供的增益选择界面的示意图,如图17所示,虚拟场景1700中包括第一虚拟对象1710和掉落的指定道具1720,在指定道具1720的显示时长达到道具显示阈值的情况下,指定道具1720向第一虚拟对象进行自动移动(图17中未示出),当指定道具1720移动至第一虚拟对象1710的躯干部位时,客户端显示增益选择界面1730,其中,增益选择界面1730中把包括至少两种候选增益效果(图17中显示了三种候选增益效果,分别为“生命值+10”、“武力值+5”和“防御值+20”),响应于针对至少两种候选增益效果中的指定增益效果“防御值+20”的触发操作,客户端显示指定增益效果“防御值+20”对应的增益动画1740。
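A hedged sketch of the gain-selection step in the paragraphs above: a designated prop carries at least two candidate gain effects, and only the one selected by the trigger operation is applied; the candidate values mirror the Figure 17 example, and the English names are translations used for illustration.

```python
CANDIDATES = {"hp": 10, "attack": 5, "defense": 20}   # at least two candidate gain effects

def choose_gain(candidates, selected_key):
    """Apply only the candidate gain effect that the trigger operation selected."""
    if selected_key not in candidates:
        raise ValueError("selected gain is not among the prop's candidates")
    return {selected_key: candidates[selected_key]}

if __name__ == "__main__":
    print(choose_gain(CANDIDATES, "defense"))   # {'defense': 20} -> play its gain animation
```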
在一些实施例中,客户端响应于指定道具掉落的过程中,指定道具与第一虚拟对象接触,显示第一虚拟对象对应的自动增益动画,该自动增益动画是指第一虚拟对象接触指定道具后,产生与指定道具对应的指定增益效果的动画。
在一些可行实施例中,在特效文本元素转化为指定道具掉落至虚拟场景的过程中,存在在指定道具触碰到第一虚拟对象的情况,如:指定道具掉落的过程中与第一虚拟对象的头部进行接触,则客户端显示第一虚拟对象的自动增益动画。其中,自动增益动画是指指定道具与第一虚拟对象接触后,对第一虚拟对象产生该指定道具对应的指定增益效果的动画。
步骤1520,响应于指定道具的显示时长达到道具显示阈值,显示第二移动动画,该第二移动动画是指定道具自动向第二虚拟对象移动的动画,该指定道具对第二虚拟对象产生减益效果。
可选地,步骤1520中的道具显示阈值可以与步骤1510中的道具显示阈值相同或者不同。指定道具自动向第一虚拟对象移动还是第二虚拟对象动,可以是固定设置的,也可以是随机发生的,本申请实施例对此不加以限定。可选地,指定道具开始向第二虚拟对象进行自动移动时,即表明第二移动动画已开始显示。
在一个示例中,在虚拟场景中包括多个指定道具的情况下,第二移动动画的动画表现形式包括如下几种形式中的至少一种:
1.第二移动动画实现为多个指定道具逐个向第二虚拟对象自动移动的动画;
2.第二移动动画实现为多个指定道具同时向第二虚拟对象自动移动的动画。
值得注意的是,上述关于第二移动动画的动画表现形式仅为示意性的举例,本申请实施例对此不加限定。
在一些实施例中,减益效果与增益效果相反,如:增益效果为生命值+10,则减益效果可实现为生命值-10。
可选地,针对同一个指定道具,在指定道具产生第一移动动画的情况下,指定道具可对第一虚拟对象产生的指定增益效果,在指定道具产生第二移动动画的情况下,指定道具可对第二虚拟对象产生的减益效果,如:若指定道具实现为使第一虚拟对象的生命值+10,则指定道具可实现为使第二虚拟对象的生命值-10;可选地,指定道具对应的增益效果和减益效果也可以不相对应,其可根据实际使用需求进行设置与调整,本申请实施例对此不加以限定。
在一个示例中,在虚拟场景中存在多个指定道具的情况下,多个指定道具对第二虚拟对象整体产生减益效果,如:道具1使第二虚拟对象攻击力-10,道具2使第二虚拟对象防御力-20;或者,在虚拟场景中存在多个指定道具的情况下,多个指定道具可以针对第二虚拟对象的不同部位产生不同的减益效果,本申请实施例对此不作限定。
在一些可行示例中，客户端还响应于道具整合操作，在虚拟场景中显示整合道具，该道具整合操作用于指示选择至少两个指定道具进行整合；其中，客户端响应于道具整合操作，对至少两个指定道具进行整合以生成整合道具，并在虚拟场景中显示该整合道具。
在一些可行示例中,在第一虚拟对象拾取指定道具之前(或者拾取过程中),客户端响应于道具整合操作,将虚拟场景中的至少两个指定道具进行整合,使至少两个指定道具整合为一个单独的整合道具进行显示,也即,当前拾取操作可指示第一虚拟对象拾取该整合道具。其中,整合道具的体积可实现为用于整合的所有指定道具的体积和。
示意性的,请参考图18,其示出了本申请一个示例性实施例提供的道具整合过程的示意图,如图18所示,虚拟场景1800中包括第一虚拟对象1810和多个指定道具(即指定道具1821、指定道具1822和指定道具1823),客户端响应于针对指定道具1822和指定道具1823的道具整合操作,选择指定道具1822和指定道具1823进行整合,得到整合道具1820,客户端在虚拟场景1800中进行整合道具1820的显示。
在一些实施例中,客户端还可接收道具触发操作,该道具触发操作用于触发指定道具向技能范围内释放指定技能效果;客户端在接收到道具触发操作之后,显示技能效果动画,该技能效果动画是指指定道具在技能范围内释放指定技能效果的动画。
其中,指定道具实现为拥有释放指定技能效果的道具,指定技能效果可以包括攻击技能、防御技能等中的至少一种。在本申请实施例中,道具触发操作可实现为控制第一虚拟对象向指定道具进行攻击,当指定道具受到攻击后,显示指定道具在预设的技能范围内释放指定技能效果的动画,以作为技能效果动画。
在一些示例中,客户端还可以在虚拟场景中的指定位置处显示互动回放动画,该互动回放动画是指上述互动活动的回放动画。
示意性的，在指定互动活动中包括多个活动阶段的情况下，每个活动阶段内在第一虚拟对象和第二虚拟对象之间产生当前活动阶段对应的阶段性互动结果后，若第一虚拟对象和第二虚拟对象开始下一个活动阶段，客户端则在虚拟场景中的指定位置处，显示第一虚拟对象和第二虚拟对象对应上一个活动阶段的回放动画。
综上所述,本申请实施例提供的虚拟对象的互动方法,通过在第一虚拟对象和第二虚拟对象进行互动活动的过程中,根据互动结果显示特效文本元素,并以转换掉落动画的形式将特效文本元素转换为指定道具,使得互动结果、以及互动结果的反哺收益可视化,从而提高了用户交互体验,以及增加了虚拟对象之间的互动方式的多样性。此外,通过将特效文本元素转换成指定道具提供给虚拟对象的方式,能够提高界面显示信息的传递效率。另外,通过将互动结果转换为指定道具,有利于激发虚拟对象之间进行交互,从而提高虚拟对象之间的互动性,这也有助于缩短交互活动(如游戏对局)的时间,进而降低游戏对局对终端设备和服务器的处理资源的占用需求。
在本申请实施例中,将指定道具实现为对应至少两种候选增益效果,通过在至少两种候选增益效果中选择指定增益效果的方式,可以向用户提供更多指定增益效果的可选项,提高用户的交互兴趣。
在本申请实施例中,通过自动增益动画,可实现在掉落的指定道具与第一虚拟对象接触的情况下,指定道具自动对第一虚拟对象产生指定增益效果,从而提高了人机交互效率。
在本申请实施例中,通过道具整合操作可以将多个指定道具整合成整合道具,便于第一虚拟对象进行拾取,提高第一虚拟对象的拾取效率。
在本申请实施例中,通过显示第一移动动画和/或第二移动动画,丰富了指定道具对应的 道具效果。
在本申请实施例中,通过显示互动回放动画,可以使用户针对上一个活动阶段内的互动活动进行回放查看,提高人机交互效率。
在一个示例性实施例中,第一虚拟对象每次对第二虚拟对象进行攻击并命中时,客户端(或服务器)都会进行一次标签判断,如客户端根据当前技能是第几次命中来确定需要显示的特效文本元素,并显示特效文本元素对应的转换掉落动画,请参考图19,其示出了本申请另一个示例性实施例提供的虚拟对象的互动方法的流程图,如图19所示,该方法包括如下步骤:
步骤1910,释放技能。
当前虚拟环境中包括第一虚拟对象和第二虚拟对象，第一虚拟对象和第二虚拟对象在虚拟环境中进行虚拟对局，玩家控制第一虚拟对象向第二虚拟对象释放技能，以对第二虚拟对象进行攻击，即上述的交互活动。
当玩家控制第一虚拟对象向第二虚拟对象释放技能,并命中第二虚拟对象时,客户端首先判断该次命中是否被第二虚拟对象格挡,其中,格挡是指第一虚拟对象释放的技能命中了第二虚拟对象,但是第二虚拟对象使用格挡技能使其不受到技能伤害。若该命中的技能没有被第二虚拟对象格挡,则客户端再判断当前虚拟场景中是否已显示特效文本元素,其中,特效文本元素实现为击中标签。当第一虚拟对象释放技能并命中第二虚拟对象,且该技能命中没有被第二虚拟对象格挡时,确定当前回合内完成一个活动阶段,该活动阶段对应的阶段性互动结果为第一虚拟对象释放技能并命中第二虚拟对象。
步骤1920,当无击中标签时,显示“单押”字体。
虚拟场景中当前不显示任何击中标签,则表明当前回合内尚且不存在第一虚拟对象释放技能击中第二虚拟对象的情况,因此客户端可根据上述步骤1910中的命中情况,在虚拟场景中显示“单押”字体,该“单押”字体用于表示当前回合内第一虚拟对象第一次命中第二虚拟对象,从当前回合开始,到第一次命中,作为当前回合内的第一活动阶段。
步骤1930,当存在“单押”击中标签时,显示“双押”字体。
若虚拟场景中当前显示“单押”击中标签,则表明当前回合内,存在第一虚拟对象释放技能并第一次命中第二虚拟对象的情况,因此客户端可根据上述步骤1910中的命中情况,在虚拟场景中显示“双押”字体,该“双押”字体用于表示当前回合内第一虚拟对象第二次命中第二虚拟对象,从第一次命中到第二次命中的对局过程,作为当前回合内的第二活动阶段。
步骤1940,当存在“双押”击中标签时,显示“三押”字体。
若虚拟场景中当前显示“双押”击中标签,则表明当前回合内,存在第一虚拟对象释放技能并两次命中第二虚拟对象的情况,因此客户端可根据上述步骤1910中的命中情况,在虚拟场景中显示“三押”字体,该“三押”字体用于表示当前回合内第一虚拟对象第三次命中第二虚拟对象,从第二次命中到第三次命中的对局过程,作为当前回合内的第三活动阶段。
步骤1950,结束。
上述互动过程的完成表示当前回合内第一虚拟对象和第二虚拟对象完成对局过程。
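A hedged sketch of the per-hit label judgment in Figure 19: on each skill hit that lands and is not blocked, the current label advances from "single" to "double" to "triple"; the label strings are translations of 单押/双押/三押, and the function signature is an assumption.

```python
LABEL_ORDER = ["single", "double", "triple"]

def judge_label(current_label, hit, blocked):
    """Return the label to display after this skill release, or keep the current one."""
    if not hit or blocked:
        return current_label
    if current_label is None:
        return LABEL_ORDER[0]
    idx = LABEL_ORDER.index(current_label)
    return LABEL_ORDER[min(idx + 1, len(LABEL_ORDER) - 1)]

if __name__ == "__main__":
    label = None
    for hit, blocked in [(True, False), (True, True), (True, False), (True, False)]:
        label = judge_label(label, hit, blocked)
        print(label)
    # single, single (the blocked hit is ignored), double, triple
```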
综上所述,本申请实施例提供的虚拟对象的互动方法,通过在第一虚拟对象和第二虚拟对象进行互动活动的过程中,根据互动结果显示特效文本元素,并以转换掉落动画的形式将特效文本元素转换为指定道具,使得互动结果、以及互动结果的反哺收益可视化,从而提高了用户交互体验,以及增加了虚拟对象之间的互动方式的多样性。此外,通过将特效文本元素转换成指定道具提供给虚拟对象的方式,能够提高界面显示信息的传递效率。另外,通过将互动结果转换为指定道具,有利于激发虚拟对象之间进行交互,从而提高虚拟对象之间的互动性,这也有助于缩短交互活动(如游戏对局)的时间,进而降低游戏对局对终端设备和服务器的处理资源的占用需求。
在一个示例性实施例中,每个级别的击中标签都有对应的显示时长,若在该击中标签显示过程中再次造成命中,则显示下一级别击中标签,若显示时长内未能再次造成技能命中,则特效文本元素取消显示,并清空所有击中标签;每次特效文本元素的显示时长达到显示时长阈值后,会转化为能量水晶掉落,第一虚拟对象拾取后,将为第一虚拟对象产生指定增益效果,不同的特效文本元素对应不同数量的指定道具,请参考图20,其示出了本申请另一个示例性实施例提供虚拟对象的互动方法的流程图,如图20所示,该方法包括如下步骤:
步骤2010,技能命中。
虚拟场景中包括第一虚拟对象和第二虚拟对象,响应于互动操作,客户端控制第一虚拟对象和第二虚拟对象在虚拟场景中进行互动活动,其中,互动活动实现为两者进行虚拟对局,当第一虚拟对象向第二虚拟对象释放技能并命中了第二虚拟对象时,客户端判断当前虚拟场景中是否已经存在击中标签。
步骤2020,显示“单押”字体。
当虚拟场景中不存在任何特效文本元素时，也即客户端未显示任何击中标签，客户端在第一虚拟对象的上方显示特效文本元素“单押”字体。
其中,“单押”字体用于表示当前虚拟场景中第一虚拟对象释放技能第一次命中第二虚拟对象的情况。
步骤2021,新增“单押”标签,将“单押”转化为1颗能量水晶掉落。
当虚拟场景中显示特效文本元素“单押”字体后,客户端对应在虚拟场景中增加“单押”标签,并显示转换掉落动画,其中,转换掉落动画实现为将特效文本元素“单押”字体转化成指定道具掉落至虚拟场景中,其中,指定道具实现为1颗能量水晶。
步骤2030,显示“双押”字体。
当虚拟场景中存在特效文本元素“单押”字体时,也即客户端已显示有第一次击中标签“单押”,此时,客户端在第一虚拟对象的上方将特效文本元素“单押”字体替换显示为特效文本元素“双押”字体。
其中,“双押”字体用于表示当前虚拟场景中第一虚拟对象释放技能第二次命中第二虚拟对象的情况。
步骤2031,新增“双押”标签,将“双押”转化为2颗能量水晶掉落。
当虚拟场景中显示特效文本元素“双押”字体后,客户端对应在虚拟场景中增加“双押”标签,并显示转换掉落动画,其中,转换掉落动画实现为将特效文本元素“双押”字体转化成指定道具掉落至虚拟场景中,其中,指定道具实现为2颗能量水晶。
步骤2040,显示“三押”字体。
当虚拟场景中存在特效文本元素“双押”字体时，也即客户端已显示有第二次击中标签“双押”，此时，客户端在第一虚拟对象的上方将特效文本元素“双押”字体替换显示为特效文本元素“三押”字体。
其中,“三押”字体用于表示当前虚拟场景中第一虚拟对象释放技能第三次命中第二虚拟对象的情况。
步骤2041，新增“三押”标签，将“三押”转化为3颗能量水晶掉落。
当虚拟场景中显示特效文本元素“三押”字体后,客户端对应在虚拟场景中增加“三押”标签,并显示转换掉落动画,其中,转换掉落动画实现为将特效文本元素“三押”字体转化成指定道具掉落至虚拟场景中,其中,指定道具实现为3颗能量水晶。
步骤2050,拾取能量水晶。
在能量水晶掉落至虚拟场景中之后,第一虚拟对象和第二虚拟对象均可以拾取能量水晶,其中,能量水晶对于第一虚拟对象和第二虚拟对象产生不同的效果。
步骤2060,第二虚拟对象拾取能量水晶。
在第二虚拟对象拾取到能量水晶的情况下，能量水晶对第二虚拟对象将会产生减益效果；或者，能量水晶对第二虚拟对象不产生任何效果。
步骤2070,第一虚拟对象拾取能量水晶。
在第一虚拟对象拾取到能量水晶的情况下,能量水晶对第一虚拟对象将会产生指定增益效果。
步骤2080,结束。
完成上述互动过程。
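A minimal sketch of the Figure 20 conversion flow: each label converts into its number of energy crystals ("single" → 1, "double" → 2, "triple" → 3), and picking them up buffs the first virtual object while debuffing (or, in one variant, not affecting) the second; the effect magnitudes are illustrative assumptions.

```python
CRYSTALS_PER_LABEL = {"single": 1, "double": 2, "triple": 3}

def crystals_dropped(label):
    """Number of energy crystals the label converts into."""
    return CRYSTALS_PER_LABEL.get(label, 0)

def apply_pickup(picker, crystal_count):
    """Return the attribute change for whoever picks the crystals up."""
    if picker == "first_virtual_object":
        return {"energy": +10 * crystal_count}   # assumed buff per crystal
    return {"energy": -5 * crystal_count}        # assumed debuff for the opponent

if __name__ == "__main__":
    n = crystals_dropped("double")
    print(n, apply_pickup("first_virtual_object", n), apply_pickup("second_virtual_object", n))
```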
综上所述,本申请实施例提供的虚拟对象的互动方法,通过在第一虚拟对象和第二虚拟对象进行互动活动的过程中,根据互动结果显示特效文本元素,并以转换掉落动画的形式将特效文本元素转换为指定道具,使得互动结果、以及互动结果的反哺收益可视化,从而提高了用户交互体验,以及增加了虚拟对象之间的互动方式的多样性。此外,通过将特效文本元素转换成指定道具提供给虚拟对象的方式,能够提高界面显示信息的传递效率。另外,通过将互动结果转换为指定道具,有利于激发虚拟对象之间进行交互,从而提高虚拟对象之间的互动性,这也有助于缩短交互活动(如游戏对局)的时间,进而降低游戏对局对终端设备和服务器的处理资源的占用需求。
另外,本申请实施例提供的技术方案,能够增加玩家造成技能命中时的击中反馈,强化了玩家造成技能命中后可获得增益效果的感知,从而提高了用户交互体验。
请参考图21,图21是本申请一个示例性实施例提供的虚拟对象的互动装置的结构框图,如图21所示,该装置可以包括如下部分:显示模块2110和接收模块2120。
显示模块2110,用于显示处于虚拟场景中的第一虚拟对象和第二虚拟对象;
接收模块2120,用于响应于互动操作,控制所述第一虚拟对象与所述第二虚拟对象在所述虚拟场景中进行互动活动。
所述显示模块2110,还用于在所述虚拟场景中显示特效文本元素,所述特效文本元素与所述第一虚拟对象和所述第二虚拟对象之间的互动结果相对应。
所述显示模块2110,还用于显示所述特效文本元素的转换掉落动画,所述转换掉落动画是指所述特效文本元素转化为指定道具,并掉落至所述虚拟场景中的动画。
在一些实施例中,所述显示模块2110,包括:显示单元2111。
显示单元2111,用于基于所述指定道具的指定数量显示所述特效文本元素的转换掉落动画,所述转换掉落动画是指所述特效文本元素转化为指定数量的指定道具,并掉落至所述虚拟场景中的动画;其中,所述指定数量与所述特效文本元素的文本内容相对应。
在一些实施例中,所述互动活动包括多个活动阶段;
所述显示单元2111,还用于在所述虚拟场景中与所述第二虚拟对象对应的指定位置处,显示与所述阶段性互动结果对应的特效文本元素,所述特效文本元素的文本内容与所述阶段性互动结果相对应,所述阶段性互动结果是指所述活动阶段下的互动结果。
在一些实施例中,所述显示模块2110,还包括:获取单元2112。
获取单元2112,用于获取所述阶段性互动结果对应的所述指定数量。
在一些实施例中,所述显示模块2110,还用于响应于所述特效文本元素的显示时长达到指定时长阈值,显示所述特效文本元素的所述转换掉落动画。
在一些实施例中,所述互动活动包括多个活动阶段,第i个活动阶段对应第i个特效文本元素,i为正整数;
所述显示模块2110,还用于响应于所述第i个特效文本元素的显示时长达到所述指定时长阈值,且在所述指定时长阈值范围内未接收到第i+1个活动阶段对应的阶段性互动结果,显示所述第i个特效文本元素的所述转换掉落动画。
在一些实施例中,所述指定道具对所述第一虚拟对象产生指定增益效果;
所述接收模块2120,还用于响应于拾取操作,控制所述第一虚拟对象在所述虚拟场景中 对所述指定道具进行拾取。
所述显示模块2110,还用于显示所述第一虚拟对象对应的增益动画,所述增益动画是指所述第一虚拟对象拾取所述指定道具后,产生与所述指定道具对应的指定增益效果的动画;其中,所述指定增益效果与所述第一虚拟对象所拾取的所述指定道具的数量相关;或者,所述指定增益效果与所述第一虚拟对象所拾取的所述指定道具的类型相关。
在一些实施例中,所述虚拟环境中包括所述第一虚拟对象对应的属性槽,所述属性槽中包括属性值;
所述显示模块2110，还用于显示所述第一虚拟对象对应的属性增值动画，所述属性增值动画是指所述属性值从初始属性值增长至目标属性值的动画，所述初始属性值和所述目标属性值之间的属性值增长量与所述指定增益效果相关。
在一些实施例中,所述显示模块2110,还用于:
在所述指定道具的目标位置显示增益选择界面,所述增益选择界面中包括所述指定道具对应的至少两种候选增益效果;
响应于针对所述至少两种候选增益效果中的指定增益效果的触发操作,显示所述第一虚拟对象对应的增益动画,所述增益动画与所述指定增益效果相对应。
在一些实施例中,所述显示模块2110,还用于响应于所述指定道具掉落的过程中,所述指定道具与所述第一虚拟对象接触,显示所述第一虚拟对象对应的自动增益动画,所述自动增益动画是指所述第一虚拟对象接触所述指定道具后,产生与所述指定道具对应的指定增益效果的动画。
在一些实施例中,所述接收模块2120,还用于响应于道具整合操作,在所述虚拟场景中显示整合道具,所述道具整合操作用于指示选择至少两个指定道具进行整合。
在一些实施例中,所述接收模块2120,还用于:
接收道具触发操作,所述道具触发操作用于触发所述指定道具向技能范围内释放指定技能效果;
显示技能效果动画,所述技能效果动画是指所述指定道具在所述技能范围内释放所述指定技能效果的动画。
在一些实施例中,所述显示模块2110,还用于响应于所述指定道具的显示时长达到道具显示阈值,显示第一移动动画,所述第一移动动画是所述指定道具自动向所述第一虚拟对象移动的动画。
在一些实施例中,所述显示模块2110,还用于响应于所述指定道具的显示时长达到道具显示阈值,显示第二移动动画,所述第二移动动画是所述指定道具自动向所述第二虚拟对象移动的动画,所述指定道具对所述第二虚拟对象产生减益效果。
在一些实施例中,所述显示模块2110,还用于:
显示所述特效文本元素的收缩消失动画,所述收缩消失动画是指所述特效文本元素在与所述第二虚拟对象对应的指定位置处缩小后取消显示的动画;
获取所述指定位置在所述虚拟场景对应的世界坐标系中的第一坐标,以作为所述指定道具的起始坐标;
获取所述世界坐标系中与所述第一坐标对应的第二坐标,以作为所述指定道具的落地坐标;
基于所述第一坐标和所述第二坐标获取所述指定道具的掉落路径数据;
根据所述掉落路径数据显示所述指定道具掉落的转换掉落动画。
在一些实施例中,所述显示模块2110,还用于:
获取所述指定道具对应的纹理素材集;
基于所述指定道具的观察视角,从所述纹理素材集中获取与所述观察视角对应的纹理素材图像;
沿所述掉落路径数据对应的掉落轨迹显示所述纹理素材图像。
综上所述,本申请实施例提供的虚拟对象的互动装置,通过在第一虚拟对象和第二虚拟对象进行互动活动的过程中,根据互动结果显示特效文本元素,并以转换掉落动画的形式将特效文本元素转换为指定道具,使得互动结果、以及互动结果的反哺收益可视化,从而提高了用户交互体验,以及增加了虚拟对象之间的互动方式的多样性。此外,通过将特效文本元素转换成指定道具提供给虚拟对象的方式,能够提高界面显示信息的传递效率。另外,通过将互动结果转换为指定道具,有利于激发虚拟对象之间进行交互,从而提高虚拟对象之间的互动性,这也有助于缩短交互活动(如游戏对局)的时间,进而降低游戏对局对终端设备和服务器的处理资源的占用需求。
需要说明的是:上述实施例提供的虚拟对象的互动装置仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将设备的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的虚拟对象的互动装置与虚拟对象的互动方法实施例属于同一构思,其具体实现过程详见方法实施例,此处不再赘述。
在一些实施例中,本申请实施例还可以包括如下内容:
1.一种虚拟对象的互动方法,所述方法由终端设备执行,所述方法包括:
显示处于虚拟场景中的第一虚拟对象和第二虚拟对象;
响应于互动操作,控制所述第一虚拟对象与所述第二虚拟对象在所述虚拟场景中进行互动活动;
在所述虚拟场景中显示特效文本元素,所述特效文本元素与所述第一虚拟对象和所述第二虚拟对象之间的互动结果相对应;
显示所述特效文本元素的转换掉落动画,所述转换掉落动画是指所述特效文本元素转化为指定道具,并掉落至所述虚拟场景中的动画。
2.根据权利要求1所述的方法,其中,所述显示所述特效文本元素的转换掉落动画,包括:
基于所述指定道具的指定数量显示所述特效文本元素的转换掉落动画,所述转换掉落动画是指所述特效文本元素转化为指定数量的指定道具,并掉落至所述虚拟场景中的动画;
其中,所述指定数量与所述特效文本元素的文本内容相对应。
3.根据权利要求1至2任一项所述的方法,其中,所述互动活动包括多个活动阶段;
所述在所述虚拟场景中显示特效文本元素,包括:
在所述虚拟场景中与所述第二虚拟对象对应的指定位置处,显示与所述阶段性互动结果对应的特效文本元素,所述特效文本元素的文本内容与所述阶段性互动结果相对应,所述阶段性互动结果是指所述活动阶段下的互动结果。
4.根据权利要求1至3任一所述的方法,其中,所述显示所述特效文本元素的转换掉落动画,包括:
响应于所述特效文本元素的显示时长达到指定时长阈值,显示所述特效文本元素的所述转换掉落动画。
5.根据权利要求4所述的方法,其中,所述互动活动包括多个活动阶段,第i个活动阶段对应第i个特效文本元素,i为正整数;
所述响应于所述特效文本元素的显示时长达到指定时长阈值,显示所述特效文本元素的所述转换掉落动画,包括:
响应于所述第i个特效文本元素的显示时长达到所述指定时长阈值,且在所述指定时长阈值范围内未接收到第i+1个活动阶段对应的阶段性互动结果,显示所述第i个特效文本元素的所述转换掉落动画。
6.根据权利要求1至5任一所述的方法,其中,所述指定道具对所述第一虚拟对象产生指定增益效果;
所述显示所述特效文本元素的转换掉落动画之后,还包括:
响应于拾取操作,控制所述第一虚拟对象在所述虚拟场景中对所述指定道具进行拾取;
显示所述第一虚拟对象对应的增益动画,所述增益动画是指所述第一虚拟对象拾取所述指定道具后,产生与所述指定道具对应的指定增益效果的动画;
其中,所述指定增益效果与所述第一虚拟对象所拾取的所述指定道具的数量相关;或者,所述指定增益效果与所述第一虚拟对象所拾取的所述指定道具的类型相关。
7.根据权利要求1至6任一项所述的方法,其中,所述虚拟环境中包括所述第一虚拟对象对应的属性槽,所述属性槽中包括属性值;
所述显示所述第一虚拟对象对应的增益动画,包括:
显示所述第一虚拟对象对应的属性增值动画，所述属性增值动画是指所述属性值从初始属性值增长至目标属性值的动画，所述初始属性值和所述目标属性值之间的属性值增长量与所述指定增益效果相关。
8.根据权利要求1至7任一所述的方法,其中,所述方法还包括:
在所述指定道具的目标位置显示增益选择界面,所述增益选择界面中包括所述指定道具对应的至少两种候选增益效果;
响应于针对所述至少两种候选增益效果中的指定增益效果的触发操作,显示所述第一虚拟对象对应的增益动画,所述增益动画与所述指定增益效果相对应。
9.根据权利要求1至8任一所述的方法,其中,所述方法还包括:
响应于所述指定道具掉落的过程中,所述指定道具与所述第一虚拟对象接触,显示所述第一虚拟对象对应的自动增益动画,所述自动增益动画是指所述第一虚拟对象接触所述指定道具后,产生与所述指定道具对应的指定增益效果的动画。
10.根据权利要求1至9任一所述的方法,其中,所述方法还包括:
响应于道具整合操作,在所述虚拟场景中显示整合道具,所述道具整合操作用于指示选择至少两个指定道具进行整合。
11.根据权利要求1至10任一所述的方法,其中,所述显示所述特效文本元素的转换掉落动画之后,还包括:
接收道具触发操作,所述道具触发操作用于触发所述指定道具向技能范围内释放指定技能效果;
显示技能效果动画,所述技能效果动画是指所述指定道具在所述技能范围内释放所述指定技能效果的动画。
12.根据权利要求1至11任一所述的方法,其中,所述方法还包括:
响应于所述指定道具的显示时长达到道具显示阈值,显示第一移动动画,所述第一移动动画是所述指定道具自动向所述第一虚拟对象移动的动画。
13.根据权利要求1至12任一所述的方法,其中,所述方法还包括:
响应于所述指定道具的显示时长达到道具显示阈值,显示第二移动动画,所述第二移动动画是所述指定道具自动向所述第二虚拟对象移动的动画,所述指定道具对所述第二虚拟对象产生减益效果。
14.根据权利要求1至13任一所述的方法,其中,所述显示所述特效文本元素的转换掉落动画,包括:
显示所述特效文本元素的收缩消失动画,所述收缩消失动画是指所述特效文本元素在与所述第二虚拟对象对应的指定位置处缩小后取消显示的动画;
获取所述指定位置在所述虚拟场景对应的世界坐标系中的第一坐标,以作为所述指定道具的起始坐标;
获取所述世界坐标系中与所述第一坐标对应的第二坐标,以作为所述指定道具的落地坐标;
基于所述第一坐标和所述第二坐标获取所述指定道具的掉落路径数据;
根据所述掉落路径数据显示所述指定道具掉落的转换掉落动画。
15.根据权利要求14所述的方法,其中,所述根据所述掉落路径数据显示所述指定道具掉落的转换掉落动画,包括:
获取所述指定道具对应的纹理素材集;
基于所述指定道具的观察视角,从所述纹理素材集中获取与所述观察视角对应的纹理素材图像;
沿所述掉落路径数据对应的掉落轨迹显示所述纹理素材图像。
综上所述,本申请实施例提供的虚拟对象的互动装置,通过在第一虚拟对象和第二虚拟对象进行互动活动的过程中,根据互动结果显示特效文本元素,并以转换掉落动画的形式将特效文本元素转换为指定道具,使得互动结果、以及互动结果的反哺收益可视化,从而提高了用户交互体验,以及增加了虚拟对象之间的互动方式的多样性。此外,通过将特效文本元素转换成指定道具提供给虚拟对象的方式,能够提高界面显示信息的传递效率。另外,通过将互动结果转换为指定道具,有利于激发虚拟对象之间进行交互,从而提高虚拟对象之间的互动性,这也有助于缩短交互活动(如游戏对局)的时间,进而降低游戏对局对终端设备和服务器的处理资源的占用需求。
图23示出了本申请一个示例性实施例提供的终端设备2300的结构框图。该终端设备2300可以是:智能手机、平板电脑、MP3播放器、MP4播放器、笔记本电脑或台式电脑。终端设备2300还可能被称为用户设备、便携式终端、膝上型终端、台式终端等其他名称。
通常,终端设备2300包括有:处理器2301和存储器2302。
处理器2301可以包括一个或多个处理核心,比如4核心处理器、8核心处理器等。处理器2301可以采用DSP(Digital Signal Processing,数字信号处理)、FPGA(Field-Programmable Gate Array,现场可编程门阵列)、PLA(Programmable Logic Array,可编程逻辑阵列)中的至少一种硬件形式来实现。处理器2301也可以包括主处理器和协处理器,主处理器是用于对在唤醒状态下的数据进行处理的处理器,也称CPU(Central Processing Unit,中央处理器);协处理器是用于对在待机状态下的数据进行处理的低功耗处理器。在一些实施例中,处理器2301可以在集成有GPU(Graphics Processing Unit,图像处理器),GPU用于负责显示屏所需要显示的内容的渲染和绘制。一些实施例中,处理器2301还可以包括AI(Artificial Intelligence,人工智能)处理器,该AI处理器用于处理有关机器学习的计算操作。
存储器2302可以包括一个或多个计算机可读存储介质,该计算机可读存储介质可以是非暂态的。存储器2302还可包括高速随机存取存储器,以及非易失性存储器,比如一个或多个磁盘存储设备、闪存存储设备。在一些实施例中,存储器2302中的非暂态的计算机可读存储介质用于存储计算机程序,该计算机程序用于被处理器2301所执行以实现本申请中方法实施例提供的虚拟对象的互动方法。
在一些实施例中,终端设备2300还包括其他组件,本领域技术人员可以理解,图23中示出的结构并不构成对终端设备2300的限定,可以包括比图示更多或更少的组件,或者组合某些组件,或者采用不同的组件布置。
本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令相关的硬件来完成,该程序可以存储于计算机可读存储介质中,该计算机可读存储介质可以是上述实施例中的存储器中所包含的计算机可读存储介质;也可以是单独存在,未装配入终端设备中的计算机可读存储介质。该计算机可读存储介质中存储有计算机程序,所述计算机程序由所述处理器加载并执行以实现上述实施例中任一所述的虚拟对象的互动方 法。
可选地,该计算机可读存储介质可以包括:只读存储器(ROM,Read Only Memory)、随机存取记忆体(RAM,Random Access Memory)、固态硬盘(SSD,Solid State Drives)或光盘等。其中,随机存取记忆体可以包括电阻式随机存取记忆体(ReRAM,Resistance Random Access Memory)和动态随机存取存储器(DRAM,Dynamic Random Access Memory)。上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。
可选地,还提供了一种计算机程序产品,所述计算机程序产品包括计算机程序,所述计算机程序存储在计算机可读存储介质中。终端设备的处理器从所述计算机可读存储介质中读取所述计算机程序,所述处理器执行所述计算机程序,使得所述终端设备执行上述实施例中任一所述的虚拟对象的互动方法。
以上所述仅为本申请的可选实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (19)

  1. 一种虚拟对象的互动方法,所述方法由终端设备执行,所述方法包括:
    显示处于虚拟场景中的第一虚拟对象和第二虚拟对象;
    响应于互动操作,控制所述第一虚拟对象与所述第二虚拟对象在所述虚拟场景中进行互动活动;
    在所述虚拟场景中显示特效文本元素,所述特效文本元素与所述第一虚拟对象和所述第二虚拟对象之间的互动结果相对应;
    显示所述特效文本元素的转换掉落动画,所述转换掉落动画是指所述特效文本元素转化为指定道具,并掉落至所述虚拟场景中的动画。
  2. 根据权利要求1所述的方法,其中,所述显示所述特效文本元素的转换掉落动画,包括:
    基于所述指定道具的指定数量显示所述特效文本元素的转换掉落动画,所述转换掉落动画是指所述特效文本元素转化为指定数量的指定道具,并掉落至所述虚拟场景中的动画;
    其中,所述指定数量与所述特效文本元素的文本内容相对应。
  3. 根据权利要求1所述的方法,其中,所述互动活动包括多个活动阶段;
    所述在所述虚拟场景中显示特效文本元素,包括:
    在所述虚拟场景中与所述第二虚拟对象对应的指定位置处,显示与所述阶段性互动结果对应的特效文本元素,所述特效文本元素的文本内容与所述阶段性互动结果相对应,所述阶段性互动结果是指所述活动阶段下的互动结果。
  4. 根据权利要求1至3任一所述的方法,其中,所述显示所述特效文本元素的转换掉落动画,包括:
    响应于所述特效文本元素的显示时长达到指定时长阈值,显示所述特效文本元素的所述转换掉落动画。
  5. 根据权利要求4所述的方法,其中,所述互动活动包括多个活动阶段,第i个活动阶段对应第i个特效文本元素,i为正整数;
    所述响应于所述特效文本元素的显示时长达到指定时长阈值,显示所述特效文本元素的所述转换掉落动画,包括:
    响应于所述第i个特效文本元素的显示时长达到所述指定时长阈值,且在所述指定时长阈值范围内未接收到第i+1个活动阶段对应的阶段性互动结果,显示所述第i个特效文本元素的所述转换掉落动画。
  6. 根据权利要求1至3任一所述的方法,其中,所述指定道具对所述第一虚拟对象产生指定增益效果;
    所述显示所述特效文本元素的转换掉落动画之后,还包括:
    响应于拾取操作,控制所述第一虚拟对象在所述虚拟场景中对所述指定道具进行拾取;
    显示所述第一虚拟对象对应的增益动画,所述增益动画是指所述第一虚拟对象拾取所述指定道具后,产生与所述指定道具对应的指定增益效果的动画;
    其中,所述指定增益效果与所述第一虚拟对象所拾取的所述指定道具的数量相关;或者,所述指定增益效果与所述第一虚拟对象所拾取的所述指定道具的类型相关。
  7. 根据权利要求6所述的方法,其中,所述虚拟环境中包括所述第一虚拟对象对应的属性槽,所述属性槽中包括属性值;
    所述显示所述第一虚拟对象对应的增益动画,包括:
    显示所述第一虚拟对象对应的属性增值动画，所述属性增值动画是指所述属性值从初始属性值增长至目标属性值的动画，所述初始属性值和所述目标属性值之间的属性值增长量与所述指定增益效果相关。
  8. 根据权利要求1至3任一所述的方法,其中,所述方法还包括:
    在所述指定道具的目标位置显示增益选择界面,所述增益选择界面中包括所述指定道具对应的至少两种候选增益效果;
    响应于针对所述至少两种候选增益效果中的指定增益效果的触发操作,显示所述第一虚拟对象对应的增益动画,所述增益动画与所述指定增益效果相对应。
  9. 根据权利要求1至3任一所述的方法,其中,所述方法还包括:
    响应于所述指定道具掉落的过程中,所述指定道具与所述第一虚拟对象接触,显示所述第一虚拟对象对应的自动增益动画,所述自动增益动画是指所述第一虚拟对象接触所述指定道具后,产生与所述指定道具对应的指定增益效果的动画。
  10. 根据权利要求1至3任一所述的方法,其中,所述方法还包括:
    响应于道具整合操作,在所述虚拟场景中显示整合道具,所述道具整合操作用于指示选择至少两个指定道具进行整合。
  11. 根据权利要求1至3任一所述的方法,其中,所述显示所述特效文本元素的转换掉落动画之后,还包括:
    接收道具触发操作,所述道具触发操作用于触发所述指定道具向技能范围内释放指定技能效果;
    显示技能效果动画,所述技能效果动画是指所述指定道具在所述技能范围内释放所述指定技能效果的动画。
  12. 根据权利要求1至3任一所述的方法,其中,所述方法还包括:
    响应于所述指定道具的显示时长达到道具显示阈值,显示第一移动动画,所述第一移动动画是所述指定道具自动向所述第一虚拟对象移动的动画。
  13. 根据权利要求1至3任一所述的方法,其中,所述方法还包括:
    响应于所述指定道具的显示时长达到道具显示阈值,显示第二移动动画,所述第二移动动画是所述指定道具自动向所述第二虚拟对象移动的动画,所述指定道具对所述第二虚拟对象产生减益效果。
  14. 根据权利要求1至3任一所述的方法,其中,所述显示所述特效文本元素的转换掉落动画,包括:
    显示所述特效文本元素的收缩消失动画,所述收缩消失动画是指所述特效文本元素在与所述第二虚拟对象对应的指定位置处缩小后取消显示的动画;
    获取所述指定位置在所述虚拟场景对应的世界坐标系中的第一坐标,以作为所述指定道具的起始坐标;
    获取所述世界坐标系中与所述第一坐标对应的第二坐标,以作为所述指定道具的落地坐标;
    基于所述第一坐标和所述第二坐标获取所述指定道具的掉落路径数据;
    根据所述掉落路径数据显示所述指定道具掉落的转换掉落动画。
  15. 根据权利要求14所述的方法,其中,所述根据所述掉落路径数据显示所述指定道具掉落的转换掉落动画,包括:
    获取所述指定道具对应的纹理素材集;
    基于所述指定道具的观察视角,从所述纹理素材集中获取与所述观察视角对应的纹理素材图像;
    沿所述掉落路径数据对应的掉落轨迹显示所述纹理素材图像。
  16. 一种虚拟对象的互动装置,所述装置包括:
    显示模块,用于显示处于虚拟场景中的第一虚拟对象和第二虚拟对象;
    接收模块,用于响应于互动操作,控制所述第一虚拟对象与所述第二虚拟对象在所述虚拟场景中进行互动活动;
    所述显示模块,还用于在所述虚拟场景中显示特效文本元素,所述特效文本元素与所述第一虚拟对象和所述第二虚拟对象之间的互动结果相对应;
    所述显示模块,还用于显示所述特效文本元素的转换掉落动画,所述转换掉落动画是指所述特效文本元素转化为指定道具,并掉落至所述虚拟场景中的动画。
  17. 一种终端设备,所述终端设备包括处理器和存储器,所述存储器中存储有计算机程序,所述计算机程序由所述处理器加载并执行以实现如权利要求1至15任一所述的虚拟对象的互动方法。
  18. 一种计算机可读存储介质,所述存储介质中存储有计算机程序,所述计算机程序由处理器加载并执行以实现如权利要求1至15任一所述的虚拟对象的互动方法。
  19. 一种计算机程序产品,包括计算机程序,所述计算机程序被处理器执行时实现如权利要求1至15任一所述的虚拟对象的互动方法。
PCT/CN2023/085788 2022-05-31 2023-03-31 虚拟对象的互动方法、装置、设备、存储介质及程序产品 WO2023231557A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210611101.7 2022-05-31
CN202210611101.7A CN116983638A (zh) 2022-05-31 2022-05-31 虚拟对象的互动方法、装置、设备、存储介质及程序产品

Publications (2)

Publication Number Publication Date
WO2023231557A1 true WO2023231557A1 (zh) 2023-12-07
WO2023231557A9 WO2023231557A9 (zh) 2024-01-18

Family

ID=88532725

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/085788 WO2023231557A1 (zh) 2022-05-31 2023-03-31 虚拟对象的互动方法、装置、设备、存储介质及程序产品

Country Status (2)

Country Link
CN (1) CN116983638A (zh)
WO (1) WO2023231557A1 (zh)

Citations (6)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016168362A (ja) * 2016-05-25 2016-09-23 株式会社スクウェア・エニックス ゲーム装置及びゲームプログラム
JP2021010477A (ja) * 2019-07-04 2021-02-04 株式会社コーエーテクモゲームス ゲームプログラム、ゲーム処理方法、情報処理装置
CN110898433A (zh) * 2019-11-28 2020-03-24 腾讯科技(深圳)有限公司 虚拟对象控制方法、装置、电子设备及存储介质
CN112973117A (zh) * 2021-04-15 2021-06-18 腾讯科技(深圳)有限公司 虚拟对象的交互方法、奖励发放方法、装置、设备及介质
CN114272617A (zh) * 2021-11-18 2022-04-05 腾讯科技(深圳)有限公司 虚拟场景中的虚拟资源处理方法、装置、设备及存储介质
CN114225406A (zh) * 2021-12-02 2022-03-25 腾讯科技(深圳)有限公司 虚拟道具控制方法、装置、计算机设备及存储介质

Also Published As

Publication number Publication date
CN116983638A (zh) 2023-11-03
WO2023231557A9 (zh) 2024-01-18

Similar Documents

Publication Publication Date Title
KR102292931B1 (ko) 객체를 제어하는 방법 및 장치
CN112691377B (zh) 虚拟角色的控制方法、装置、电子设备及存储介质
US20220152496A1 (en) Virtual object control method and apparatus, terminal, and storage medium
US7843455B2 (en) Interactive animation
CN110548288B (zh) 虚拟对象的受击提示方法、装置、终端及存储介质
US20230336792A1 (en) Display method and apparatus for event livestreaming, device and storage medium
CN113398601B (zh) 信息发送方法、信息发送装置、计算机可读介质及设备
WO2022184128A1 (zh) 虚拟对象的技能释放方法、装置、设备及存储介质
CN111760282A (zh) 界面显示方法、装置、终端及存储介质
KR102432011B1 (ko) 게임 애플리케이션의 사용자 인터페이스 요소를 햅틱 피드백으로 트랜스크라이빙하기 위한 시스템 및 방법
WO2022242021A1 (zh) 多人在线对战程序中的消息发送方法、装置、终端及介质
CN110801629B (zh) 虚拟对象生命值提示图形的显示方法、装置、终端及介质
CN111905363A (zh) 虚拟对象的控制方法、装置、终端及存储介质
CN114377396A (zh) 一种游戏数据的处理方法、装置、电子设备及存储介质
KR101404635B1 (ko) 온라인 게임에서의 드래그 입력 처리 방법
WO2023024880A1 (zh) 虚拟场景中的表情显示方法、装置、设备以及介质
WO2023231557A1 (zh) 虚拟对象的互动方法、装置、设备、存储介质及程序产品
JP2024506920A (ja) 仮想対象の制御方法、装置、機器、及びプログラム
CN113952739A (zh) 游戏数据的处理方法、装置、电子设备及可读存储介质
KR101226765B1 (ko) 온라인 슈팅 게임 제공 방법 및 그 시스템
WO2024027304A1 (zh) 虚拟对象的控制方法、装置、设备、存储介质及程序产品
WO2024060879A1 (zh) 虚拟场景的效果显示方法、装置、设备、介质及程序产品
WO2024060914A1 (zh) 虚拟对象的生成方法、装置、设备、介质和程序产品
CN112843682B (zh) 数据同步方法、装置、设备及存储介质
WO2024067168A1 (zh) 基于社交场景的消息显示方法、装置、设备、介质及产品

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23814753

Country of ref document: EP

Kind code of ref document: A1