CN110141859B - Virtual object control method, device, terminal and storage medium - Google Patents


Info

Publication number
CN110141859B
CN110141859B (application CN201910453477.8A)
Authority
CN
China
Prior art keywords
action
virtual object
target
virtual
virtual objects
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910453477.8A
Other languages
Chinese (zh)
Other versions
CN110141859A (en)
Inventor
路鹏
许东松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910453477.8A priority Critical patent/CN110141859B/en
Publication of CN110141859A publication Critical patent/CN110141859A/en
Application granted granted Critical
Publication of CN110141859B publication Critical patent/CN110141859B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/55: Controlling game characters or game objects based on the game progress
    • A63F13/56: Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A63F13/80: Special adaptations for executing a specific game genre or game mode
    • A63F13/822: Strategy games; Role-playing games

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a virtual object control method, device, terminal, and storage medium, belonging to the technical field of networks. The method comprises the following steps: displaying a virtual scene interface; controlling a first virtual object controlled by a first user to execute a first interactive action according to a first action execution instruction of the first user, and sending the first action execution instruction to a server; and, according to a second action execution instruction returned by the server based on the first action execution instruction, controlling the target virtual object to execute a first response action of the first interaction action, and controlling at least one second virtual object to execute a second interaction action associated with the first response action, wherein the at least one second virtual object is a virtual object controlled by at least one second user in the target group. The technical scheme increases the association among virtual objects controlled by different users of the same team, so that these virtual objects are correlated, the interaction among teammates during a team game is reflected, and the gameplay of team games is enriched.

Description

Virtual object control method, device, terminal and storage medium
Technical Field
The present invention relates to the field of network technologies, and in particular, to a method, an apparatus, a terminal, and a storage medium for controlling a virtual object.
Background
A turn-based network game is a type of online game in which players act in turns, attacking the enemy in sequence. For example, suppose our side has three players A, B, and C: when it is player A's turn, player A can control A's hero to attack an enemy hero; after player A's action ends, player B acts and can control B's hero to attack an enemy hero; and so on, until one side loses. The problem with this game flow is that each player can only operate his or her own hero, the players fight separately, and the combat mode is monotonous.
Currently, to increase interaction between players in turn-based network games, the heroes manipulated by players are given different combat roles, embodied as different heroes or different skills. The heroes include physical-attack heroes, magic-attack heroes, control heroes, and the like; the skills include combat skills, support skills, control skills, and the like. Players are encouraged to select different heroes to form a team, and combat modes are enriched through the complementary characteristics of heroes in different combat roles.
The problem with the above scheme is that, although different combat roles for heroes are added, when players operate their heroes to fight, the actions of the heroes remain independent of one another, and the interaction that team play should provide is not reflected.
Disclosure of Invention
The embodiments of the invention provide a virtual object control method, device, terminal, and storage medium, to solve the problem that, in current turn-based online games, the actions of virtual objects are independent of one another and the interaction that team play should provide is not reflected. The technical scheme is as follows:
in one aspect, a virtual object control method is provided, and the method includes:
displaying a virtual scene interface, wherein the virtual scene interface comprises a target virtual object and a virtual object controlled by at least one user in a target group;
controlling a first virtual object controlled by a first user to execute a first interactive action according to a first action execution instruction of the first user in the target group, and sending the first action execution instruction to a server;
and according to a second action execution instruction returned by the server based on the first action execution instruction, controlling the target virtual object to execute a first response action of the first interaction action, and controlling at least one second virtual object to execute a second interaction action associated with the first response action, wherein the at least one second virtual object is a virtual object controlled by at least one second user in the target group.
In another aspect, there is provided a virtual object control apparatus, the apparatus including:
a display module, configured to display a virtual scene interface, wherein the virtual scene interface comprises a target virtual object and a virtual object controlled by at least one user in a target group;
the control module is used for controlling a first virtual object controlled by a first user to execute a first interactive action according to a first action execution instruction of the first user in the target group, and sending the first action execution instruction to a server;
the control module is further configured to control the target virtual object to execute a first response action of the first interaction action according to a second action execution instruction returned by the server based on the first action execution instruction, and control at least one second virtual object to execute a second interaction action associated with the first response action, where the at least one second virtual object is a virtual object controlled by at least one second user in the target group.
In a possible implementation manner, the second action execution instruction is generated under the condition that the first action execution instruction satisfies a first target condition.
In another possible implementation manner, the at least one second virtual object is a virtual object having a target action attribute, and the target action attribute is an action attribute associated with the second interactive action.
In another possible implementation manner, the determining of the at least one second virtual object includes:
acquiring the action attribute of a virtual object controlled by at least one user in the target group;
and when the action attribute of any virtual object comprises the target action attribute, taking the virtual object as the second virtual object.
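The selection step above can be sketched as a simple filter. This is a minimal illustration, not the patent's implementation; the field names and attribute values are hypothetical.

```python
# Hypothetical sketch: every virtual object controlled by a user in the target
# group whose action attributes include the target action attribute is taken
# as a "second virtual object". All names here are illustrative.
def select_second_objects(group_objects, target_attribute):
    return [obj for obj in group_objects
            if target_attribute in obj["action_attributes"]]

group = [
    {"name": "A", "action_attributes": {"float_combo"}},
    {"name": "B", "action_attributes": {"heal"}},
    {"name": "C", "action_attributes": {"float_combo", "heal"}},
]
print([o["name"] for o in select_second_objects(group, "float_combo")])  # ['A', 'C']
```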
In another possible implementation manner, a type of the second interactive action performed by each of the at least one second virtual object is different, and the type of the second interactive action corresponds to the target action attribute.
In another possible implementation, the at least one second virtual object further has at least one of the following characteristics:
the current anger value of the at least one second virtual object is greater than a first threshold;
the probability of execution of the second interactive action of the at least one second virtual object is greater than a second threshold;
the current life value of the at least one second virtual object is greater than a third threshold;
the time elapsed since the at least one second virtual object last performed the second interactive action is greater than a fourth threshold.
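The four optional characteristics above can be combined into a single eligibility check, as in the sketch below. All field names and threshold values are illustrative assumptions, not taken from the patent.

```python
# Hypothetical eligibility predicate combining the four characteristics:
# anger above a first threshold, execution probability above a second,
# life value above a third, and time since the last execution above a fourth.
def is_eligible(obj, now, t_anger=50, t_prob=0.3, t_life=0, t_cooldown=5.0):
    return (obj["anger"] > t_anger
            and obj["exec_probability"] > t_prob
            and obj["life"] > t_life
            and now - obj["last_exec_time"] > t_cooldown)

obj = {"anger": 80, "exec_probability": 0.5, "life": 120, "last_exec_time": 0.0}
print(is_eligible(obj, now=10.0))  # True
```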
In another possible implementation manner, the target virtual object comprises a plurality of virtual objects; the control module is further configured to control the at least one second virtual object to perform, on any one of the plurality of target virtual objects, a second interactive action associated with the first response action.
In another possible implementation manner, the control module is further configured to control the at least one second virtual object to simultaneously perform a second interactive action associated with the first response action.
In another possible implementation manner, the control module is further configured to control the at least one second virtual object to execute a second interaction action associated with the first response action according to a target precedence order.
In another possible implementation manner, the control module is further configured to control every second virtual object other than the last one in the target precedence order to execute the original version of the second interaction action, and to control the second virtual object located at the end of the target precedence order to execute the variant of the second interaction action.
In another possible implementation manner, the determining process of the target sequence includes:
and sorting the at least one second virtual object in descending order of anger value or of the execution probability of the second interaction action, to obtain the target sequence.
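The ordering rule above, combined with the original-versus-variant rule from the preceding implementation, can be sketched as follows. The names and the "original"/"variant" labels are illustrative assumptions.

```python
# Hypothetical sketch: sort the second virtual objects in descending order of
# anger value; all but the last execute the original version of the second
# interaction action, and the last executes the variant.
def plan_actions(second_objects):
    ordered = sorted(second_objects, key=lambda o: o["anger"], reverse=True)
    return [(o["name"], "variant" if i == len(ordered) - 1 else "original")
            for i, o in enumerate(ordered)]

objs = [{"name": "B", "anger": 40}, {"name": "C", "anger": 75}]
print(plan_actions(objs))  # [('C', 'original'), ('B', 'variant')]
```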
In another aspect, a terminal is provided, where the terminal includes a processor and a memory, where the memory stores at least one instruction, at least one program, a set of codes, or a set of instructions, and the instruction, the program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the operations performed in the virtual object control method in the embodiments of the present invention.
In another aspect, a storage medium is provided, in which at least one instruction, at least one program, a set of codes, or a set of instructions is stored, and the instruction, the program, the set of codes, or the set of instructions is loaded and executed by a processor to implement the operations performed in the virtual object control method according to the embodiment of the present invention.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
in the embodiment of the invention, a virtual scene interface is displayed; a first virtual object controlled by a first user is controlled to execute a first interactive action according to a first action execution instruction of the first user, and the first action execution instruction is sent to a server; and, according to a second action execution instruction returned by the server based on the first action execution instruction, the target virtual object is controlled to execute a first response action of the first interaction action, and at least one second virtual object is controlled to execute a second interaction action associated with the first response action, where the at least one second virtual object is a virtual object controlled by at least one second user in the target group. When any user controls his or her first virtual object to execute the first interactive action through a first action execution instruction, the second virtual objects controlled by other users of the same team can be controlled by the terminal to execute the second interactive action while the first interactive action is executed. This increases the connection among virtual objects controlled by different users of the same team, correlates the virtual objects controlled by different users, reflects the interaction among teammates during a team game, and enriches the gameplay of team games.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a schematic view of a virtual scene provided in accordance with an embodiment of the present invention;
FIG. 2 is a diagram of an implementation environment of a virtual object control method according to an embodiment of the present invention;
FIG. 3 is a flowchart of a method for controlling a virtual object according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a virtual scene interface provided in accordance with an embodiment of the present invention;
FIG. 5 is a diagram illustrating a virtual object performing an action according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of another virtual object performing an action according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of another virtual object performing an action according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of another virtual object performing an action according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of another virtual object performing an action according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of another virtual object performing an action according to an embodiment of the present invention;
FIG. 11 is a diagram of the relationship between a virtual object action and an action resource according to an embodiment of the present invention;
fig. 12 is a block diagram of a virtual object control apparatus according to an embodiment of the present invention;
fig. 13 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
The embodiments of the present invention mainly relate to online game scenes, simulated confrontation scenes, and the like; a turn-based online game is used here for explanation. In a turn-based online game, the users participating in a battle are divided into two camps, each camp has at least one user, and during the game the users of the two camps obtain the right to act in turn. Each user controls one or more virtual objects. When a user has the right to act, the user can control a virtual object to launch an attack, to recover, or to defend; when the user does not have the right to act, the user can watch the virtual objects controlled by other users attack, recover, defend, and so on.
The terminal can download a game configuration file of the turn-based network game, and the game configuration file can include an application program, interface display data, virtual scene data and the like of the turn-based network game, so that the user can call the game configuration file when logging in the turn-based network game on the terminal to render and display a game interface of the turn-based network game. The user may perform a touch operation on a terminal, and after the terminal detects the touch operation, the terminal may determine game data corresponding to the touch operation, and render and display the game data, where the game data may include virtual scene data, motion data of a virtual object in the virtual scene, model data of the virtual object in the virtual scene, and the like.
The virtual scene related by the invention can be used for simulating a three-dimensional virtual space and can also be used for simulating a two-dimensional virtual space. The virtual scene may be used to simulate an environment for fighting, for example, the virtual scene may include sky, land, sea, etc., the land may include environmental elements such as desert, city, etc., the user may control a virtual object to fight in the virtual scene, the virtual object may be an avatar in the virtual scene for representing the user, the avatar may be any form, such as human, animal, or robot, etc., and the present invention is not limited thereto. The virtual scene may include a plurality of virtual objects, each virtual object having its own shape and volume in the virtual scene and occupying a portion of the space in the virtual scene.
In the turn-based network game, the user can control a virtual object to fight in the virtual scene; the combat modes of the virtual object include normal attack, skill attack, calling a pet to assist the attack, prop attack, and the like. A virtual object that battles with the user's virtual object may be referred to as an enemy virtual object, and the enemy virtual object may be controlled by another user or by an AI (Artificial Intelligence). A virtual object can have various combat attributes, such as a life value, a magic value, an anger value, attack power, defense power, and abnormal-state resistance. The life value indicates whether the virtual object can continue to participate in combat; after the life value drops to zero, the virtual object can no longer participate. The magic value indicates whether the virtual object can release a skill: releasing a skill consumes a certain magic value, and a skill cannot be released when the magic value is zero or smaller than the magic value the skill requires. The anger value is used to release a particular type of skill and can be understood as another kind of magic value. The attack power indicates the damage the virtual object can cause; that value is deducted from the current life value of the enemy virtual object. The defense power indicates the damage the virtual object can ward off; it generally corresponds to attack power, and the difference between the enemy's attack power and the object's own defense power is generally the damage actually caused. Abnormal-state resistance indicates defense against abnormal states, which typically include floating, knockdown, dizziness, poisoning, inability to release skills, and the like.
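The combat attributes and the damage rule described above (damage is the difference between the attacker's attack power and the defender's defense power, deducted from the defender's life value) can be sketched as follows. This is a minimal illustration; the class and field names are assumptions, not from the patent.

```python
from dataclasses import dataclass

# Hypothetical container for the combat attributes named in the description.
@dataclass
class CombatStats:
    life: int     # zero means the object can no longer participate in combat
    magic: int    # consumed when releasing ordinary skills
    anger: int    # consumed when releasing special ("trick") skills
    attack: int
    defense: int

def apply_attack(attacker: CombatStats, defender: CombatStats) -> int:
    """Damage = attack power - defense power (never negative), deducted from life."""
    damage = max(attacker.attack - defender.defense, 0)
    defender.life = max(defender.life - damage, 0)
    return damage

a = CombatStats(life=100, magic=50, anger=0, attack=30, defense=5)
b = CombatStats(life=80, magic=40, anger=10, attack=20, defense=12)
print(apply_attack(a, b))  # 30 - 12 = 18, so b.life drops to 62
```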
The user can cultivate the virtual objects he or she controls, including raising the virtual object's level, raising skill levels, and configuring combat equipment, pets, and appearance decorations. The virtual object's level and combat equipment generally affect attributes of the virtual object such as attack power, defense power, and magic value; raising a skill's level increases the skill's attack power, but releasing the skill then consumes more magic or anger value.
For example, a user may control a virtual object to fight alone, or may form a team with other users so that their virtual objects fight together. When a team fights, the users in the team operate their virtual objects in turn. When any user obtains the right to act, the user can choose to flee or to fight; when the user chooses to fight, the user can control the virtual object to launch at least one of a normal attack, a skill attack, a prop attack, and a pet attack on an enemy virtual object. Launching a skill attack usually consumes a certain magic value and causes a certain amount of damage, reducing the enemy virtual object's life value. A virtual object accumulates anger value when attacking or being attacked by an enemy virtual object; when the anger value reaches a certain level, the user can control the virtual object to release a special skill, which may cause a large amount of damage or an abnormal-state effect. When the life values of all enemy virtual objects are zero, the battle is won; when the life values of the virtual object controlled by the user and of the virtual objects of the other users in the team are all zero, the battle is lost.
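The win/loss rule at the end of the paragraph above can be sketched as a small check. The function and field names are illustrative assumptions.

```python
# Hypothetical sketch of the outcome rule: victory when every enemy virtual
# object's life value is zero; defeat when every virtual object on our team
# (the user's and the teammates') is at zero; otherwise the battle continues.
def battle_outcome(my_team, enemy_team):
    if all(o["life"] == 0 for o in enemy_team):
        return "victory"
    if all(o["life"] == 0 for o in my_team):
        return "defeat"
    return "ongoing"

print(battle_outcome([{"life": 30}], [{"life": 0}, {"life": 0}]))  # victory
```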
When the terminal renders and displays the virtual scene, the terminal can display the virtual scene in full screen. While displaying the virtual scene on the current display interface, the terminal can display, in a first preset area of the interface, virtual object information of the virtual object controlled by the terminal user; display a plurality of interactive buttons in a second preset area; and display a message notification bar in a third preset area. The virtual object information may include an avatar, a level, a life value, a magic value, an anger value, an abnormal state, and the like of the virtual object. The interactive buttons may include a normal attack button, a magic attack button that consumes magic value, a trick attack button that consumes anger value, a pet attack button, a defense button, and the like. The message notification bar can be used to display battle messages (for example, that a certain virtual object used a certain skill to cause a certain amount of damage to a certain enemy virtual object), chat messages (at least one of chat messages sent by teammates and chat messages sent by the enemy side), system broadcast messages, and the like.
The first preset area may be a rectangular area in the upper right, lower right, upper left, or lower left corner of the current display interface; the second preset area may be a rectangular area on the right or left side of the interface; and the third preset area may be a rectangular area in the upper right, lower right, upper left, or lower left corner of the interface. Of course, the first, second, and third preset areas should not overlap one another. It should be noted that these preset areas may also be circular or have other shapes; the specific display position and shape of each preset area are not limited in the embodiments of the present invention. For example, as shown in fig. 1, the terminal displays a virtual scene on the current display interface: a virtual object may be displayed in the virtual scene, virtual object information is displayed in the upper right corner, a plurality of interactive buttons are displayed on the right side and the lower side, and a message notification bar is displayed in the lower left corner.
Fig. 2 is an implementation environment diagram of a virtual object control method according to an embodiment of the present invention, and referring to fig. 2, the virtual object control method includes a terminal 201 and a server 202.
The terminal 201 may be connected to the server 202 through a wireless network or a wired network. The terminal 201 may be at least one of a smartphone, a game console, a desktop computer, a tablet computer, an e-book reader, an MP3 player, an MP4 player, and a laptop portable computer. The terminal 201 installs and runs the game configuration file of the above turn-based network game; the game configuration file includes an action resource library of the virtual objects, and the action resource library stores the action resources with which the virtual objects can execute actions. The server 202 may include at least one of a single server, a plurality of servers, a cloud computing platform, and a virtualization center. The server 202 is used to provide background services for the turn-based network game.
After detecting a touch operation by the user, the terminal 201 sends an action execution instruction corresponding to the touch operation to the server 202, invokes the action resource corresponding to the action execution instruction from the action resource library, and controls the virtual object controlled by the user of the terminal 201 to execute the interactive action. After receiving the action execution instruction, the server 202 generates an action response instruction according to it, and the terminal 201 invokes an action resource according to the action response instruction to control the target virtual object to execute the response action.
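The round trip described above can be sketched as a toy simulation. The instruction names, resource paths, and the instruction-to-response mapping are all hypothetical; a real implementation would involve network messages rather than direct function calls.

```python
# Hypothetical action resource library: instruction name -> action resource.
ACTION_RESOURCES = {
    "float_attack": "res/float_attack.anim",     # first interactive action
    "float_response": "res/float_response.anim", # target object's response action
}

def server_handle(instruction):
    """Server 202: map an action execution instruction to an action response instruction."""
    return "float_response" if instruction == "float_attack" else None

def terminal_play(instruction):
    """Terminal 201: look up the action resource for an instruction and 'play' it."""
    return ACTION_RESOURCES[instruction]

print(terminal_play("float_attack"))   # terminal animates the attack
response = server_handle("float_attack")
if response is not None:
    print(terminal_play(response))     # terminal animates the target's response
```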
Fig. 3 is a flowchart of a virtual object control method according to an embodiment of the present invention. Taking an interaction process between a terminal and a server as an example for explanation, referring to fig. 3, the method includes the following steps:
301. and the terminal displays a virtual scene interface.
In the embodiment of the present invention, the terminal may display a virtual scene interface, where the virtual scene interface may be an interface for a virtual object currently displayed by the terminal to fight, and the virtual scene interface includes a target virtual object and a plurality of virtual objects controlled by at least one user in a target group.
The target virtual object may be a virtual object controlled by a user in another group, or multiple virtual objects controlled by multiple users in another group. The target group may be the group in which the current user of the terminal is located; the group includes multiple users, and each user controls one or more virtual objects. For example, in some game scenes the group may be called a team: the target group is our team, and the other group, which is hostile to the target group, may be called the enemy team. The current user of the terminal may be referred to as the first user, the virtual object controlled by the first user may be referred to as the first virtual object, the virtual objects controlled by the other users in our team may be referred to as friendly virtual objects, and a virtual object controlled by a user in the enemy team is an enemy virtual object, referred to in this embodiment as the target virtual object.
Fig. 4 is a schematic diagram of a virtual scene interface according to an embodiment of the present invention, and referring to fig. 4, an enemy team includes 3 virtual objects, which are a target virtual object 1, a target virtual object 2, and a target virtual object 3, respectively, and are located at a left end of a virtual scene; the team of our party includes 3 virtual objects, which are respectively a virtual object a, a virtual object B and a virtual object C, and are located at the right end of the virtual scene, wherein the virtual object a is a virtual object controlled by the current terminal user.
302. The terminal receives a first action execution instruction of a first user in the target group, controls a first virtual object controlled by the first user to execute a first interaction action, and sends the first action execution instruction to the server.
In this step 302, the first user is the current user of the terminal. On the terminal, different trigger operations by the user can trigger different instructions, which may be at least one of a general attack instruction, a skill attack instruction, a special attack instruction, a prop attack instruction, and a defense instruction.
In an embodiment of the invention, the first action execution instruction may be a trick attack instruction. The trigger option of the trick attack instruction may be displayed in the same function selection interface as the trigger options of other instructions, or of course in a different function selection interface. When a target attribute of the first user meets a preset condition, or the first user has redeemed the skill, the trigger option of the trick attack instruction can be provided to the first user; when the terminal detects a touch operation on the trigger option, the first action execution instruction is triggered. The execution target of the first action execution instruction may be a virtual object specified by the first user in the enemy team, or a random virtual object in the enemy team.
The first interactive action may be the interactive action corresponding to the first action execution instruction. According to the first action execution instruction, the terminal may call the corresponding action resource from an action resource library and control the first virtual object to execute the first interactive action in the virtual scene interface. For example, taking the first action execution instruction as a floating attack instruction, the terminal calls the floating attack action resource from the action resource library according to the floating attack instruction and controls the first virtual object to execute the floating attack action, as shown in fig. 5.
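The terminal-side dispatch described above can be sketched as a lookup into the action resource library keyed by instruction identifier. This is a minimal, non-authoritative illustration; all identifiers and names below are assumptions, not taken from the patent.

```python
# Hypothetical action resource library: instruction identifier -> action resource.
ACTION_RESOURCES = {
    "float_attack": "float_attack_animation",
    "normal_attack": "normal_attack_animation",
    "defense": "defense_animation",
}

def perform_interactive_action(instruction_id):
    """Call the action resource corresponding to the action execution
    instruction, so the terminal can play it for the first virtual object."""
    return ACTION_RESOURCES[instruction_id]
```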
303. After receiving the first action execution instruction sent by the terminal, the server determines whether the first action execution instruction satisfies a first target condition, and if so, executes step 304.
The determination of the first target condition is used to judge whether the target virtual object will execute a certain response action after being attacked by the first interactive action. The first target condition may be either of the following (1) and (2):
(1) The first action execution instruction is a target action execution instruction. That is, the server assumes by default that the first action execution instruction will inevitably cause the attacked virtual object to execute the first response action. The determination process may therefore be: judging whether the first action execution instruction is the target action execution instruction; if it is, the first action execution instruction satisfies the first target condition, and if it is not, the first action execution instruction does not satisfy the first target condition.
For example, after the server receives any attack instruction, if the attack instruction is determined to be a floating attack instruction, step 304 may be performed directly.
(2) The execution probability that the target virtual object executes the first response action based on the first action execution instruction is greater than a target probability. That is, the first action execution instruction does not necessarily trigger the target virtual object to execute the first response action. The determination process may therefore be: the server determines the execution probability of the first response action of the target virtual object according to an execution condition of the first response action, the damage value of the first action execution instruction, and a first attribute of the target virtual object. The execution condition of the first response action may be that the first attribute of the target virtual object has decreased to a target value.
For example, suppose the target probability is 50%: when the life value of the target virtual object has been reduced to less than half, the server takes the ratio of the damage value of the first action execution instruction to the current life value of the target virtual object as the execution probability of the first response action, and executes step 304 when this execution probability is greater than 50%.
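The example above can be expressed as a short sketch. This is a hedged illustration of condition (2); function and parameter names are assumptions, not from the patent.

```python
def first_response_probability(damage_value, current_life, max_life):
    """Execution probability of the first response action, per the example:
    zero until the target's life value drops below half of its maximum,
    then the ratio of the attack's damage value to the current life value."""
    if current_life >= max_life / 2:
        return 0.0
    return min(1.0, damage_value / current_life)

def satisfies_first_target_condition(damage_value, current_life, max_life,
                                     target_probability=0.5):
    """Condition (2): the execution probability exceeds the target probability."""
    prob = first_response_probability(damage_value, current_life, max_life)
    return prob > target_probability
```

For instance, a 30-damage attack against a target at 40/100 life yields an execution probability of 0.75, which exceeds the 50% target probability, so the server would proceed to step 304.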
It should be noted that, when the first action execution instruction does not satisfy the first target condition, the server may generate a third action execution instruction based on the first action execution instruction, where the third action execution instruction includes an action identifier of a third response action. The third response action is an action different from the first response action; for example, it may be a slam action, a defensive action, a fallback action, a knock-down action, or the like.
304. When the first action execution instruction meets a first target condition, the server determines at least one second virtual object, and the at least one second virtual object is a virtual object controlled by at least one second user in the target group.
The at least one second virtual object is used to execute a second interactive action associated with the first response action. It may consist of the virtual objects controlled by all second users in the target group, or by some of the second users in the target group. The specific determination may be random or rule-based; for example, a second virtual object may be a virtual object having a target action attribute, where the target action attribute is an attribute enabling the second interactive action to be executed.
The process of the server determining the at least one second virtual object may be: acquiring the action attributes of the virtual objects controlled by the users in the target group, and taking any virtual object having the target action attribute as a second virtual object. For example, suppose 5 users in the target group control 5 virtual objects and the target action attribute is the floating pursuit skill; one of the virtual objects is the first virtual object, and 3 of the other 4 virtual objects have the floating pursuit skill, so the server determines 3 second virtual objects.
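The floating-pursuit example above can be sketched as a simple filter over the team. Object identifiers and attribute names are illustrative assumptions, not from the patent.

```python
def select_second_virtual_objects(team, first_object_id, target_attribute):
    """Return the virtual objects in the target group, other than the first
    virtual object, whose action attributes include the target action attribute."""
    return [obj for obj in team
            if obj["id"] != first_object_id
            and target_attribute in obj["attributes"]]

# Five virtual objects controlled by five users; object "A" is the first
# virtual object, and three of the remaining four have the floating pursuit skill.
team = [
    {"id": "A", "attributes": {"float_pursuit"}},
    {"id": "B", "attributes": {"float_pursuit"}},
    {"id": "C", "attributes": {"float_pursuit"}},
    {"id": "D", "attributes": {"float_pursuit"}},
    {"id": "E", "attributes": set()},
]
second_objects = select_second_virtual_objects(team, "A", "float_pursuit")
```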
In an alternative implementation, a second virtual object is a virtual object that has the target action attribute and satisfies a second target condition. The second target condition is at least one of the following characteristics: the current anger value is greater than a first threshold; the execution probability of the second interactive action is greater than a second threshold; the current life value is greater than a third threshold; the time elapsed since the second interactive action was last executed is greater than a fourth threshold; and the like.
When the second target condition is that the current anger value is greater than the first threshold, the server may determine the at least one second virtual object as follows: when the action attributes of any virtual object include the target action attribute and the current anger value of the virtual object is greater than the first threshold, the virtual object is taken as a second virtual object. When the second target condition is that the execution probability of the second interactive action is greater than the second threshold, the server may determine the at least one second virtual object as follows: when the action attributes of any virtual object include the target action attribute and the execution probability of the second interactive action is greater than the second threshold, the virtual object is taken as a second virtual object. When the second target condition is that the current anger value is greater than the first threshold and the execution probability of the second interactive action is greater than the second threshold, the server may determine the at least one second virtual object as follows: when the action attributes of any virtual object include the target action attribute, the current anger value of the virtual object is greater than the first threshold, and the execution probability of the second interactive action is greater than the second threshold, the virtual object is taken as a second virtual object.
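The three variants above differ only in which thresholds are applied, so they can be sketched as a single predicate in which an unused threshold is simply omitted. Field names and the `None` convention are assumptions, not from the patent.

```python
def meets_second_target_condition(obj, target_attribute,
                                  anger_threshold=None,
                                  probability_threshold=None):
    """True when the virtual object has the target action attribute and
    passes every configured threshold (a threshold of None is not applied)."""
    if target_attribute not in obj["attributes"]:
        return False
    if anger_threshold is not None and obj["anger"] <= anger_threshold:
        return False
    if probability_threshold is not None and obj["probability"] <= probability_threshold:
        return False
    return True

# Example candidate: has the floating pursuit skill, anger 80, probability 0.6.
example = {"attributes": {"float_pursuit"}, "anger": 80, "probability": 0.6}
```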
It should be noted that the server may further determine the number of second virtual objects according to a third target condition, where the third target condition may be at least one of the life value of the target virtual object and the damage value that a second virtual object can cause. For example, taking the third target condition as the life value of the target virtual object: when the life value of the target virtual object is very low, any single second virtual object can reduce it to zero, so the server may determine only one second virtual object.
It should be further noted that, when the number of determined second virtual objects is not less than two, the server may further determine a target sequence of the at least one second virtual object, where the target sequence is the order in which the at least one second virtual object executes the second interactive action. The server may randomly sort the determined second virtual objects to obtain the target sequence; it may also sort them according to their characteristics, that is, from largest to smallest by current anger value, or from largest to smallest by execution probability of the second interactive action; it may also sort them in the order in which they were determined, which is not specifically limited by this disclosure. The server may store the identifiers of the sorted second virtual objects in an execution order list. Of course, the server may also determine no sequence for the at least one second virtual object, in which case the second virtual objects execute the second interactive action simultaneously.
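The characteristic-based sorting above can be sketched as follows. This is a hedged illustration; the characteristic key is a parameter rather than anything the patent fixes.

```python
def target_sequence(second_objects, key="anger"):
    """Sort the second virtual objects from largest to smallest by the chosen
    characteristic (e.g. current anger value or execution probability of the
    second interactive action) and return the execution order list of identifiers."""
    ranked = sorted(second_objects, key=lambda obj: obj[key], reverse=True)
    return [obj["id"] for obj in ranked]

members = [
    {"id": "B", "anger": 60, "probability": 0.9},
    {"id": "C", "anger": 85, "probability": 0.4},
    {"id": "D", "anger": 70, "probability": 0.7},
]
```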
305. The server determines a second interactive action performed by the at least one second virtual object.
According to the determined at least one second virtual object, the server determines the second interactive action executed by each second virtual object, where the second interactive action is an action associated with the first response action.
The second interactive actions performed by the second virtual objects may be the same or different. When the second interactive actions executed by the second virtual objects are the same, the server acquires the action identifier of that second interactive action; when they are different, the server may determine the type of second interactive action for each second virtual object according to its target action attribute, obtain the action identifier corresponding to that type, and associate the virtual object with the action identifier. Different types of second virtual objects may have different target action attributes, and the types may be divided according to the professions of the second virtual objects.
For example, take the case where the second interactive actions performed by the second virtual objects are different: the second virtual objects are three different types of hero, namely a physical-attack hero, a magical-attack hero, and a control hero, whose target action attributes are respectively a physical-attack floating pursuit, a magical-attack floating pursuit, and a control floating pursuit. If the three second virtual objects determined by the server are two physical-attack heroes and one magical-attack hero, the server acquires the action identifier corresponding to the physical-attack floating pursuit and the action identifier corresponding to the magical-attack floating pursuit.
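The profession-based lookup in this example can be sketched as a small mapping. The mapping and all identifiers are hypothetical, introduced only for illustration.

```python
# Hypothetical mapping from a second virtual object's profession type to the
# action identifier of its variant of the floating pursuit action.
ACTION_ID_BY_TYPE = {
    "physical": "float_pursuit_physical",
    "magical": "float_pursuit_magical",
    "control": "float_pursuit_control",
}

def assign_action_identifiers(second_objects):
    """Associate each second virtual object with the action identifier
    corresponding to its target action attribute (here keyed by profession type)."""
    return {obj["id"]: ACTION_ID_BY_TYPE[obj["type"]] for obj in second_objects}

# Two physical-attack heroes and one magical-attack hero, as in the example.
heroes = [
    {"id": "B", "type": "physical"},
    {"id": "C", "type": "physical"},
    {"id": "D", "type": "magical"},
]
```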
It should be noted that the server may further, according to the target sequence, associate the action identifier of the original edition of the second interactive action with each second virtual object other than the last in the target sequence, and associate the action identifier of the deformation of the second interactive action with the second virtual object at the end of the target sequence. The deformation of the second interactive action may be a final pursuit action that knocks the target virtual object from the air to the ground. The deformation of the second interactive action makes the transitions between the virtual objects' actions during combat smoother.
306. The server generates a second action execution instruction and sends it to the terminal.
Based on steps 303 to 305, the server generates a second action execution instruction, which may include the action identifier of the first response action, the at least one second virtual object, the target sequence, the action identifiers of the second interactive actions of the at least one second virtual object, and the like, and returns the second action execution instruction to the terminal. Since the content of the second action execution instruction is determined by the server according to the first action execution instruction, the server can be considered to generate the second action execution instruction based on the first action execution instruction.
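Assembling the instruction can be sketched as a simple payload builder. All field names and identifiers below are assumptions, not taken from the patent.

```python
def build_second_action_instruction(first_response_action_id,
                                    sequence, second_action_ids):
    """Assemble the second action execution instruction returned to the
    terminal: the first response action's identifier, the second virtual
    objects' target sequence, and their second interactive action identifiers."""
    return {
        "first_response_action": first_response_action_id,
        "target_sequence": list(sequence),
        "second_actions": dict(second_action_ids),
    }

instruction = build_second_action_instruction(
    "knocked_airborne",
    ["B", "C"],
    {"B": "float_pursuit", "C": "final_pursuit"},
)
```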
It should be noted that steps 303 to 306 may be executed on a server or on a terminal; the embodiment of the present invention is described taking execution on a server as an example. That is, the server performs the calculation according to the first action execution instruction and generates the second action execution instruction, which reduces the calculation burden on the terminal and lowers the requirements on it, so that the network game corresponding to the virtual object control method can run on a terminal with a lower configuration. Of course, the calculation performed by the server may instead be performed by the terminal.
307. The terminal receives the second action execution instruction, controls the target virtual object to execute the first response action of the first interactive action, and controls the at least one second virtual object to execute the second interactive action associated with the first response action.
The terminal may receive the second action execution instruction returned by the server based on the first action execution instruction, and parse it to obtain the action identifier of the first response action, the at least one second virtual object, the target sequence, the action identifiers of the second interactive actions of the at least one second virtual object, and the like. According to the action identifier of the first response action, the terminal acquires the action resource of the first response action from the action resource library and controls the target virtual object to execute the first response action; according to the at least one second virtual object, the target sequence, and the action identifiers of the second interactive actions, the terminal acquires the action resource of the second interactive action of each second virtual object from the action resource library and controls each second virtual object to execute its second interactive action.
In an optional implementation, according to the target sequence, the terminal may acquire from the action resource library the action resource of the original edition of the second interactive action for each second virtual object other than the last in the target sequence, and the action resource of the deformation of the second interactive action for the second virtual object at the end of the target sequence; it then controls the former to execute the original edition of the second interactive action and the latter to execute the deformation of the second interactive action. The terminal may obtain the action resources of the original edition and the deformation according to the action identifiers carried in the second action execution instruction; when the server does not provide the action identifiers of the original edition and the deformation, the terminal may itself replace the action executed by the second virtual object at the end of the target sequence with the deformation of the second interactive action.
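The assignment described above, where every second virtual object before the last plays the original edition and the last plays the deformation, can be sketched as follows. Identifiers and action names are illustrative assumptions.

```python
def resolve_second_actions(sequence, original_action, deformation_action):
    """Given the target sequence of second virtual object identifiers,
    assign the original edition of the second interactive action to every
    object before the last, and the deformation action to the last one."""
    actions = {obj_id: original_action for obj_id in sequence[:-1]}
    if sequence:
        actions[sequence[-1]] = deformation_action
    return actions
```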
For example, the terminal controls the target virtual object 1 to execute an action of being knocked into the air, as shown in fig. 6. The target sequence is arranged in the order of second virtual object B followed by second virtual object C: the second virtual object B executes the floating pursuit action, and the second virtual object C executes the final pursuit action, as shown in fig. 7 and 8.
It should be noted that the target virtual object may be one virtual object or multiple virtual objects; when it is multiple virtual objects, the terminal may control the at least one second virtual object to execute the second interactive action associated with the first response action on any one of the target virtual objects. After the terminal controls the second virtual object at the end of the target sequence to execute the deformation of the second interactive action, the terminal may acquire from the action resource library the action resource of the response action to that deformation, take this response action as a second response action, and control the target virtual object to execute it. The term "second response action" is used only to distinguish it from the first response action and carries no other meaning.
For example, the second response action may be falling from the air to the ground and then getting up: after the terminal controls the second virtual object to execute the final pursuit action, it controls the target virtual object to fall from the air to the ground and then get up, as shown in fig. 9.
It should be noted that the terminal may further receive the third action execution instruction and, according to it, control the target virtual object to execute the third response action of the first interactive action: the terminal acquires the action resource of the third response action from the action resource library and controls the target virtual object to execute the third response action.
For example, after the target virtual object is hit by the floating attack initiated by the first virtual object, the terminal controls the target virtual object to execute a defense action, as shown in fig. 10.
In the above steps, the process by which the terminal invokes action resources in the action resource library and controls the virtual objects to execute actions may be illustrated by fig. 11, which is a diagram of the relationship between virtual object actions and action resources according to an embodiment of the present invention. As shown in fig. 11, the left side shows the actions the terminal controls the virtual objects to execute, and the right side shows the action resources acquired by the terminal.
In the embodiment of the invention, a virtual scene interface is displayed; according to a first action execution instruction of a first user, a first virtual object controlled by the first user is controlled to execute a first interactive action, and the first action execution instruction is sent to a server. According to a second action execution instruction returned by the server based on the first action execution instruction, the target virtual object is controlled to execute a first response action of the first interactive action, and at least one second virtual object is controlled to execute a second interactive action associated with the first response action, where the at least one second virtual object is a virtual object controlled by at least one second user in the target group. Thus, when any user controls his or her first virtual object to execute a first interactive action through a first action execution instruction, the second virtual objects controlled by other users of the same team can, under terminal control, execute a second interactive action at the time the first interactive action is executed. This increases the connection between virtual objects controlled by different users of the same team, correlates those virtual objects, reflects the interaction among teammates during a team game, and enriches the playing methods of team games.
Fig. 12 is a block diagram of a virtual object control apparatus according to an embodiment of the present invention. The apparatus is used for executing the steps executed by the virtual object control method, and referring to fig. 12, the apparatus includes:
a display module 1201, configured to display a virtual scene interface, where the virtual scene interface includes a target virtual object and a virtual object controlled by at least one user in a target group;
the control module 1202 is configured to control, according to a first action execution instruction of a first user in the target group, a first virtual object controlled by the first user to execute a first interaction action, and send the first action execution instruction to the server;
the control module 1202 is configured to control the target virtual object to execute a first response action of the first interaction action according to a second action execution instruction returned by the server based on the first action execution instruction, and control the at least one second virtual object to execute a second interaction action associated with the first response action, where the at least one second virtual object is a virtual object controlled by at least one second user in the target group.
In one possible implementation, the second action execution instruction is generated on condition that the first action execution instruction satisfies the first target condition.
In another possible implementation manner, the at least one second virtual object is a virtual object with a target action attribute, and the target action attribute is an action attribute associated with the second interactive action.
In another possible implementation manner, the determining of the at least one second virtual object includes:
acquiring the action attribute of a virtual object controlled by at least one user in a target group;
and when the action attribute of any virtual object comprises the target action attribute, the virtual object is taken as a second virtual object.
In another possible implementation manner, the type of the second interactive action performed by each of the at least one second virtual object is different, and the type of the second interactive action corresponds to the target action attribute.
In another possible implementation, the at least one second virtual object further has at least one of the following characteristics:
the current anger value of the at least one second virtual object is greater than a first threshold;
the execution probability of the second interactive action of the at least one second virtual object is greater than a second threshold;
the current life value of at least one second virtual object is greater than a third threshold value;
the time elapsed since the at least one second virtual object last executed the second interactive action is greater than a fourth threshold.
In another possible implementation, the target virtual object is a plurality of virtual objects; the control module 1202 is further configured to control the at least one second virtual object to perform a second interactive action associated with the first response action on any virtual object in the target virtual objects.
In another possible implementation manner, the control module 1202 is further configured to control the at least one second virtual object to simultaneously perform a second interactive action associated with the first response action.
In another possible implementation manner, the control module 1202 is further configured to control the at least one second virtual object to execute the second interactive action associated with the first response action according to the target sequence.
In another possible implementation manner, the control module 1202 is further configured to control each second virtual object other than the last in the target sequence to execute the original edition of the second interactive action, and to control the second virtual object at the end of the target sequence to execute the deformation of the second interactive action.
In another possible implementation manner, the determining process of the target sequence includes:
sorting the at least one second virtual object from largest to smallest by anger value or by execution probability of the second interactive action to obtain the target sequence.
In the embodiment of the invention, a virtual scene interface is displayed; according to a first action execution instruction of a first user, a first virtual object controlled by the first user is controlled to execute a first interactive action, and the first action execution instruction is sent to a server. According to a second action execution instruction returned by the server based on the first action execution instruction, the target virtual object is controlled to execute a first response action of the first interactive action, and at least one second virtual object is controlled to execute a second interactive action associated with the first response action, where the at least one second virtual object is a virtual object controlled by at least one second user in the target group. Thus, when any user controls his or her first virtual object to execute a first interactive action through a first action execution instruction, the second virtual objects controlled by other users of the same team can, under terminal control, execute a second interactive action at the time the first interactive action is executed. This increases the connection between virtual objects controlled by different users of the same team, correlates those virtual objects, reflects the interaction among teammates during a team game, and enriches the playing methods of team games.
It should be noted that: in the above embodiment, when the device runs an application program, only the division of the functional modules is described as an example, and in practical applications, the function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus and method embodiments provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments for details, which are not described herein again.
Fig. 13 shows a block diagram of a terminal 1300 according to an embodiment of the present invention. The terminal 1300 may be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 1300 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, terminal 1300 includes: a processor 1301 and a memory 1302.
Processor 1301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 1301 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1301 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also referred to as a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing content that the display screen needs to display. In some embodiments, processor 1301 may further include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
Memory 1302 may include one or more storage media, which may be non-transitory. The memory 1302 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory storage medium in memory 1302 is used to store at least one instruction for execution by processor 1301 to implement the methods provided by the method embodiments herein.
In some embodiments, terminal 1300 may further optionally include: a peripheral interface 1303 and at least one peripheral. Processor 1301, memory 1302, and peripheral interface 1303 may be connected by a bus or signal line. Each peripheral device may be connected to the peripheral device interface 1303 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1304, touch display 1305, camera 1306, audio circuitry 1307, positioning component 1308, and power supply 1309.
Peripheral interface 1303 may be used to connect at least one peripheral associated with I/O (Input/Output) to processor 1301 and memory 1302. In some embodiments, processor 1301, memory 1302, and peripheral interface 1303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1301, the memory 1302, and the peripheral device interface 1303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 1304 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 1304 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1304 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1304 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 1304 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1304 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1305 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1305 is a touch display screen, the display screen 1305 also has the ability to capture touch signals on or above its surface. The touch signal may be input to the processor 1301 as a control signal for processing. At this point, the display 1305 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 1305, providing the front panel of terminal 1300; in other embodiments, there may be at least two displays 1305, respectively disposed on different surfaces of terminal 1300 or in a folded design; in still other embodiments, display 1305 may be a flexible display disposed on a curved or folded surface of terminal 1300. Further, the display 1305 may even be arranged in a non-rectangular irregular shape, i.e., a shaped screen. The display 1305 may be made of a material such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 1306 is used to capture images or video. Optionally, the camera assembly 1306 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal, and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each of which is any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fused shooting functions. In some embodiments, the camera assembly 1306 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
The audio circuit 1307 may include a microphone and a speaker. The microphone is used to collect sound waves of the user and the environment, convert the sound waves into electrical signals, and input the electrical signals to the processor 1301 for processing, or to the radio frequency circuit 1304 to realize voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of the terminal 1300. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 1301 or the radio frequency circuit 1304 into sound waves. The speaker may be a conventional membrane speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert an electrical signal into sound waves audible to humans, or into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 1307 may also include a headphone jack.
The positioning component 1308 is used to determine the current geographic position of the terminal 1300 to implement navigation or LBS (Location Based Service). The positioning component 1308 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 1309 is used to supply power to the various components in the terminal 1300. The power supply 1309 may use alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 1309 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast-charging technology.
In some embodiments, terminal 1300 also includes one or more sensors 1310. The one or more sensors 1310 include, but are not limited to: acceleration sensor 1311, gyro sensor 1312, pressure sensor 1313, fingerprint sensor 1314, optical sensor 1315, and proximity sensor 1316.
The acceleration sensor 1311 can detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 1300. For example, the acceleration sensor 1311 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 1301 may control the touch display screen 1305 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1311. The acceleration sensor 1311 may also be used for acquisition of motion data of a game or a user.
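As an illustrative (non-normative) sketch of the orientation decision described above, the processor could compare the gravity components reported on the device's axes; the function name and the simple largest-component rule below are assumptions, not the patent's implementation:

```python
def orientation_from_gravity(gx: float, gy: float) -> str:
    """Infer screen orientation from the gravitational-acceleration
    components (m/s^2) on the terminal's x and y axes.
    A hypothetical helper: when gravity acts mostly along the y axis,
    the device is held upright (portrait); otherwise landscape."""
    return "portrait" if abs(gy) >= abs(gx) else "landscape"
```

A real implementation would typically filter the raw accelerometer signal and add hysteresis so the UI does not flip at the 45-degree boundary.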
The gyro sensor 1312 may detect the body direction and the rotation angle of the terminal 1300, and the gyro sensor 1312 may cooperate with the acceleration sensor 1311 to acquire a 3D motion of the user with respect to the terminal 1300. Processor 1301, based on the data collected by gyroscope sensor 1312, may perform the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 1313 may be disposed on a side bezel of the terminal 1300 and/or on an underlying layer of the touch display screen 1305. When the pressure sensor 1313 is disposed on the side bezel of the terminal 1300, it may detect a holding signal of the user on the terminal 1300, and the processor 1301 performs left-right hand recognition or a shortcut operation according to the holding signal acquired by the pressure sensor 1313. When the pressure sensor 1313 is disposed at a lower layer of the touch display screen 1305, the processor 1301 controls an operability control on the UI according to a pressure operation of the user on the touch display screen 1305. The operability control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.
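One way the left-right hand recognition mentioned above could work is by comparing pressure readings from the two side bezels. The heuristic below is purely hypothetical (the patent does not specify the rule, and it assumes the palm side exerts the greater pressure):

```python
def holding_hand(left_pressure: float, right_pressure: float,
                 threshold: float = 0.2) -> str:
    """Guess the holding hand from normalized (0..1) side-bezel
    pressure readings. Assumption: the palm side presses harder.
    Returns 'unknown' when the readings are too close to decide."""
    if abs(left_pressure - right_pressure) < threshold:
        return "unknown"
    return "left" if left_pressure > right_pressure else "right"
```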
The fingerprint sensor 1314 is used to collect a fingerprint of the user; the processor 1301 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 1314, or the fingerprint sensor 1314 itself identifies the identity of the user from the collected fingerprint. When the identity of the user is identified as a trusted identity, the processor 1301 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1314 may be disposed on the front, back, or side of the terminal 1300. When a physical button or vendor logo is provided on the terminal 1300, the fingerprint sensor 1314 may be integrated with the physical button or vendor logo.
The optical sensor 1315 is used to collect the ambient light intensity. In one embodiment, the processor 1301 can control the display brightness of the touch display screen 1305 according to the intensity of the ambient light collected by the optical sensor 1315. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1305 is increased; when the ambient light intensity is low, the display brightness of the touch display 1305 is turned down. In another embodiment, the processor 1301 can also dynamically adjust the shooting parameters of the camera assembly 1306 according to the ambient light intensity collected by the optical sensor 1315.
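The brightness adjustment described above (brighter surroundings, higher backlight) can be sketched as a simple monotone mapping from ambient light to a brightness level. The linear mapping and the threshold values below are illustrative assumptions, not the patent's method:

```python
def display_brightness(ambient_lux: float,
                       min_level: int = 10,
                       max_level: int = 255,
                       max_lux: float = 1000.0) -> int:
    """Map ambient light intensity (lux) to a display brightness level.
    Clamps the input to [0, max_lux] and interpolates linearly between
    min_level and max_level, so brightness rises with ambient light."""
    ratio = min(max(ambient_lux / max_lux, 0.0), 1.0)
    return round(min_level + ratio * (max_level - min_level))
```

In practice the mapping is usually nonlinear (e.g. logarithmic, matching human brightness perception) and smoothed over time to avoid visible jumps.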
The proximity sensor 1316, also known as a distance sensor, is typically disposed on the front panel of the terminal 1300. The proximity sensor 1316 is used to measure the distance between the user and the front face of the terminal 1300. In one embodiment, when the proximity sensor 1316 detects that the distance between the user and the front face of the terminal 1300 gradually decreases, the processor 1301 controls the touch display screen 1305 to switch from the screen-on state to the screen-off state; when the proximity sensor 1316 detects that the distance between the user and the front face of the terminal 1300 gradually increases, the processor 1301 controls the touch display screen 1305 to switch from the screen-off state to the screen-on state.
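The screen-state switching above can be sketched as a small state machine with a hysteresis band, so the screen does not flicker when the measured distance hovers near a single threshold. The specific distances are assumptions for illustration:

```python
def next_screen_state(current: str, distance_cm: float,
                      near_cm: float = 3.0, far_cm: float = 5.0) -> str:
    """Decide the next screen state ('on'/'off') from a proximity
    reading. Uses two thresholds (hysteresis): the screen turns off
    only when the user is closer than near_cm, and back on only when
    farther than far_cm."""
    if current == "on" and distance_cm <= near_cm:
        return "off"  # terminal brought close, e.g. raised to the ear
    if current == "off" and distance_cm >= far_cm:
        return "on"   # terminal moved away again
    return current    # inside the hysteresis band: keep current state
```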
Those skilled in the art will appreciate that the configuration shown in fig. 13 is not intended to be limiting with respect to terminal 1300 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
The embodiment of the present invention further provides a storage medium applied to a terminal. The storage medium stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the operations performed by the terminal in the method of the foregoing embodiments.
It will be understood by those skilled in the art that all or part of the steps of implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a storage medium, and the storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (13)

1. A virtual object control method, characterized in that the method comprises:
displaying a virtual scene interface, wherein the virtual scene interface comprises a target virtual object and virtual objects controlled by at least two users in a target group, and the at least two users acquire action permission in turn;
controlling a first virtual object controlled by a first user to execute a first interactive action according to a first action execution instruction of the first user in the target group, and sending the first action execution instruction to a server, wherein the first action execution instruction is triggered by the first user when the first user has action authority;
and according to a second action execution instruction returned by the server based on the first action execution instruction, controlling the target virtual object to execute a first response action of the first interaction action, and controlling at least two second virtual objects to simultaneously execute second interaction actions associated with the first response action, or controlling the at least two second virtual objects to execute the second interaction actions associated with the first response action according to a target sequence, wherein the at least two second virtual objects are virtual objects controlled by at least one second user in the target group, and the second action execution instruction is returned by the server when the first user has the action authority.
2. The method of claim 1, wherein the second action execution instruction is generated conditional on the first action execution instruction satisfying a first target condition.
3. The method of claim 1, wherein the at least two second virtual objects are virtual objects having a target action attribute, and wherein the target action attribute is an action attribute associated with the second interactive action.
4. The method according to claim 3, wherein the determining of the at least two second virtual objects comprises:
acquiring action attributes of virtual objects controlled by at least two users in the target group;
and when the action attribute of any virtual object comprises the target action attribute, taking the virtual object as the second virtual object.
5. The method of claim 3, wherein a type of the second interactive action performed by each of the at least two second virtual objects is different, and wherein the type of the second interactive action corresponds to the target action attribute.
6. The method of claim 3, wherein the at least two second virtual objects further have at least one of the following characteristics:
the current anger value of the at least two second virtual objects is greater than a first threshold;
the probability of execution of the second interactive action of the at least two second virtual objects is greater than a second threshold;
the current life values of the at least two second virtual objects are greater than a third threshold;
the time elapsed since the at least two second virtual objects last performed the second interactive action is greater than a fourth threshold.
7. The method of claim 1, wherein the target virtual object is a plurality of virtual objects;
the controlling of the at least two second virtual objects to simultaneously perform a second interactive action associated with the first responsive action includes:
controlling the at least two second virtual objects to simultaneously perform a second interactive action associated with the first response action on any one of the target virtual objects; or,
controlling the at least two second virtual objects to simultaneously perform a second interactive action associated with the first response action on a different virtual object of the target virtual objects.
8. The method of claim 1, wherein the target virtual object is a plurality of virtual objects;
the controlling of the at least two second virtual objects to simultaneously perform a second interactive action associated with the first responsive action includes:
controlling the at least two second virtual objects to simultaneously execute a second interaction action associated with the first response action on any virtual object in the target virtual objects according to the target sequence; or,
and controlling the at least two second virtual objects to execute second interaction actions related to the first response actions on different virtual objects in the target virtual objects according to the target sequence.
9. The method of claim 1, wherein the controlling the at least two second virtual objects to perform the second interactive action associated with the first response action in a target precedence order comprises:
controlling the second virtual objects, among the at least two second virtual objects, other than the second virtual object located at the end of the target sequence, to execute an original version of the second interaction action;
and controlling the second virtual object located at the end of the target sequence among the at least two second virtual objects to execute a deformed version of the second interaction action.
10. The method of claim 1, wherein the determining the target precedence comprises:
and sorting the at least two second virtual objects in descending order of their anger values or of the execution probabilities of the second interaction action, to obtain the target sequence.
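Taken together, claims 4 and 10 describe a filter-then-sort procedure: select the virtual objects whose action attributes include the target action attribute, then order them by anger value to obtain the target sequence. As a non-normative illustration (the field names `action_attributes` and `anger` are assumptions, not the patent's terminology), this might be sketched as:

```python
from dataclasses import dataclass
from typing import List, Set

@dataclass
class VirtualObject:
    name: str
    action_attributes: Set[str]  # action attributes this object has
    anger: float = 0.0           # current anger value

def select_second_objects(group: List[VirtualObject],
                          target_attribute: str) -> List[VirtualObject]:
    """Claim 4 (sketch): keep the virtual objects whose action
    attributes include the target action attribute associated with
    the second interactive action."""
    return [v for v in group if target_attribute in v.action_attributes]

def target_order(objects: List[VirtualObject]) -> List[VirtualObject]:
    """Claim 10 (sketch): sort the selected objects by anger value,
    from largest to smallest, to obtain the target sequence."""
    return sorted(objects, key=lambda v: v.anger, reverse=True)
```

Under claim 9, the objects before the end of the resulting sequence would then execute the original version of the second interaction action, and the last object the deformed version.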
11. An apparatus for controlling a virtual object, the apparatus comprising:
the display module is used for displaying a virtual scene interface, wherein the virtual scene interface comprises a target virtual object and virtual objects controlled by at least two users in a target group, and the at least two users acquire action permission in turn;
the control module is used for controlling a first virtual object controlled by a first user to execute a first interactive action according to a first action execution instruction of the first user in the target group, and sending the first action execution instruction to a server, wherein the first action execution instruction is triggered by the first user when the first user has action authority;
the control module is further configured to: according to a second action execution instruction returned by the server based on the first action execution instruction, control the target virtual object to execute a first response action of the first interaction action, and control at least two second virtual objects to simultaneously execute a second interaction action associated with the first response action, or control the at least two second virtual objects to execute the second interaction action associated with the first response action according to a target sequence, wherein the at least two second virtual objects are virtual objects controlled by at least one second user in the target group, and the second action execution instruction is returned by the server when the first user has the action authority.
12. A terminal, characterized in that it comprises a processor and a memory in which at least one instruction, at least one program, set of codes or set of instructions is stored, which is loaded and executed by the processor to implement the operations performed in the virtual object control method according to any one of claims 1 to 10.
13. A storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to carry out the operations performed in the virtual object control method according to any one of claims 1 to 10.
CN201910453477.8A 2019-05-28 2019-05-28 Virtual object control method, device, terminal and storage medium Active CN110141859B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910453477.8A CN110141859B (en) 2019-05-28 2019-05-28 Virtual object control method, device, terminal and storage medium


Publications (2)

Publication Number Publication Date
CN110141859A CN110141859A (en) 2019-08-20
CN110141859B true CN110141859B (en) 2022-02-01

Family

ID=67593610

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910453477.8A Active CN110141859B (en) 2019-05-28 2019-05-28 Virtual object control method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN110141859B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110585695B (en) * 2019-09-12 2020-09-29 腾讯科技(深圳)有限公司 Method, apparatus, device and medium for using near-war property in virtual environment
CN111494955B (en) * 2020-04-20 2023-09-19 上海米哈游天命科技有限公司 Character interaction method, device, server and medium based on game
CN111672108A (en) * 2020-05-29 2020-09-18 腾讯科技(深圳)有限公司 Virtual object display method, device, terminal and storage medium
CN111686449A (en) * 2020-06-11 2020-09-22 腾讯科技(深圳)有限公司 Virtual object control method, device, terminal and storage medium
CN111744185B (en) * 2020-07-29 2023-08-25 腾讯科技(深圳)有限公司 Virtual object control method, device, computer equipment and storage medium
CN112948240A (en) * 2021-02-04 2021-06-11 网易(杭州)网络有限公司 Game regression testing method, device, equipment and storage medium
CN113144617B (en) * 2021-05-13 2023-04-11 腾讯科技(深圳)有限公司 Control method, device and equipment of virtual object and computer readable storage medium
CN113332711B (en) * 2021-06-30 2023-07-18 北京字跳网络技术有限公司 Role interaction method, terminal, equipment and storage medium
CN116983649A (en) * 2022-05-31 2023-11-03 腾讯科技(成都)有限公司 Virtual object control method, device, equipment and storage medium
CN115314749B (en) * 2022-06-15 2024-03-22 网易(杭州)网络有限公司 Response method and device of interaction information and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102065959A (en) * 2008-07-16 2011-05-18 科乐美数码娱乐株式会社 Game device, method for controlling game device, program, and information storage medium
CN105117579A (en) * 2015-07-21 2015-12-02 网易(杭州)网络有限公司 Object selection method and apparatus
JP2015223319A (en) * 2014-05-28 2015-12-14 株式会社カプコン Game program and game system
JP2017144226A (en) * 2016-11-10 2017-08-24 ガンホー・オンライン・エンターテイメント株式会社 Terminal device and server apparatus for providing game, and method for providing game
CN108888958A (en) * 2018-06-22 2018-11-27 深圳市腾讯网络信息技术有限公司 Virtual object control method, device, equipment and storage medium in virtual scene

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100255894A1 (en) * 2009-04-01 2010-10-07 Chira Kidakarn Method for combining multiple actions in single video game

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102065959A (en) * 2008-07-16 2011-05-18 科乐美数码娱乐株式会社 Game device, method for controlling game device, program, and information storage medium
JP2015223319A (en) * 2014-05-28 2015-12-14 株式会社カプコン Game program and game system
CN105117579A (en) * 2015-07-21 2015-12-02 网易(杭州)网络有限公司 Object selection method and apparatus
JP2017144226A (en) * 2016-11-10 2017-08-24 ガンホー・オンライン・エンターテイメント株式会社 Terminal device and server apparatus for providing game, and method for providing game
CN108888958A (en) * 2018-06-22 2018-11-27 深圳市腾讯网络信息技术有限公司 Virtual object control method, device, equipment and storage medium in virtual scene

Also Published As

Publication number Publication date
CN110141859A (en) 2019-08-20

Similar Documents

Publication Publication Date Title
CN110141859B (en) Virtual object control method, device, terminal and storage medium
CN111589142B (en) Virtual object control method, device, equipment and medium
CN111589128B (en) Operation control display method and device based on virtual scene
CN111013142B (en) Interactive effect display method and device, computer equipment and storage medium
CN111414080B (en) Method, device and equipment for displaying position of virtual object and storage medium
CN111589130B (en) Virtual object control method, device, equipment and storage medium in virtual scene
CN111596838B (en) Service processing method and device, computer equipment and computer readable storage medium
CN111589136B (en) Virtual object control method and device, computer equipment and storage medium
CN111672106B (en) Virtual scene display method and device, computer equipment and storage medium
CN112843679A (en) Skill release method, device, equipment and medium for virtual object
CN111672104A (en) Virtual scene display method, device, terminal and storage medium
CN109091867A (en) Method of controlling operation thereof, device, equipment and storage medium
CN112704876A (en) Method, device and equipment for selecting virtual object interaction mode and storage medium
CN112221142A (en) Control method and device of virtual prop, computer equipment and storage medium
CN113181647A (en) Information display method, device, terminal and storage medium
CN110833695B (en) Service processing method, device, equipment and storage medium based on virtual scene
CN111760281A (en) Method and device for playing cut-scene animation, computer equipment and storage medium
CN111752697A (en) Application program running method, device, equipment and readable storage medium
TWI817208B (en) Method and apparatus for determining selected target, computer device, non-transitory computer-readable storage medium, and computer program product
CN112274936B (en) Method, device, equipment and storage medium for supplementing sub-props of virtual props
CN111672115B (en) Virtual object control method and device, computer equipment and storage medium
CN111651616B (en) Multimedia resource generation method, device, equipment and medium
CN111035929B (en) Elimination information feedback method, device, equipment and medium based on virtual environment
CN111672107B (en) Virtual scene display method and device, computer equipment and storage medium
CN111338487B (en) Feature switching method and device in virtual environment, terminal and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant