CN111589129B - Virtual object control method, device, equipment and medium - Google Patents

Virtual object control method, device, equipment and medium Download PDF

Info

Publication number
CN111589129B
CN111589129B (Application CN202010331198.7A)
Authority
CN
China
Prior art keywords
virtual
virtual object
target
virtual objects
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010331198.7A
Other languages
Chinese (zh)
Other versions
CN111589129A (en)
Inventor
万钰林
胡勋
粟山东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010331198.7A priority Critical patent/CN111589129B/en
Publication of CN111589129A publication Critical patent/CN111589129A/en
Application granted granted Critical
Publication of CN111589129B publication Critical patent/CN111589129B/en

Classifications

    • A63F13/55 — Controlling game characters or game objects based on the game progress
    • A63F13/56 — Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A63F13/218 — Input arrangements for video game devices using pressure sensors, e.g. generating a signal proportional to the pressure applied by the player
    • A63F13/58 — Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
    • A63F2300/1056 — Features of games using an electronically generated display, characterized by input arrangements involving pressure sensitive buttons
    • Y02P90/02 — Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a virtual object control method, device, equipment, and medium, belonging to the field of computer technology. In the method, in response to a trigger operation on a skill, a second virtual object in a target state is found within the skill's applicable range and taken as the target virtual object on which the skill is cast. Because the states of the second virtual objects are considered, and the second virtual object in the target state is screened out from at least two candidate second virtual objects, the determined target virtual object better matches the user's expectation. This avoids the situation in which a second virtual object is in the target state but, because it is neither the virtual object closest to the first virtual object nor the one with the lowest virtual health value, the skill is cast on a virtual object different from the one the user expects.

Description

Virtual object control method, device, equipment and medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a medium for controlling a virtual object.
Background
With the development of computer technology and the diversification of terminal functions, the variety of games that can be played on terminals keeps increasing. Among them, the MOBA (Multiplayer Online Battle Arena) game is particularly popular: a terminal displays a virtual scene in its interface together with a virtual object in that scene, and the virtual object can fight other virtual objects by casting skills.
In current virtual object control methods, when a trigger operation on a skill is detected, the other virtual object at the minimum distance from the current virtual object is selected as the target virtual object, and the skill is cast on it; alternatively, the other virtual object with the minimum virtual health value is selected as the target.
These control methods consider only the distance between virtual objects or their virtual health values. However, a skill may already have been cast on some other virtual object, giving it an attribute-gain or attribute-reduction effect. When the user wants to continue casting skills on that virtual object, but it is neither the closest to the current virtual object nor the one with the lowest virtual health value, the skill ends up cast on a virtual object different from the one the user expects. The determined target virtual object is therefore inaccurate, control of the virtual object is inaccurate, and the displayed skill-casting effect is poor.
Disclosure of Invention
The embodiments of the present application provide a virtual object control method, device, equipment, and medium, which can improve the precision of virtual object control and the display effect of skill casting. The technical scheme is as follows:
in one aspect, a method for controlling a virtual object is provided, including:
in response to a trigger operation on a skill, acquiring the applicable range of the skill according to the position of a first virtual object;
in response to at least two second virtual objects being included in the applicable range, determining a target virtual object in a target state from the at least two second virtual objects;
controlling the first virtual object to cast the skill on the target virtual object.
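The three steps above can be sketched in code. The following Python sketch is illustrative only: the class and function names, the circular applicable range, and the nearest-target fallback are assumptions for illustration, not the patented implementation.

```python
import math
from dataclasses import dataclass, field

@dataclass
class VirtualObject:
    x: float
    y: float
    states: set = field(default_factory=set)  # e.g. {"buff"} or {"debuff"}

def in_range(caster, obj, radius):
    # Circular applicable range centered on the first virtual object
    return math.hypot(obj.x - caster.x, obj.y - caster.y) <= radius

def select_target(caster, candidates, radius, target_state="buff"):
    # Step 1: keep only second virtual objects inside the applicable range
    reachable = [o for o in candidates if in_range(caster, o, radius)]
    # Step 2: prefer objects already in the target state
    in_state = [o for o in reachable if target_state in o.states]
    pool = in_state or reachable
    if not pool:
        return None
    # Fallback tiebreak (one of several criteria the text describes): nearest
    return min(pool, key=lambda o: math.hypot(o.x - caster.x, o.y - caster.y))
```

For example, with a caster at the origin and a skill radius of 5, a second virtual object in the target state is selected even when another object is closer.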
In one aspect, there is provided a control apparatus of a virtual object, the apparatus comprising:
an acquisition module, configured to acquire, in response to a trigger operation on a skill, the applicable range of the skill according to the position of a first virtual object;
a determining module, configured to determine, in response to at least two second virtual objects being included in the applicable range, a target virtual object in a target state from the at least two second virtual objects;
a control module, configured to control the first virtual object to cast the skill on the target virtual object.
In one possible implementation, the determining module is configured to:
acquire the states of the at least two second virtual objects;
determine, in response to the state of any second virtual object being the target state, that second virtual object as the target virtual object.
In one possible implementation, the determining module is configured to determine, in response to at least two of the second virtual objects being in the target state, the target virtual object from the at least two second virtual objects in the target state according to their information.
In one possible implementation, the determining module is configured to perform any one of the following:
determining, according to the distances between the first virtual object and the at least two second virtual objects in the target state, the second virtual object at the smallest distance as the target virtual object;
determining, according to the virtual health values of the at least two second virtual objects in the target state, the second virtual object with the smallest virtual health value as the target virtual object;
determining, according to the ratio of the virtual health value to the total virtual health value of each of the at least two second virtual objects in the target state, the second virtual object with the smallest ratio as the target virtual object;
determining, according to the priorities of the types of the at least two second virtual objects in the target state, the second virtual object with the highest priority as the target virtual object;
ranking the at least two second virtual objects in the target state according to at least two of: their distances to the first virtual object, their virtual health values, the ratios of their virtual health values to their total virtual health values, or the priorities of their types; and determining the second virtual object ranked highest as the target virtual object.
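The last option, combining several criteria into one ranking, can be expressed as a composite sort key. The Python sketch below is one hypothetical way to do it; the dictionary field names and the ordering of the criteria are illustrative assumptions.

```python
import math

def rank_targets(caster, candidates):
    """Rank candidate targets in the target state by a composite priority.

    The sort key combines, in order: type priority (smaller value =
    higher priority), health ratio (health / total health), absolute
    health, and distance to the caster. The best target is the first
    element of the returned list.
    """
    def key(o):
        dist = math.hypot(o["x"] - caster["x"], o["y"] - caster["y"])
        return (o["type_priority"], o["hp"] / o["hp_max"], o["hp"], dist)
    return sorted(candidates, key=key)
```

Because Python compares tuples lexicographically, earlier criteria dominate later ones: a target with a higher type priority is ranked first even if it is farther away or healthier.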
In one possible implementation, the determining module is further configured to perform any one of the following:
determining, in response to none of the at least two second virtual objects included in the applicable range being in the target state, the target virtual object from the at least two second virtual objects according to at least one of: the distances between the first virtual object and the at least two second virtual objects, the virtual health values of the at least two second virtual objects, the ratios of their virtual health values to their total virtual health values, or the priorities of their types.
In one possible implementation, the acquisition module is configured to acquire, as the applicable range of the skill, a circular range centered on the position of the first virtual object and having a target radius.
In one possible implementation, the acquisition module is configured to determine, in response to one second virtual object being included in the applicable range, the second virtual object in the applicable range as the target virtual object.
In one possible implementation, the target state is a state having an attribute-gain effect or an attribute-reduction effect.
In one possible implementation, the acquisition module is configured to:
acquire, in response to the end of the trigger operation on the skill, the applicable range of the skill according to the position of the first virtual object at the end of the trigger operation;
the determining a target virtual object in a target state from the at least two second virtual objects in response to the applicable range including at least two second virtual objects includes:
in response to at least two second virtual objects being included in the applicable range, performing, according to the information of the at least two second virtual objects at the end of the trigger operation, the step of determining the target virtual object in the target state from the at least two second virtual objects.
In one possible implementation, the acquisition module is configured to perform the step of acquiring the applicable range of the skill according to the position of the first virtual object in response to the trigger operation on the skill ending with the operation position of the trigger operation inside the target area.
In one possible implementation, the determining module is further configured to determine, in response to the trigger operation on the skill ending with the operation position of the trigger operation outside the target area, the second virtual object located at the corresponding direction and distance relative to the first virtual object, according to the direction and distance of the operation position relative to the center position of the target area, as the target virtual object.
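This active-aiming branch maps a drag position on the aiming control to a direction and distance in the virtual scene. A hypothetical Python sketch, assuming a linear mapping between the aiming wheel's radius and the skill's range (the function and parameter names are illustrative):

```python
import math

def aim_to_scene(caster_pos, op_pos, wheel_center, wheel_radius, skill_range):
    """Map a drag position on the aiming wheel to a point in the scene.

    The direction from the wheel center is preserved; the drag distance
    (capped at the wheel radius) scales linearly to the skill range.
    """
    dx = op_pos[0] - wheel_center[0]
    dy = op_pos[1] - wheel_center[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return caster_pos  # no drag: aim at the caster's own position
    scale = min(dist, wheel_radius) / wheel_radius * skill_range
    return (caster_pos[0] + dx / dist * scale,
            caster_pos[1] + dy / dist * scale)
```

The second virtual object nearest the returned scene point could then be chosen as the target virtual object.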
In one possible implementation, the apparatus further includes:
and the display module is used for responding to the continuous process of the trigger operation of the skills, executing the acquisition step of the applicable range and the determination step of the target virtual object, and highlighting the target virtual object.
In one aspect, an electronic device is provided that includes one or more processors and one or more memories having stored therein at least one piece of program code that is loaded and executed by the one or more processors to implement operations performed by a method of controlling a virtual object as in any of the possible implementations described above.
In one aspect, a storage medium is provided in which at least one piece of program code is stored, the at least one piece of program code being loaded and executed by a processor to implement operations performed by a method of controlling a virtual object as in any one of the possible implementations described above.
The technical scheme provided by the embodiments of the present application has at least the following beneficial effects:
In response to a trigger operation on a skill, the applicable range of the skill is acquired, and a second virtual object in the target state is found within that range and taken as the target virtual object on which the skill is cast. Because the second virtual object in the target state is screened out from at least two candidate second virtual objects, the determined target virtual object better matches the user's expectation.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present application; a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a terminal interface according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a terminal interface according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a terminal interface according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a terminal interface according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a terminal interface according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a terminal interface according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a terminal interface according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a terminal interface according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a terminal interface according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a terminal interface according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a terminal interface according to an embodiment of the present application;
FIG. 12 is a schematic diagram of an implementation environment of a control method for a virtual object according to an embodiment of the present application;
FIG. 13 is a flowchart of a method for controlling a virtual object according to an embodiment of the present application;
FIG. 14 is a schematic diagram of information of a plurality of virtual objects according to an embodiment of the present application;
FIG. 15 is a flowchart of a method for controlling a virtual object according to an embodiment of the present application;
FIG. 16 is a schematic illustration of two modes of application provided by an embodiment of the present application;
FIG. 17 is a schematic view of a trigger button of a target area according to an embodiment of the present application;
FIG. 18 is a schematic diagram of a targeting control provided by an embodiment of the present application;
FIG. 19 is a schematic diagram of configuration information of a filter according to an embodiment of the present application;
FIG. 20 is a flowchart of a method for controlling a virtual object according to an embodiment of the present application;
FIG. 21 is a flowchart of a method for controlling a virtual object according to an embodiment of the present application;
FIG. 22 is a schematic structural diagram of a control device for a virtual object according to an embodiment of the present application;
FIG. 23 is a schematic structural diagram of a terminal 2300 according to an embodiment of the present application;
FIG. 24 is a schematic structural diagram of a server 2400 according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
The terms "first," "second," and the like in this application are used to distinguish between identical or similar items whose functions and purposes are substantially the same. It should be understood that "first," "second," and "n-th" have no logical or chronological dependency among them and do not limit the number of items or the order of execution.
The term "at least one" in the present application means one or more, and "at least two" means two or more, for example, at least two node devices means two or more node devices.
Hereinafter, terms related to the present application will be explained.
Virtual scene: the virtual scene displayed (or provided) by an application when it runs on a terminal. The virtual scene may be a simulation of a real-world scene, a semi-simulated, semi-fictional three-dimensional scene, or a purely fictional three-dimensional scene. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, and a three-dimensional virtual scene; the embodiments of the present application do not limit the dimensionality of the virtual scene.
Optionally, the virtual scene is also used for battle between at least two virtual objects, and virtual resources available to the at least two virtual objects exist in the virtual scene. Optionally, the virtual scene includes a symmetric lower-left region and upper-right region; virtual objects belonging to two hostile camps each occupy one of the regions, with destroying the target building/stronghold/base/crystal deep in the opposing region as the victory goal.
Optionally, the base of each region in the virtual scene may include a resource replenishment point, and a virtual object may return to the resource replenishment point to replenish needed resources, such as the equipment it uses. In a specific possible embodiment, a virtual store may be provided at the resource replenishment point, from which the virtual object may purchase virtual resources using virtual coins obtained in the current match.
Of course, the location where the virtual object supplements the resource may not be limited, and the virtual object may supplement the resource at any position in the virtual scene. In one possible implementation, a virtual store button may be provided in the user graphical interface, and when a trigger operation on the virtual store button is detected, the terminal may display a virtual resource in the virtual store, and equip the virtual object with a corresponding virtual resource according to a selected operation and a confirmation operation of the user.
In a specific possible embodiment, in order to simplify the user operation, the terminal may obtain and display the recommended resources of the virtual object according to the attribute of the virtual object, and when detecting the triggering operation on any recommended resource, equip the virtual object with a corresponding virtual resource. Alternatively, the recommended resource may be a resource that conforms to the attribute of the virtual object and has a value within the virtual coin owned by the virtual object.
Optionally, the virtual object may also quickly restore its own state at the resource replenishment point, for example quickly restoring its virtual health value or mana value. Optionally, if a virtual object is eliminated in the virtual scene, it may also be revived at the resource replenishment point after a period of time; in this implementation, the resource replenishment point may also be referred to as a "birth point."
Virtual object: refers to movable objects in a virtual scene. The movable object may be at least one of a virtual character, a virtual animal, and a cartoon character. Alternatively, when the virtual scene is a three-dimensional virtual scene, the virtual objects may be three-dimensional stereoscopic models, each having its own shape and volume in the three-dimensional virtual scene, occupying a portion of the space in the three-dimensional virtual scene. Alternatively, the virtual object is a three-dimensional character constructed based on three-dimensional human skeleton technology, which implements different external figures by wearing different skins. In some implementations, the virtual object may also be implemented using a 2.5-dimensional or 2-dimensional model, which is not limited in this embodiment of the application.
Alternatively, the virtual object may be a Player Character controlled by an operation on the client, an artificial intelligence (Artificial Intelligence, AI) set in the virtual scene fight by training, or a Non-Player Character (NPC) set in the virtual scene interaction. Alternatively, the virtual object may be a virtual character playing an athletic in a virtual scene. Optionally, the number of virtual objects participating in the interaction in the virtual scene may be preset, or may be dynamically determined according to the number of clients joining the interaction.
Alternatively, the virtual objects in the virtual scene may have different appearances or the same appearance, and virtual objects with different appearances may have different skills. Alternatively, the virtual objects may be of different types; different types of virtual objects may differ in appearance and in skills. The appearance or type of a virtual object may be selected by the user before entering the virtual scene. For example, virtual objects may include five types: mage, warrior, tank/support, shooter, and assassin. Among these, a mage's skills are generally spell skills, and the spell skills of different mages may differ. A warrior's skills generally deal physical damage with the body or a virtual weapon, and may differ from warrior to warrior. A tank/support may have a larger virtual health value or stronger defense than other types of virtual objects, and its skills may be crowd-control skills, health-restoring skills, and the like. A shooter generally damages enemies through ranged attacks; a shooter's skill is generally that the fired virtual prop carries an attached effect, or that virtual props with different attributes are fired, for example an arrow with an attached slow effect or poison effect, three arrows fired simultaneously, or a single arrow with a longer range than the normal attack. An assassin's skills are generally concealment skills, burst-damage skills, and the like, which can cause damage when an enemy is off guard.
Multiplayer online tactical competition: in a virtual scene, different virtual teams belonging to at least two hostile camps each occupy their own map region and compete with a certain victory condition as the goal. Such victory conditions include, but are not limited to, at least one of: occupying strongholds or destroying enemy strongholds, killing enemy virtual objects, ensuring one's own survival in a specified scene and time, seizing a certain resource, or outscoring the opponent within a specified time. The tactical competition may proceed in rounds, and the map of each round may be the same or different. Each virtual team includes one or more virtual objects, for example 1, 2, 3, or 5.
MOBA game: the virtual object game system is a game in which a plurality of points are provided in a virtual scene, and users in different camps control virtual objects to fight in the virtual scene, occupy the points or destroy hostile camping points. For example, a MOBA game may divide a user into two hostile camps, disperse user-controlled virtual objects in a virtual scene that compete with each other to destroy or preempt all points of the hostile as a winning condition. The MOBA game is in units of plays, and the duration of a play of the MOBA game is from the time when the game starts to the time when the winning condition is achieved.
Two exemplary MOBA games are described below.
A first typical MOBA game.
Fig. 1 is a two-dimensional map of the virtual scene of a typical MOBA game. In this game, virtual objects are divided into a red camp and a blue camp, each with five virtual objects; the ten virtual objects together play one MOBA round.
As shown in fig. 1, the virtual scene map is square and divided into two halves, the blue side 1001 and the red side 1002, with the two camps' bases (crystals) at the two ends of one diagonal of the square. Three attack routes connect the blue base 1001 and the red base 1002: the top lane 1003, the middle lane 1004, and the bottom lane 1005. The public areas are the river 1006 and the jungle 1007.
The virtual objects of the two camps are born at their respective bases, and the five virtual objects of one camp launch attacks on the enemy along the three attack routes; destroying the opposing camp's base wins the round. The blue camp is born at the blue base 1001 and the red camp at the red base 1002, and each camp observes the virtual scene from a perspective in which its own base is in the lower left corner. That is, the blue virtual objects observe the virtual scene from a first perspective 1008 and the red virtual objects from a second perspective 1009, and from either perspective the three attack routes run from lower left to upper right along the top, middle, and bottom lanes. Illustratively, fig. 2 shows the virtual scene observed from the first perspective 1008 of a blue virtual object, with the blue base 1001 in the lower left corner of the picture; fig. 3 shows the virtual scene observed from the second perspective 1009 of a red virtual object, with the red base 1002 in the lower left corner of the picture.
With the two camps' perspectives set this way, regardless of whether the user-controlled virtual object belongs to the red camp or the blue camp, the enemy base is always in the upper right corner of the virtual scene picture and the attack direction is always toward the upper right, which is convenient for the user. However, this arrangement has a problem. When blue and red virtual objects meet at the boundary between the two sides (the river), as shown in fig. 4, part of the virtual scene picture in the user interface seen by a blue user is covered by UI (User Interface) controls 1010, but the dangerous river 1006 area (from which a red virtual object, such as an assassin, may suddenly attack) is not covered, so the blue user's view is relatively unobstructed. In the user interface seen by a red user, shown in fig. 5, part of the virtual scene picture is likewise covered by UI controls 1010, but here the dangerous river 1006 area is covered, which restricts the red user's field of view, makes it inconvenient to watch the river 1006 area, and makes the red user easy prey for a blue assassin.
Therefore, the drop 1005 is safer than the add 1003.
The five virtual objects of the same camp are typically five different types of virtual objects. Exemplary types of virtual objects are:
Warrior: high health, high defense, high attack, short attack range, flexible movement, and usually one displacement skill; a warrior can absorb a certain amount of enemy damage or deal damage to the enemy. A displacement skill is a skill that makes a virtual object move faster, dash a distance in a certain direction, or move instantaneously from one point to another.
Mage: extremely low health, extremely low defense, extremely high attack (magic damage), long attack range, inflexible movement, and easily killed, so a mage usually attacks the enemy under the protection of warriors or tanks/supports.
Tank/support: very high health, very high defense, very low attack, and short attack range; suited to standing in front of teammates to absorb damage and protect the rest of the team.
Marksman: similar to the mage, except that the marksman deals very high physical damage and is suited to sustained output and to attacking defensive towers and the base.
Assassin: low health, low defense, high attack, short attack range, and very flexible movement, usually with multiple displacement skills; suited to sudden assaults on the enemy mage or marksman, with the ability to kill either of them almost instantly.
Because the different types of virtual objects each have their own characteristics, and considering the relative advantages and disadvantages of the top and bottom lanes, each type of virtual object generally attacks the enemy along a fixed attack direction: the marksman (with the tank/support) attacks the enemy from the safer bottom lane 1005; the mage attacks the enemy from the middle lane 1004; the warrior, with its displacement advantage, attacks the enemy from the more dangerous top lane 1003; and the assassin is mainly active in the jungle 1007, opportunistically supporting teammates on the middle lane 1004 or the bottom lane 1005.
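For illustration only (not part of the claimed method), the conventional role-to-lane assignment described above can be captured as a simple lookup table; the role and lane names below are illustrative labels, not identifiers from the application:

```python
# Illustrative mapping of the conventional MOBA role-to-lane assignment
# described above; all names are example labels chosen for this sketch.
ROLE_LANE = {
    "marksman": "bottom",  # safer lane 1005, sustained damage output
    "support": "bottom",   # accompanies and protects the marksman
    "mage": "middle",      # middle lane 1004
    "warrior": "top",      # riskier lane 1003, relies on displacement skills
    "assassin": "jungle",  # roams the jungle 1007 and supports other lanes
}

def default_lane(role: str) -> str:
    """Return the conventional lane for a role; unknown roles roam the jungle."""
    return ROLE_LANE.get(role, "jungle")
```

A lookup table like this is only a default; as the text notes, the assignment reflects lane safety rather than any fixed rule.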
Arranged this way, however, a virtual object may end up fighting an enemy virtual object of a different type: the blue-side marksman fights the red-side warrior, and the blue-side warrior fights the red-side marksman, which affects the fairness of the game and the user experience. Illustratively, as shown in FIG. 6, the blue-side first marksman 1011 attacks the red side from the blue side's bottom lane 1005, the blue-side first warrior 1012 attacks the red side from the blue side's top lane 1003, the red-side second marksman 1013 attacks the blue side from the red side's bottom lane 1005, and the red-side second warrior 1014 attacks the blue side from the red side's top lane 1003. That is, the first marksman 1011 fights the second warrior 1014, and the first warrior 1012 fights the second marksman 1013.
To make the game fairer, a more reasonable fight arrangement is shown in FIG. 7, in which the blue-side first marksman 1011 fights the red-side second marksman 1013, and the blue-side first warrior 1012 fights the red-side second warrior 1014. To realize such a fight scheme, it is necessary to make the blue side's bottom lane and the red side's bottom lane the same lane, that is, to swap the red side's top and bottom lanes so that its original bottom lane becomes its top lane and its original top lane becomes its bottom lane. Illustratively, the red side's top and bottom lanes are swapped into the positions of top lane 1003 and bottom lane 1005 shown in FIG. 7: the blue side's bottom lane 1005 is also the red side's bottom lane 1005, and the blue side's top lane 1003 is also the red side's top lane 1003.
It should be noted that the foregoing description takes only one virtual scene as an example. In one possible implementation, other virtual scene maps may be provided. For example, as shown in FIG. 8, the virtual scene map may be a square that includes two camp bases located on a diagonal, namely a blue-side base 1001 and a red-side base 1002, similar to the virtual scene map shown in FIG. 1, except that only one route connects the two bases: route 1015.
The second typical MOBA game, described below, implements this more reasonable fight mode for the game play of the first typical MOBA game described above.
A second typical MOBA game.
The second typical MOBA game has the same game play as the first typical MOBA game: a square virtual scene in which the bases of the first camp and the second camp are located on the square's diagonal, and five virtual objects per camp each attack the enemy along one of three attack directions. The difference is that the first camp's bottom lane is also the second camp's bottom lane, and the first camp's top lane is also the second camp's top lane. The second typical MOBA game implements this more reasonable fight mode in the following manner.
There is a first virtual scene, and a second virtual scene mirrored from the first virtual scene about its ground plane. As shown in FIG. 9, the second virtual scene 1103 is symmetrical to the first virtual scene 1101 about the ground plane 1102; that is, the second virtual scene is a mirror reflection of the first virtual scene.
If the direction perpendicular to the ground plane of the first virtual scene and pointing toward the sky is the positive y-axis direction 1104, then the virtual scene seen by a user controlling a first-camp virtual object is the first virtual scene observed from a viewing angle on the positive y half-axis, as shown in FIG. 9. The virtual scene seen by a user controlling a second-camp virtual object is the second virtual scene observed from a viewing angle on the negative y half-axis, as shown in FIG. 10. It can be seen that the first virtual scene 1101 and the second virtual scene 1103 are left-right reversed relative to each other; this method swaps the second camp's top and bottom lanes, so the bottom lane seen by the user controlling the second-camp virtual object is also the bottom lane seen by the user controlling the first-camp virtual object.
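The mirroring described above amounts to negating the height coordinate of every scene vertex. A minimal sketch, assuming the ground plane is y = 0 and points are (x, y, z) tuples with y vertical as in the description:

```python
def mirror_about_ground(vertices):
    """Mirror scene geometry about the ground plane y = 0.

    Sketch of the second typical MOBA game's approach: the second
    virtual scene is the reflection of the first about the ground
    plane, so every vertex keeps x and z and has its height negated.
    """
    return [(x, -y, z) for (x, y, z) in vertices]

scene = [(1.0, 2.0, 3.0), (0.0, 0.0, -1.0)]
mirrored = mirror_about_ground(scene)
# Heights are negated; points lying on the ground plane are unchanged.
```

Note that this duplicates the whole scene's geometry, which is precisely the cost the present application avoids.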
However, the second typical MOBA game displays two mirrored virtual scenes to the users of the two camps separately: a first-camp user observes the first virtual scene from a viewing angle on the positive y half-axis and controls virtual objects moving in the first virtual scene, while a second-camp user observes the second virtual scene from a viewing angle on the negative y half-axis and controls virtual objects moving in the second virtual scene. Because the first and second virtual scenes are completely reversed, the server must maintain two sets of operation logic, one for each scene: first operation logic computes the activity information of first-camp virtual objects in the first virtual scene (for example, movement positions and skill-application directions), and second operation logic computes the activity information of second-camp virtual objects in the second virtual scene. The operation results of each virtual scene must then be displayed in the other, and keeping the two virtual scenes synchronized requires a large amount of computation, which greatly increases the workload and load on both the server and the terminals.
The picture display method for a virtual scene provided by the present application only needs to change the viewing angle from which the users of one of the two camps observe the virtual scene. It requires no mirror copy of the virtual scene, no change to the coordinates of the virtual scene model in the world coordinate system, and no change to the original operation logic of the virtual scene, and can therefore achieve the lane-swapped display effect with very little computation.
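One way such a camera-only lane swap could be realized, offered purely as an assumption for illustration and not as the application's actual implementation, is to reflect one axis of the rendering camera's 4x4 view matrix, which mirrors the rendered image left/right without touching any world coordinates or scene logic:

```python
def reflect_view_matrix(view):
    """Negate the first row of a row-major 4x4 view matrix.

    Hypothetical sketch of a camera-side mirror: negating the
    camera-space x row flips the on-screen left/right while leaving
    world coordinates and game logic untouched.  (An engine would
    also flip triangle winding for culling; omitted here.)
    """
    mirrored = [row[:] for row in view]       # copy, do not mutate input
    mirrored[0] = [-v for v in mirrored[0]]   # flip the camera x axis
    return mirrored

identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
flipped = reflect_view_matrix(identity)
```

The key property matching the text is that only the per-camp view transform changes; the scene model stays single-copy.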
Fig. 12 is a schematic diagram of an implementation environment of a virtual object control method according to an embodiment of the present application. Referring to FIG. 12, the implementation environment includes: a first terminal 120, a server 140, and a second terminal 160.
The first terminal 120 installs and runs an application supporting a virtual scene. The application may be any one of a multiplayer online battle arena (MOBA) game, a virtual reality application, a two- or three-dimensional map program, or a simulation program. Of course, the application may be another program, such as a multiplayer gun-battle survival game; this is not limited by the embodiments of the present application. The first terminal 120 may be a terminal used by a first user, who uses the first terminal 120 to operate a first virtual object located in the virtual scene to perform activities including, but not limited to, at least one of walking, running, adjusting body posture, normal attacking, and applying skills. Of course, the activities may include other items, such as shooting or throwing, which the embodiments of the present application do not specifically limit. Illustratively, the first virtual object is a first virtual character, such as a simulated person or a cartoon person. Illustratively, the first virtual object may also be a first virtual animal, such as a simulated monkey or another animal.
The first terminal 120 and the second terminal 160 are connected to the server 140 through a wireless network or a wired network.
The server 140 may include at least one of a single server, multiple servers, a cloud computing platform, or a virtualization center. The server 140 provides background services for applications supporting virtual scenes. Optionally, the server 140 undertakes the primary computing work while the first terminal 120 and the second terminal 160 undertake secondary computing work; or the server 140 undertakes secondary computing work while the first terminal 120 and the second terminal 160 undertake the primary computing work; or the server 140, the first terminal 120, and the second terminal 160 perform cooperative computing using a distributed computing architecture.
The server 140 may be an independent physical server, a server cluster or distributed system formed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, and big data and artificial intelligence platforms. The first terminal 120 and the second terminal 160 may be, but are not limited to, a smartphone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, or the like. The first terminal 120 and the second terminal 160 may be directly or indirectly connected to the server through wired or wireless communication, which is not limited in the embodiments of the present application.
Illustratively, the first terminal 120 and the second terminal 160 may transmit their generated data to the server 140, and the server 140 may verify its own data against the data generated by each terminal. If the verification result for any terminal indicates that the data is inconsistent, the server may transmit its own generated data to that terminal, and that terminal then proceeds based on the data generated by the server.
In one possible implementation, the first terminal 120 and the second terminal 160 may each determine the virtual scene of each frame according to the user's trigger operations and send it to the server 140, and may also send information about the user's trigger operations to the server 140. The server 140 may receive the trigger-operation information and the virtual scene, determine the virtual scene itself according to the trigger operations, and compare it with the virtual scene uploaded by the terminal. If they are consistent, subsequent computation can continue; if they are inconsistent, the server may send the virtual scene it determined to each terminal for synchronization. In a specific possible embodiment, the server 140 may further determine the next frame of the virtual scene for each terminal according to the trigger-operation information and send that next frame to each terminal, so that each terminal performs the corresponding steps and obtains a virtual scene consistent with the next frame determined by the server 140.
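The server-authoritative consistency check described above can be sketched as follows; the per-frame state representation (a plain comparable value per frame) is an assumption made for this sketch, not the application's wire format:

```python
def first_mismatch(server_frames, client_frames):
    """Compare per-frame states computed independently by the server
    and by a terminal; return the index of the first inconsistent
    frame, or None if the streams agree (zip stops at the shorter
    stream, so only overlapping frames are compared)."""
    for i, (s, c) in enumerate(zip(server_frames, client_frames)):
        if s != c:
            return i
    return None

def reconcile(server_frames, client_frames):
    """From the first mismatch onward, the server's frames win --
    a sketch of the synchronization rule in the passage above."""
    i = first_mismatch(server_frames, client_frames)
    if i is None:
        return list(client_frames)
    return list(client_frames[:i]) + list(server_frames[i:])
```

The design choice sketched here is the usual anti-cheat one: the terminal predicts, the server verifies, and on disagreement the server's result is pushed back down.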
The second terminal 160 installs and runs an application supporting a virtual scene. The application may be any one of a multiplayer online battle arena (MOBA) game, a virtual reality application, a two- or three-dimensional map program, or a simulation program. Of course, the application may be another program, such as a multiplayer gun-battle survival game; this is not limited by the embodiments of the present application. The second terminal 160 may be a terminal used by a second user, who uses the second terminal 160 to operate a second virtual object located in the virtual scene to perform activities including, but not limited to, at least one of walking, running, adjusting body posture, normal attacking, and applying skills. Of course, the activities may include other items, such as shooting or throwing, which the embodiments of the present application do not specifically limit. Illustratively, the second virtual object is a second virtual character, such as a simulated person or a cartoon person. Illustratively, the second virtual object may also be a second virtual animal, such as a simulated monkey or another animal.
Optionally, the first virtual object controlled by the first terminal 120 and the second virtual object controlled by the second terminal 160 are in the same virtual scene, in which case the first virtual object can interact with the second virtual object in that virtual scene. In some embodiments, the first virtual object and the second virtual object may be in a hostile relationship; for example, they may belong to different teams, organizations, or camps, and virtual objects in a hostile relationship may interact by applying skills at any position in the virtual scene.
In other embodiments, the second virtual object may instead be in a teammate relationship with the first virtual object; for example, the first virtual character and the second virtual character may belong to the same team or the same organization, be in the same camp, have a friend relationship, or have temporary communication rights.
Optionally, the applications installed on the first terminal 120 and the second terminal 160 are the same, or are the same type of application on different operating system platforms. The first terminal 120 may refer generally to one of multiple terminals, and the second terminal 160 may likewise refer generally to one of multiple terminals; this embodiment is illustrated with only the first terminal 120 and the second terminal 160. The device types of the first terminal 120 and the second terminal 160 are the same or different and include at least one of a smartphone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop portable computer, or a desktop computer. For example, the first terminal 120 and the second terminal 160 may be smartphones or other handheld portable gaming devices. The following embodiments are illustrated with a terminal that is a smartphone.
Those skilled in the art will recognize that the number of terminals may be greater or smaller; for example, there may be only one terminal, or tens or hundreds of terminals, or more. The embodiments of the present application do not limit the number of terminals or the device types.
Fig. 13 is a flowchart of a virtual object control method according to an embodiment of the present application. Referring to FIG. 13, the method may include the following steps.
1301. In response to a trigger operation on a skill, the terminal obtains the applicable range of the skill according to the position of the first virtual object.
Here, a skill refers to an ability of a virtual object in the virtual scene. A skill may be an action skill or an attribute-change skill. For example, a virtual object may have three skills: one an action skill of sprinting forward, another an attribute-gain (buff) skill that increases its own movement speed, and a third an attribute-reduction (debuff) skill that weakens the nearby virtual objects it affects.
The applicable range is the range within which the skill can be applied; the skill cannot be applied at positions outside the applicable range. For example, some skills have an application distance from which an applicable range can be determined: the skill cannot be applied at positions beyond the application distance, that is, at positions outside the applicable range.
In the embodiments of the present application, the virtual object controlled by the current terminal is referred to as the first virtual object. Virtual objects other than the first virtual object may be referred to as second virtual objects; alternatively, virtual objects on a different team from the first virtual object, or virtual objects on the same team as the first virtual object, may be referred to as second virtual objects. Which objects count as second virtual objects may be set by a technician according to requirements, or determined according to the type of the skill; the embodiments of the present application are not limited in this respect.
In one possible implementation, the terminal may display a skill trigger button in the graphical user interface, and the trigger operation on the skill may be a trigger operation performed on this skill trigger button. The trigger operation on the skill is used to trigger application of the skill. The trigger operation may be a click, a long press, a slide, or a drag; the embodiments of the present application do not limit the operation mode of the trigger operation.
On detecting the trigger operation on the skill, the terminal can determine the applicable range of the skill and then determine, according to that range, which second virtual object is the target virtual object to which the skill is applied.
1302. In response to at least two second virtual objects being included within the applicable range, the terminal determines a target virtual object in a target state from the at least two second virtual objects.
The target state is a state in which a virtual object may be; for example, the target state may be a state carrying an attribute-gain effect or an attribute-reduction effect. It will be appreciated that, when applying a skill, a user more often wants to attack, or change the attributes of, a virtual object that is in the target state.
After step 1301, the terminal has determined the range within which the skill can be applied, and it will be appreciated that the application target of the skill (that is, the target virtual object) is a second virtual object within the applicable range. The terminal can therefore first determine which virtual objects in the virtual scene are within the applicable range; after determining that at least two second virtual objects are within it, the terminal can determine, from among them, a second virtual object in the target state as the target virtual object and take it as the application target of the skill.
This determination process considers whether a virtual object is in the target state, rather than, as in the related art, determining as the target virtual object whichever virtual object is closest to the first virtual object or has the smallest virtual health value. It thereby avoids the situation in which some second virtual object is in the target state but, because it is not the closest virtual object or the one with the smallest virtual health value, the virtual object to which the skill is applied differs from the one the user intended. The target virtual object determined by this method is therefore more accurate, the control of the virtual object is more precise, and the display effect of applying the skill is better.
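For illustration only, the selection rule above can be sketched as follows; the 2-D position layout and field names such as `in_target_state` are assumptions made for this sketch, and the fallback to the nearest enemy when no candidate is in the target state is likewise an assumed tie-breaker:

```python
import math

def _distance(a, b):
    """Euclidean distance between two 2-D positions."""
    return math.hypot(b[0] - a[0], b[1] - a[1])

def pick_target(caster, enemies, radius):
    """Pick a skill target per the rule above (sketch): among second
    virtual objects inside the applicable range, prefer one in the
    target state (e.g. carrying a slow debuff); otherwise fall back
    to the nearest candidate.  Returns None if no enemy is in range."""
    in_range = [e for e in enemies
                if _distance(caster["pos"], e["pos"]) <= radius]
    if not in_range:
        return None
    flagged = [e for e in in_range if e.get("in_target_state")]
    pool = flagged or in_range
    return min(pool, key=lambda e: _distance(caster["pos"], e["pos"]))
```

This mirrors the FIG. 14 example: a farther, slowed enemy beats a nearer, unflagged one.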
1303. The terminal controls the first virtual object to apply the skill to the target virtual object.
Having determined the target virtual object, the terminal can control application of the skill to it. Specifically, the application process may also be: the terminal displays, in the graphical user interface, the effect of applying the skill to the target virtual object; for example, it may display the skill-application effect between the first virtual object and the target virtual object, and display the resulting effect on the target virtual object.
A specific example: suppose the virtual object controlled by the current terminal is the first virtual object and the virtual objects of the first virtual object's hostile camp are second virtual objects. When the user performs the trigger operation on the skill, second virtual object A is closest to the first virtual object, while second virtual object B carries an attribute-reduction effect (a slow effect), and the distance between B and the first virtual object is greater than the distance between A and the first virtual object. Under the related art, second virtual object A would be determined as the target virtual object; but second virtual object B has already been slowed and is the one the user actually wants to attack, so the determined attack target would be inaccurate. With the method of the present application, second virtual object B, which is in the target state, is determined as the target virtual object; the state of the virtual object is taken into account, and the related-art problem of the attack target not matching the user's expectation is avoided. The determined target virtual object is therefore more accurate, the virtual object is controlled more precisely, and the display effect of the skill application is better.
As another example, as shown in FIG. 14, two second virtual objects are within the applicable range of the first virtual object's skill: second virtual object A has ample virtual health, second virtual object B has less, but second virtual object A is in a weakened state. Under the related art, second virtual object B would be selected as the target virtual object for the skill; with the approach of the present application, second virtual object A is selected as the target virtual object, because A is in a weakened state and moves more slowly, and the user would want to pursue second virtual object A.
With the method provided by the present application, the skill-application step is performed in response to the trigger operation on the skill, and a second virtual object in the target state is found within the skill's applicable range to serve as the target virtual object. Because the state of the second virtual objects is considered when screening the target from among at least two candidate second virtual objects, the determined target virtual object better matches the user's expectation, and the situation is avoided in which some second virtual object is in the target state but, not being the closest virtual object or the one with the smallest virtual health value, is passed over, so that the virtual object to which the skill is applied differs from the one the user intended.
Fig. 15 is a flowchart of a virtual object control method according to an embodiment of the present application. Referring to FIG. 15, the method may include the following steps.
1501. In response to a trigger operation on a skill, the terminal obtains the applicable range of the skill according to the position of the first virtual object; in response to at least two second virtual objects being included within the applicable range, the terminal performs step 1502; in response to one second virtual object being included within the applicable range, the terminal performs step 1505.
The applicable range is a range around the first virtual object; the application position of the skill lies within the applicable range, and the skill cannot be applied outside it.
In one possible implementation, the applicable range may be the range within a certain distance around the first virtual object. Specifically, the terminal's process of determining the applicable range may be: the terminal takes the position of the first virtual object as the center and obtains the range whose radius is a target radius as the applicable range of the skill.
The target radius may be set by a technician according to requirements; that is, the target radius may be a preset value. The target radius may also be related to the skill; for example, different skills may have different application distances and thus different target radii. The embodiments of the present application do not specifically limit the target radius.
In one specific possible embodiment, the target radius may be the application distance of the skill. Before step 1501, the terminal may further obtain the application distance of the skill and, taking the application distance as the target radius, perform the above process of obtaining the applicable range based on the target radius. Through this process of obtaining the applicable range, which second virtual objects lie within the skill's applicable range can subsequently be analyzed to determine which of them satisfy the skill-application condition.
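The circular applicable range described above reduces to a disc membership test. A minimal sketch, assuming 2-D positions; the squared-distance comparison is an implementation convenience that avoids a square root:

```python
def within_applicable_range(caster_pos, point, application_distance):
    """Return True if `point` lies inside the circular applicable
    range: a disc centred on the first virtual object whose radius
    equals the skill's application distance (the target radius)."""
    dx = point[0] - caster_pos[0]
    dy = point[1] - caster_pos[1]
    return dx * dx + dy * dy <= application_distance * application_distance
```

A terminal could run this test against every second virtual object's position to build the candidate list for step 1502.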
In one possible implementation, after obtaining the applicable range, the terminal may display it so that the user can observe the skill's applicable range, which assists the user in controlling the first virtual object's application of the skill and in dealing effective damage to second virtual objects.
As for the trigger operation, it may be at least one of a slide, a drag, a click, or a press; of course, it may also be an operation of another mode, for example a gesture operation.
In one possible implementation, the trigger operation differs depending on how the skill is applied. In the embodiments of the present application, two ways of applying a skill may be provided: active application and quick application. The trigger operations of the two application modes may differ. In active application, the application target information of the skill is determined through the user's operation; in quick application, the application target information of the skill is determined automatically by the terminal.
In a specific possible embodiment, corresponding operation areas may be set for the two application modes, and which mode is used is determined according to the relationship between the operation position at the end of the trigger operation and the operation areas. Specifically, the operation area corresponding to quick application is referred to as the target area, and active application corresponds to the area outside the target area.
In this embodiment, step 1501 may be performed during quick application; for active application, the terminal may determine the application target information and then send it to the server for confirmation. Specifically, in one possible case, the terminal may, in response to the trigger operation on the skill ending with its operation position inside the target area, perform the step of obtaining the applicable range of the skill according to the position of the first virtual object. With this quick-application method, no user operation is required to select the application target, which greatly simplifies user operation, reduces operation complexity, and provides a convenient operation mode.
In another case, the terminal may, in response to the trigger operation on the skill ending with its operation position outside the target area, determine as the target virtual object a second virtual object located in the corresponding direction and at the corresponding distance relative to the first virtual object, according to the direction and distance of the operation position relative to the center position of the target area. With this active-application method, the user can freely select the application target and can select it more precisely, which raises the skill ceiling of user operation, meets the operational demands of high-end players, and improves user experience.
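The choice between the two application modes comes down to where the trigger operation ends relative to the target area. A sketch, assuming (as FIGS. 16 and 17 suggest for a virtual joystick) that the target area is a circle around the control's center:

```python
def application_mode(release_pos, target_center, target_radius):
    """Decide the application mode from where the drag on the skill
    control ends: inside the target area -> quick application (the
    terminal picks the target automatically); outside -> active
    application (direction and distance come from the user's drag).
    The circular target area is an assumed shape for this sketch."""
    dx = release_pos[0] - target_center[0]
    dy = release_pos[1] - target_center[1]
    if dx * dx + dy * dy <= target_radius * target_radius:
        return "quick"
    return "active"
```

Making the boundary test happen only at release (rather than continuously) matches the text: the mode is fixed by the operation position at the end of the trigger operation.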
Specifically, the terminal may determine the direction of the operation position relative to the center position as the application direction of the skill, obtain the distance between the operation position and the center position, determine the application distance corresponding to that distance as the application distance of the skill, and determine the target virtual object according to the application direction and the application distance.
The distance between the operation position and the center position may be referred to as the operating distance. The operating distance and the application distance may have a correspondence, and the terminal may convert the operating distance into the application distance according to that correspondence. The correspondence may be linear; for example, the operating distance may be positively correlated with the application distance.
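The direction-plus-distance conversion just described can be sketched as follows; the proportional (linear) scale and the clamping of the drag to the joystick rim are assumptions, since the text only requires the correspondence to be positively correlated:

```python
import math

def stick_to_cast(release_pos, center, stick_radius, max_cast_distance):
    """Map a joystick drag to the skill's application direction and
    application distance (sketch).  The direction is the angle of the
    release position relative to the control's center; the operating
    distance is scaled linearly onto [0, max_cast_distance], with
    drags past the rim clamped to the maximum.
    Returns (angle_in_radians, application_distance)."""
    dx = release_pos[0] - center[0]
    dy = release_pos[1] - center[1]
    operating_distance = math.hypot(dx, dy)
    ratio = min(operating_distance / stick_radius, 1.0)  # clamp at the rim
    return math.atan2(dy, dx), ratio * max_cast_distance
```

The world-space target point would then be the first virtual object's position offset by this distance along this direction.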
Of course, only one possible case is provided herein, and the terminal may determine the effect of applying the skill in other manners, for example, the applying distance of a certain skill is fixed, and the terminal may determine the direction of the operation position relative to the center position as the applying direction of the skill, so as to obtain the applying distance corresponding to the skill. The embodiment of the present application is not limited thereto.
In one possible implementation, after the terminal determines the target virtual object, the terminal can submit the determined result to the server for verification. If the verification succeeds, the subsequent skill application step can be executed. If the verification fails, the server can send the information of the target virtual object that the server determined according to the trigger operation to the terminal, and the terminal controls the skill application according to the target virtual object determined by the server. This ensures the accuracy of each terminal's calculated result and the consistency of the results across terminals, improves the fairness of the match, and prevents a terminal from using cheating software to forge the application target information, which would undermine fairness.
Through the above two implementations, two flexible operation modes are provided: a user can select or switch between them at will according to personal usage habits to apply the skill, so that individual needs are met and the accuracy of skill application is greatly improved.
For example, as shown in fig. 16, a target area 1601 and an active application area 1602 are provided. The user may perform a trigger operation on the target area 1601, for example by operating the trigger on the skill like a virtual joystick. If the operation position has not moved out of the target area 1601 at the end of the trigger operation, quick application is determined and step 1501 above is performed. If the operation position has moved out of the target area 1601 into the active application area 1602 at the end of the trigger operation, the terminal may not perform step 1501, but instead determine the application target information based on the trigger operation. In a specific example, as shown in fig. 17, a trigger button 1603 may be further provided at the center of the target area; the user may activate the application operation of the skill through the trigger button 1603, and the application manner is then determined based on the operation position at the end of the trigger operation.
For active application, in order to achieve a more precise application effect, an application target determination control 1604 may also be provided, as shown in FIG. 18. In one possible implementation, the application target determination control 1604 may overlap with the active application area 1602, i.e., 1604 lies just above and coincident with the active application area 1602. Upon detecting a trigger operation on the trigger button 1603, the terminal may display the application target determination control 1604. Once active application is determined, the user may determine candidate target information through the application target determination control 1604. Specifically, the application target determination control 1604 may be referred to as a skill aiming wheel: the center of the wheel corresponds to the position of the first virtual object, the radius of the wheel corresponds to the range of the skill, and the position of the finger on the wheel corresponds to the position at which the skill is aimed. The determination process above can therefore be mapped through the angle and distance relative to the respective circle centers, converted proportionally into the direction and distance of the target virtual object relative to the first virtual object.
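The proportional mapping from the skill aiming wheel to the scene can be sketched as below: the angle relative to the wheel center is preserved, and the distance is scaled by the ratio of the skill range to the wheel radius. This is a sketch under the assumptions stated in the text; all names are illustrative.

```python
import math

def wheel_to_world(first_pos, wheel_center, finger_pos,
                   wheel_radius, skill_range):
    """Map the finger position on the skill aiming wheel to the aimed
    position in the virtual scene, keeping the angle and scaling the
    distance (clamped at the wheel edge) by skill_range/wheel_radius."""
    dx = finger_pos[0] - wheel_center[0]
    dy = finger_pos[1] - wheel_center[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return first_pos                       # no aim offset yet
    scale = min(dist, wheel_radius) / dist * skill_range / wheel_radius
    return (first_pos[0] + dx * scale, first_pos[1] + dy * scale)
```

For instance, with the finger at the wheel edge the aimed point lands at the full skill range from the first virtual object, in the dragged direction.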
In one possible implementation, the terminal may also display a cancel-release control in the graphical user interface for canceling release of the skill. Specifically, in response to the end of the trigger operation on the skill with the operation position located at the position of the cancel-release control, the terminal cancels release of the skill and does not execute step 1501. The cancel-release control provides a way to abort skill release, enriching the release operations, offering users more release functions, and improving the user experience.
It should be noted that, after the terminal determines the applicable range of the skill in step 1501, the applicable range may include one second virtual object, at least two second virtual objects, or no second virtual object, and the terminal may determine the target virtual object differently for each situation. When the applicable range includes one second virtual object, step 1506 may be executed, that is, that second virtual object is the target virtual object. When at least two are included, a further determination is needed as to which second virtual object is to be the target virtual object; see steps 1502 to 1505 below. When no second virtual object is included in the applicable range, the skill has no selectable application target, and the terminal may control the first virtual object to apply the skill in an arbitrary direction without executing the following steps, or may simply not execute the application step.
1502. The terminal obtains the states of the at least two second virtual objects. In response to the at least two second virtual objects including at least two second virtual objects in the target state, step 1503 is executed; in response to none of the at least two second virtual objects within the applicable range being in the target state, step 1504 is executed; in response to exactly one of the at least two second virtual objects within the applicable range being in the target state, step 1505 is executed.
The target state is a state having an attribute gain effect or an attribute reduction effect. The terminal can acquire the states of the at least two second virtual objects within the applicable range to judge whether each second virtual object is in the target state; if so, that second virtual object can serve as the target virtual object. Specifically, the terminal may execute step 1502 and, in response to the state of any second virtual object being the target state, determine that second virtual object as the target virtual object.
After the terminal obtains the states of the at least two second virtual objects, different situations of those states correspond to different processing modes. For example, if the at least two second virtual objects include at least two second virtual objects in the target state, it must further be determined which second virtual object in the target state is to be the target virtual object; see step 1503. For another example, if exactly one second virtual object in the target state is included, step 1505 may be performed with that second virtual object as the target virtual object. For another example, if no second virtual object in the target state is included, the terminal may determine the target virtual object for skill application from the at least two second virtual objects based on other rules; see step 1504.
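The branching among steps 1502 to 1506 can be sketched as a single dispatch function. The helper names (`in_target_state`, `fallback_rule`) are hypothetical stand-ins: the former tests for an attribute gain/reduction effect, the latter embodies the other target determination rules (distance, virtual life value, type priority, and so on) discussed below.

```python
def pick_target(candidates, in_target_state, fallback_rule):
    """Sketch of the step 1502-1506 branching: candidates are the
    second virtual objects within the applicable range."""
    if not candidates:
        return None                       # no selectable target
    if len(candidates) == 1:
        return candidates[0]              # step 1506
    flagged = [c for c in candidates if in_target_state(c)]
    if len(flagged) == 1:
        return flagged[0]                 # step 1505
    if len(flagged) >= 2:
        return fallback_rule(flagged)     # step 1503
    return fallback_rule(candidates)      # step 1504
```

For example, with three candidates of which only one carries a buff, that one is returned directly; with none buffed, the fallback rule decides among all three.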
In one possible implementation, depending on the network architecture between the terminal and the server, the division of work between them differs, and so do the steps each performs. A frame synchronization architecture may be adopted between the terminal and the server, in which the server bears the main computing work: the terminal synchronizes each frame to the server, and the server calculates according to each frame uploaded by the terminal, determines the steps to be executed by the terminal, and issues them to the terminal for execution.
In this implementation, the terminal may synchronize the information of the first virtual object and the second virtual objects in the virtual scene, together with the user operation, to the server; the server determines the applicable range of the skill and the states of the second virtual objects according to the synchronized virtual object information and user operation, determines the target virtual object, and sends the information of the target virtual object to the terminal; the terminal then performs the step of applying the skill to the target virtual object.
In another possible implementation, a CS (Client/Server) architecture may be adopted between the terminal and the server, in which the terminal bears the main computing work and the server acts as a coordinator: the terminal determines and executes the corresponding steps according to user operations, and may send the data it generates to the server for verification, while the server also synchronizes the data of other terminals to the terminal.
In this implementation, the terminal may, in response to a user operation, determine the applicable range of the skill and the states of the virtual objects based on the information of each virtual object in the virtual scene, determine a target virtual object, and send it to the server for verification. If the verification passes, the server returns the information of the target virtual object to the terminal; if the verification fails, the server sends the information of the target virtual object determined by the server itself to the terminal, correcting the target virtual object determined by the terminal.
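The verify-and-correct exchange under the CS architecture can be sketched as follows. This is a minimal illustration of the reconciliation logic only (no networking); `recompute_on_server` is a hypothetical stand-in for the server independently re-deriving the target from the synchronized scene data.

```python
def verify_target(terminal_choice, recompute_on_server):
    """CS-architecture check: the server recomputes the target; on a
    mismatch the terminal's result is corrected with the server's
    determination (anti-cheat / consistency guarantee)."""
    server_choice = recompute_on_server()
    verified = (server_choice == terminal_choice)
    return verified, (terminal_choice if verified else server_choice)
```

When the terminal's result agrees with the server's recomputation the skill proceeds with the terminal's target; otherwise the server's target replaces it.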
The foregoing provides the two architectures of frame synchronization and CS, with possible implementations under each; the embodiment of the present application does not limit which architecture is specifically adopted.
1503. The terminal determines the target virtual object from the at least two second virtual objects in the target state according to the information of the at least two second virtual objects in the target state.
When, after obtaining the states of the second virtual objects, the terminal determines that at least two of them are in the target state, the terminal can further analyze the at least two second virtual objects in the target state and determine the target virtual object based on other target determination rules.
The target determination rules may be set by the relevant technical personnel according to requirements, such as nearest to the first virtual object or smallest virtual life value, which the embodiments of the present application do not limit. Several possible implementations of determining the target virtual object under different rules are provided below.
In mode one, the terminal determines the second virtual object with the smallest distance among the at least two second virtual objects as the target virtual object according to the distances between the first virtual object and the at least two second virtual objects in the target state.
In this mode, the selection of the target virtual object is performed based on distance from the first virtual object; that is, the target determination rule is to find the virtual object closest to the first virtual object and use it as the target virtual object. In other words, the first virtual object applies the skill to the nearest surrounding second virtual object.
The meaning of the second virtual object may differ depending on the application effect of the skill. For example, skill A is used to cause injury to enemy virtual objects, skill B is used to heal own-side virtual objects, and skill C is applied at a target location, causing injury or an attribute gain to enemy or own-side virtual objects. For skill A, the second virtual object is a virtual object on a different team from the first virtual object, i.e., an adversary of the first virtual object. For skill B, the second virtual object is a virtual object on the same team as the first virtual object, i.e., a teammate of the first virtual object. For skill C, the second virtual object may be any virtual object other than the first virtual object, i.e., both teammates and enemies may be included.
In mode two, the terminal determines the second virtual object with the smallest virtual life value among the at least two second virtual objects as the target virtual object according to the virtual life values of the at least two second virtual objects in the target state.
In mode three, the terminal determines the second virtual object with the smallest ratio among the at least two second virtual objects as the target virtual object according to the ratio of each second virtual object's virtual life value to its total virtual life value among the at least two second virtual objects in the target state.
In modes two and three, the selection of the target virtual object is performed based on the virtual life value of the second virtual object; that is, the target determination rule is to find the virtual object with the smallest virtual life value, or the smallest ratio of virtual life value, as the target virtual object.
Taking skills A, B, and C as examples again, the meaning of the second virtual object is the same as in mode one. For example, the second virtual object with the smallest virtual life value among teammates is selected for healing, or the one with the smallest virtual life value among enemies is selected for attacking, or the one with the smallest virtual life value among teammates or enemies is selected for skill application. For another example, the second virtual object with the smallest virtual life value percentage among teammates is selected for healing, or the one with the smallest percentage among enemies is selected for attacking, or the one with the smallest percentage among teammates or enemies is selected for skill release.
In mode four, the terminal determines the second virtual object with the highest priority among the at least two second virtual objects as the target virtual object according to the priorities of the types of the at least two second virtual objects in the target state.
In mode four, virtual objects of different types are associated with different priorities, so the selection of the target virtual object can be realized based on the priority of the type of the second virtual object. The priority of each type may be set by the relevant technical personnel according to requirements, set by users according to their own usage habits, or updated based on information in the virtual scene. For example, priorities may be set according to the type of the second virtual object, e.g., a marksman type may have higher priority than a support type.
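Mode four can be sketched as a lookup into a type-priority table. The table contents here (marksman over mage over support) are a hypothetical example of a configuration that, per the text, could equally be set by technical personnel or the user.

```python
# Hypothetical type-priority table; a smaller number means higher priority.
TYPE_PRIORITY = {"marksman": 0, "mage": 1, "support": 2}

def pick_by_type(objs):
    """Mode four sketch: choose the second virtual object whose type
    has the highest (numerically smallest) priority; unknown types
    rank last."""
    return min(objs, key=lambda o: TYPE_PRIORITY.get(o["type"], 99))
```

Given a support and a marksman within range, the marksman is selected under this table.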
In mode five, the terminal ranks the at least two second virtual objects by priority according to at least two of: the distances between the first virtual object and the at least two second virtual objects in the target state, the virtual life values of the at least two second virtual objects, the ratios of those virtual life values to the total virtual life values, or the priorities of the types of the at least two second virtual objects in the target state; and determines the second virtual object ranked highest as the target virtual object.
In mode five, when selecting the target virtual object, multiple factors may be weighed to prioritize the at least two second virtual objects, and the second virtual object with the highest priority is taken as the target virtual object. For example, referring to both distance and virtual life value, the second virtual object with the smallest distance and the smallest virtual life value may be taken as the target virtual object. In particular, different weights may be set for different factors, for example a greater weight for distance than for virtual life value: when the virtual life value of second virtual object A is slightly larger than, but not much different from, that of second virtual object B, while the distance between A and the first virtual object is far smaller than the distance between B and the first virtual object, second virtual object A may be determined as the target virtual object.
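The weighted multi-factor ranking of mode five can be sketched as a scoring function. The weights, field names, and normalization here are illustrative assumptions (distance normalized by the skill range, life value by the total life value, with distance weighted more heavily); the lowest score wins.

```python
import math

def pick_weighted(first_pos, objs, skill_range, w_dist=0.7, w_life=0.3):
    """Mode five sketch: combine normalized distance and life-value
    ratio with a heavier weight on distance; return the second
    virtual object with the lowest (best) score."""
    def score(o):
        d = math.hypot(o["x"] - first_pos[0],
                       o["y"] - first_pos[1]) / skill_range
        life = o["hp"] / o["hp_max"]
        return w_dist * d + w_life * life
    return min(objs, key=score)
```

In the A/B example above, A's slightly higher life value is outweighed by its much smaller distance, so A is chosen.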
1504. The terminal determines a target virtual object from the at least two second virtual objects according to at least one of a distance between the first virtual object and the at least two second virtual objects, virtual life values of the at least two second virtual objects, a ratio of virtual life values to total virtual life values of the at least two second virtual objects, or priorities according to types of the at least two second virtual objects.
When none of the at least two second virtual objects within the applicable range is in the target state, the terminal may determine the target virtual object based on other target determination rules. These rules may be the same as those described in step 1503, and are not repeated here.
1505. The terminal determines the second virtual object in the target state as a target virtual object.
When exactly one of the at least two second virtual objects within the applicable range is in the target state, the terminal can directly determine that second virtual object as the target virtual object.
In steps 1502 to 1505, in response to the applicable range including at least two second virtual objects, a target virtual object in the target state is determined from the at least two second virtual objects. Because the states of the second virtual objects are considered, the second virtual object in the target state is screened out from the at least two candidates, and the determined target virtual object better matches the user's expectation; this avoids the situation in which a second virtual object is in the target state but, not being the closest to the first virtual object or the one with the smallest virtual life value, is passed over, so that the virtual object the skill is applied to differs from the one the user expected.
In one possible implementation, the trigger operation may last for a certain time, and in steps 1501 to 1505 the information the terminal references when determining the target virtual object is the information at the end of the trigger operation, that is, at the moment the skill is applied, rather than at the start of the trigger operation. Specifically, in step 1501, in response to the end of the trigger operation on the skill, the terminal may acquire the applicable range of the skill according to the position of the first virtual object at the end of the trigger operation; in step 1503, in response to the applicable range including at least two second virtual objects, the terminal executes the step of determining the target virtual object in the target state from the at least two second virtual objects according to their information at the end of the trigger operation. The other steps are analogous and are not repeated here. The information on which the target determination is based thus reflects the moment of skill application, close to the actual situation of the virtual scene at that time, meeting user requirements and determining the target virtual object more accurately.
In one possible implementation, during the trigger operation, the terminal may also, in response to the continuation of the trigger operation on the skill, perform the step of acquiring the applicable range and the step of determining the target virtual object, and highlight the target virtual object. By determining the target virtual object in real time during the trigger operation and highlighting it, the user is informed of the second virtual object affected by the current operation; if it does not meet expectations, the user can continue operating to adjust the target virtual object until the trigger operation ends when the target meets expectations. The terminal's target determination process is the same as above and is not repeated here.
The terminal may use multiple display modes when displaying the target virtual object in the graphical user interface. In one display mode, the terminal may show that the target virtual object is in a selected state, for example by displaying a selection circle under the feet of the target virtual object. In another display mode, the terminal may display an identifier of the target virtual object at a target position of the graphical user interface, for example displaying the avatar of the target virtual object in the upper right corner of the screen.
1506. The terminal determines the second virtual object within the applicable range as the target virtual object.
When the applicable range includes exactly one second virtual object, the skill cannot act on any other second virtual object, and the terminal can directly determine that second virtual object as the target virtual object without judging whether it is in the target state.
1507. The terminal controls the first virtual object to apply the skill to the target virtual object.
After the terminal determines the target virtual object, it can control the first virtual object to apply the skill to the target virtual object. For example, suppose the target virtual object is second virtual object A and the skill launches a fireball at the selected target. The application effect displayed by the terminal can be: a fireball is launched at second virtual object A.
In one possible implementation, the effect of applying the skill may be achieved by playing an application animation: in step 1507, the terminal may obtain the animation of applying the skill and play it between the first virtual object and the target virtual object.
According to the method, the skill application step is performed in response to the trigger operation on the skill, and the second virtual object in the target state is found within the applicable range of the skill to serve as the target virtual object. Because the states of the second virtual objects are considered, the second virtual object in the target state is screened out from the at least two candidate second virtual objects, and the determined target virtual object better matches the user's expectation; this avoids the situation in which a second virtual object is in the target state but, not being the virtual object closest to the first virtual object or the one with the smallest virtual life value, is passed over, so that the virtual object the skill is applied to differs from the one the user expected.
The above method flow is illustrated with a specific example. For the target determination rules above, the configuration information of the Filter may be as shown in fig. 19 and may include a plurality of Filter Groups whose rule contents filter the targets of a skill, such as camp (friendly, enemy, neutral), monster type, summoned units, and the like. Beyond these types, query logic for whether a certain gain (buff) is on a character, or for a certain type of buff — that is, the query logic of the target state — is further added. For example, besides the camp class and monster type class, there are state options such as non-targetable or FakeDead (feigned death), and also judgment of the buffs on the target character: the Filter Buff Type above adds buff judgments of the knock-up class, airborne class, and knock-back class, and in the Filter Buff Condition it is set that the buff whose id (identity) is 10110312 is excluded. With these settings, many detailed rules can be set for skill targets, and in different situations different targets are selected for striking based on the rules. For example, if skill 1 first hits target Q and attaches a buff b to Q, and the player then releases skill 2 intending to hit Q again, the uniqueness of the skill target can be achieved by adding the buff b condition to the filter.
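A filter configuration of this kind can be sketched as data plus a legality check. The keys, values, and the choice of a "has buff" condition below are illustrative assumptions mirroring the skill-1/skill-2 example, not the actual configuration format of fig. 19.

```python
# Hypothetical filter for skill 2: only enemies carrying buff b
# (attached by skill 1) are legal targets.
SKILL_2_FILTER = {
    "camp": "enemy",                       # friendly / enemy / neutral
    "buff_condition": ("has", "buff_b"),   # could also be ("lacks", <id>)
}

def passes_filter(unit, flt):
    """Return True if the unit is a legal target under the filter."""
    if flt.get("camp") and unit["camp"] != flt["camp"]:
        return False
    mode, buff = flt.get("buff_condition", (None, None))
    if mode == "has":
        return buff in unit["buffs"]
    if mode == "lacks":
        return buff not in unit["buffs"]
    return True
```

Under this filter, an enemy Q carrying buff b passes while an otherwise identical enemy without the buff is rejected, which is exactly the target-uniqueness behavior described above.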
As shown in fig. 20, the above process may proceed as follows: a skill button is pressed, quick application is triggered, and a search for application targets begins. The terminal searches for legal targets (targets within the applicable range); if there are none, it continues searching; if there are, it judges whether to preferentially consider the special state ID (i.e., the target state). If so, the legal targets having the special state are ranked according to the options set by the player; if not, all legal targets are ranked according to the options set by the player; finally, the target with the highest priority is selected.
As shown in fig. 21, the match adopts a frame synchronization mechanism, and the participants of the method comprise three layers: the presentation layer, the logic layer, and the server. The presentation layer and the logic layer both run on the game client (terminal); the server consists of dedicated hosts, and all parties are connected and communicate through a network. The server is not responsible for skill and presentation logic, only for message forwarding.
The flow is specifically as follows: the user presses a skill button; the presentation layer sets the aiming point to the player's own position (i.e., the position of the first virtual object) and calls the search interface with the skill ID as a parameter, thereby starting the search (i.e., determining the target virtual object). All surrounding roles within the skill range (i.e., the applicable range) are acquired and stored in a role list; the role list is traversed and may be selected and sorted according to the skill rule, for example by least blood volume (virtual life value), nearest distance, or least blood volume percentage, any one of which may be chosen or combined, and the role with the highest priority in the role list, i.e., the first role, is taken. During the traversal, whether each role is legal (i.e., whether it is an enemy, whether its state matches the filter, and so on) is judged according to the skill filter; if legal, the next role is judged; if not, it is deleted from the role list.
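The traversal-filter-sort-select flow above can be sketched in a few lines. The helper names (`is_legal` for the skill filter check, `sort_key` for the configured ranking rule) are illustrative assumptions.

```python
def search_skill_target(roles, is_legal, sort_key):
    """Fig. 21 search sketch: drop illegal roles per the skill
    filter, sort the legal ones by the configured rule, and return
    the first (highest-priority) role, or None if no legal target."""
    legal = [r for r in roles if is_legal(r)]
    if not legal:
        return None
    legal.sort(key=sort_key)
    return legal[0]
```

For example, sorting legal enemy roles by blood volume returns the enemy with the least blood volume as the skill target.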
Fig. 22 is a schematic structural diagram of a control device for a virtual object according to an embodiment of the present application, where the device includes:
an obtaining module 2201, configured to obtain an applicable scope of the skill according to the position of the first virtual object in response to a triggering operation on the skill;
a determining module 2202, configured to determine, in response to at least two second virtual objects being included in the applicable range, a target virtual object in a target state from the at least two second virtual objects;
a control module 2203, configured to control the first virtual object to apply the skill to the target virtual object.
In one possible implementation, the determining module 2202 is configured to:
acquiring states of the at least two second virtual objects;
and determining the second virtual object as the target virtual object in response to the state of any second virtual object being the target state.
In one possible implementation, the determining module 2202 is configured to determine, in response to the at least two second virtual objects including at least two second virtual objects in the target state, a target virtual object from the at least two second virtual objects in the target state according to the information of the at least two second virtual objects in the target state.
In one possible implementation, the determination module 2202 is configured to perform any one of:
determining a second virtual object with the smallest distance in the at least two second virtual objects as the target virtual object according to the distance between the first virtual object and the at least two second virtual objects in the target state;
determining a second virtual object with the minimum virtual life value in the at least two second virtual objects as the target virtual object according to the virtual life values of the at least two second virtual objects in the target state;
determining a second virtual object with the smallest proportion in the at least two second virtual objects as the target virtual object according to the proportion of the virtual life value of the at least two second virtual objects in the target state to the total virtual life value;
determining a second virtual object with highest priority among the at least two second virtual objects as the target virtual object according to the priorities of the types of the at least two second virtual objects in the target state;
and according to at least two of the distance between the first virtual object and the at least two second virtual objects in the target state, the virtual life values of the at least two second virtual objects, the ratio of the virtual life values to the total virtual life values of the at least two second virtual objects, or the priority of the type of the at least two second virtual objects in the target state, the at least two second virtual objects are subjected to priority ranking, and the second virtual object with the highest priority in the ranking is determined to be the target virtual object.
In one possible implementation, the determination module 2202 is further configured to perform any one of:
in response to none of the at least two second virtual objects included within the applicable range being in the target state, determining a target virtual object from the at least two second virtual objects according to at least one of: the distance between the first virtual object and the at least two second virtual objects, the virtual life values of the at least two second virtual objects, the ratio of the virtual life values of the at least two second virtual objects to the total virtual life values, or the priorities of the types of the at least two second virtual objects.
In one possible implementation, the obtaining module 2201 is configured to obtain, as the applicable range of the skill, a range having a radius that is the target radius centered on the location of the first virtual object.
In one possible implementation, the determining module 2202 is configured to determine, in response to one second virtual object being included in the applicable range, that second virtual object as the target virtual object.
In one possible implementation, the target state is a state having an attribute gain effect or an attribute reduction effect.
In one possible implementation, the acquiring module 2201 is configured to: in response to the end of the trigger operation on the skill, acquire the applicable range of the skill according to the position of the first virtual object at the end of the trigger operation; and, in response to at least two second virtual objects being included in the applicable range, execute the step of determining the target virtual object in the target state from the at least two second virtual objects according to the information of the at least two second virtual objects at the end of the trigger operation.
In one possible implementation, the obtaining module 2201 is configured to perform the step of acquiring the applicable range of the skill according to the position of the first virtual object in response to the trigger operation on the skill ending while the operation position of the trigger operation is located within the target area.
In one possible implementation, the determining module 2202 is further configured to, in response to the trigger operation on the skill ending while the operation position of the trigger operation is located outside the target area, determine, as the target virtual object, a second virtual object in the corresponding direction and at the corresponding distance relative to the first virtual object according to the direction and distance of the operation position relative to the center position of the target area.
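This direction-and-distance mapping can be sketched as follows, with an assumed UI-to-world scale factor (the patent does not specify how drag length maps to scene distance):

```python
import math

def drag_to_world_target(area_center, op_pos, ui_to_world_scale):
    """Map the drag offset from the target area's center to a world-space
    (unit direction, distance) pair; returns None for a zero-length drag."""
    dx, dy = op_pos[0] - area_center[0], op_pos[1] - area_center[1]
    length = math.hypot(dx, dy)
    if length == 0:
        return None  # no direction: caller may fall back to automatic targeting
    direction = (dx / length, dy / length)
    return direction, length * ui_to_world_scale
```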
In one possible implementation, the apparatus further includes:
and a display module, configured to, while the trigger operation on the skill continues, perform the step of acquiring the applicable range and the step of determining the target virtual object, and to highlight the target virtual object.
With the device provided by this embodiment of the application, the skill-applying step is performed in response to the trigger operation on the skill, and a second virtual object in the target state is selected from within the skill's applicable range as the target virtual object. Because the state of each second virtual object is taken into account, the second virtual objects in the target state are screened out from the at least two candidate second virtual objects, so the determined target virtual object better matches the user's expectation. This avoids the situation in which a second virtual object is in the target state but, because it is neither the closest to the first virtual object nor the one with the lowest virtual life value, the skill is applied to a virtual object different from the one the user expected. The target virtual object determined in this way is therefore more accurate, control of the virtual object is more precise, and the skill-applying display effect is better.
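The screening described above matches the role-list flow recited in the claims (store in-range objects in a role list, sort them, drop entries not in the target state while traversing, take the first survivor). A hypothetical sketch, with all parameter names assumed:

```python
def select_from_role_list(in_range_objects, sort_key, in_target_state):
    """Store in-range second virtual objects in a role list, sort them by the
    given key, filter out entries not in the target state while traversing,
    and return the first remaining role (or None if the list empties)."""
    role_list = sorted(in_range_objects, key=sort_key)
    role_list = [obj for obj in role_list if in_target_state(obj)]
    return role_list[0] if role_list else None
```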
It should be noted that the control device for a virtual object provided in the above embodiment is illustrated only by the division into the above functional modules when controlling a virtual object. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the electronic device is divided into different functional modules to perform all or part of the functions described above. In addition, the control device for a virtual object provided in the above embodiment belongs to the same concept as the embodiments of the control method for a virtual object; its specific implementation process is detailed in the method embodiments and is not repeated here.
The above-described electronic device may be provided as the terminal shown in fig. 23 below, or as the server shown in fig. 24 below, which is not limited in the embodiments of the present application.
Fig. 23 is a schematic structural diagram of a terminal 2300 according to an embodiment of the present application. The terminal 2300 may be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. Terminal 2300 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 2300 includes: a processor 2301 and a memory 2302.
The processor 2301 may include one or more processing cores, such as a 4-core or 8-core processor. The processor 2301 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 2301 may also include a main processor and a coprocessor. The main processor, also called a CPU (Central Processing Unit), is a processor for processing data in an awake state; the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 2301 may integrate a GPU (Graphics Processing Unit) responsible for rendering content to be shown on the display screen. In some embodiments, the processor 2301 may also include an AI (Artificial Intelligence) processor for handling machine-learning computations.
Memory 2302 may include one or more computer-readable storage media, which may be non-transitory. Memory 2302 may also include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in memory 2302 stores at least one instruction that is executed by the processor 2301 to implement the terminal-side steps of the control method for a virtual object provided by the embodiments of the present application.
In some embodiments, the terminal 2300 may further optionally include: a peripheral interface 2303 and at least one peripheral. The processor 2301, memory 2302 and peripheral interface 2303 may be connected by a bus or signal line. Individual peripheral devices may be connected to peripheral device interface 2303 by buses, signal lines or a circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 2304, a touch display 2305, a camera assembly 2306, audio circuitry 2307 and a power supply 2309.
Peripheral interface 2303 may be used to connect at least one Input/Output (I/O) related peripheral to the processor 2301 and the memory 2302. In some embodiments, the processor 2301, the memory 2302, and the peripheral interface 2303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 2301, the memory 2302, and the peripheral interface 2303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 2304 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 2304 communicates with communication networks and other communication devices via electromagnetic signals, converting electrical signals into electromagnetic signals for transmission, or converting received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 2304 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 2304 may communicate with other terminals via at least one wireless communication protocol, including but not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 2304 may also include NFC (Near Field Communication) related circuits, which is not limited in the present application.
The display 2305 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display 2305 is a touch display, it can also collect touch signals at or above its surface; such a touch signal may be input to the processor 2301 as a control signal for processing. In that case, the display 2305 may also provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 2305, disposed on the front panel of the terminal 2300; in other embodiments, there may be at least two displays 2305, each disposed on a different surface of the terminal 2300 or in a folded design; in still other embodiments, the display 2305 may be a flexible display disposed on a curved or folded surface of the terminal 2300. The display 2305 may even be arranged in a non-rectangular, irregular pattern, that is, an irregularly shaped screen. The display 2305 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or other materials.
The camera assembly 2306 is used to capture images or video. Optionally, the camera assembly 2306 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera on the rear surface. In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main and depth-of-field cameras can be fused for a background-blurring function, or the main and wide-angle cameras can be fused for panoramic shooting, VR (Virtual Reality) shooting, or other fused shooting functions. In some embodiments, the camera assembly 2306 may also include a flash, which may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash combines a warm-light flash with a cold-light flash and can be used for light compensation under different color temperatures.
The audio circuit 2307 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 2301 for processing, or inputting the electric signals to the radio frequency circuit 2304 for voice communication. For the purpose of stereo acquisition or noise reduction, a plurality of microphones may be provided at different portions of the terminal 2300, respectively. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is then used to convert electrical signals from the processor 2301 or the radio frequency circuit 2304 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, audio circuit 2307 may also include a headphone jack.
The power supply 2309 is used to supply power to the various components in the terminal 2300. The power supply 2309 may use alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 2309 includes a rechargeable battery, the battery may support wired or wireless charging, and may also support fast-charging technology.
In some embodiments, the terminal 2300 further includes one or more sensors 2310. The one or more sensors 2310 include, but are not limited to: an acceleration sensor 2311, a gyro sensor 2312, a pressure sensor 2313, an optical sensor 2315, and a proximity sensor 2316.
The acceleration sensor 2311 may detect the magnitudes of accelerations on three coordinate axes of a coordinate system established with the terminal 2300. For example, the acceleration sensor 2311 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 2301 may control the touch display 2305 to display a user interface in either a landscape view or a portrait view based on gravitational acceleration signals acquired by the acceleration sensor 2311. The acceleration sensor 2311 may also be used for the acquisition of motion data of a game or a user.
The gyro sensor 2312 may detect a body direction and a rotation angle of the terminal 2300, and the gyro sensor 2312 may collect a 3D motion of the user to the terminal 2300 in cooperation with the acceleration sensor 2311. The processor 2301 may perform the following functions based on the data collected by the gyro sensor 2312: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
The pressure sensor 2313 may be disposed on a side frame of the terminal 2300 and/or in a lower layer of the touch display 2305. When the pressure sensor 2313 is disposed on a side frame of the terminal 2300, it can detect the user's grip signal on the terminal 2300, and the processor 2301 performs left- or right-hand recognition or a quick operation according to the grip signal collected by the pressure sensor 2313. When the pressure sensor 2313 is disposed in the lower layer of the touch display 2305, the processor 2301 controls the operability controls on the UI according to the user's pressure operation on the touch display 2305. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The optical sensor 2315 is used to collect ambient light intensity. In one embodiment, the processor 2301 may control the display brightness of the touch display 2305 based on the ambient light intensity collected by the optical sensor 2315. Specifically, when the ambient light intensity is high, the display luminance of the touch display screen 2305 is turned up; when the ambient light intensity is low, the display brightness of the touch display screen 2305 is turned down. In another embodiment, the processor 2301 may also dynamically adjust the photographing parameters of the camera assembly 2306 based on the intensity of ambient light collected by the optical sensor 2315.
A proximity sensor 2316, also referred to as a distance sensor, is typically provided at the front panel of the terminal 2300. The proximity sensor 2316 is used to collect a distance between a user and the front surface of the terminal 2300. In one embodiment, when the proximity sensor 2316 detects that the distance between the user and the front surface of the terminal 2300 gradually decreases, the processor 2301 controls the touch display 2305 to switch from the bright screen state to the off screen state; when the proximity sensor 2316 detects that the distance between the user and the front surface of the terminal 2300 becomes gradually larger, the processor 2301 controls the touch display 2305 to switch from the off-screen state to the on-screen state.
Those skilled in the art will appreciate that the structure shown in fig. 23 is not limiting and that more or fewer components than shown may be included or certain components may be combined or a different arrangement of components may be employed.
Fig. 24 is a schematic structural diagram of a server 2400 according to an embodiment of the present application. The server 2400 may vary considerably in configuration or performance, and may include one or more processors (Central Processing Units, CPUs) 2401 and one or more memories 2402. The memory 2402 stores at least one piece of program code, which is loaded and executed by the processor 2401 to implement the server-side steps of the control method for a virtual object provided by the foregoing embodiments. Of course, the server 2400 may also have a wired or wireless network interface, a keyboard, an input/output interface, and other components for input and output, as well as other components for implementing device functions, which are not described here.
In an exemplary embodiment, a computer-readable storage medium is also provided, for example a memory including at least one piece of program code that is executable by a processor in an electronic device to perform the control method for a virtual object in the above embodiments. For example, the computer-readable storage medium may be a ROM (Read-Only Memory), a RAM (Random-Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, or the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the above storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description of the preferred embodiments of the present application is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements within the spirit and principles of the present application.

Claims (14)

1. A control method for a virtual object, wherein an active application mode and a quick application mode of a skill are provided for a virtual object, the active application mode referring to determining the application target of the skill through a user operation, and the quick application mode referring to the terminal automatically determining the application target of the skill, the method comprising:
providing, in a virtual scene, a target area corresponding to the quick application mode and an active application area corresponding to the active application mode, wherein the active application area is located outside the target area, and a trigger button is provided at the center of the target area;
displaying an application target determination control when it is detected that the trigger operation on the skill is activated through the trigger button, wherein the application target determination control overlaps the active application area;
in response to the end of the trigger operation on the skill, determining that the quick application mode applies when the operation position of the trigger operation is located within the target area at the end of the trigger operation, and acquiring the applicable range of the skill according to the position of the first virtual object; in response to second virtual objects being included within the applicable range, determining whether to judge the target state preferentially; in response to judging the target state preferentially and the applicable range including at least two second virtual objects, acquiring the states of the at least two second virtual objects, wherein the target state is a selection rule, set by the user through a filter, for the target of the skill, and the target state is a state attached by a skill possessed by the first virtual object other than the skill corresponding to the trigger button;
determining, when only one second virtual object is in the target state, that second virtual object as the target virtual object, the target state being a state having an attribute gain effect or an attribute reduction effect; and determining, when at least two second virtual objects are in the target state, a target virtual object from the at least two second virtual objects in the target state according to information of the at least two second virtual objects in the target state;
in response to the end of the trigger operation on the skill, determining that the active application mode applies when the operation position of the trigger operation is located within the active application area at the end of the trigger operation, and determining, based on the application target determination control, the direction and distance of the operation position relative to the center position of the target area;
determining, as the target virtual object, a second virtual object in the corresponding direction and at the corresponding distance relative to the first virtual object according to the direction and distance of the operation position relative to the center position of the target area;
controlling the first virtual object to apply the skill to the target virtual object;
wherein the determining, when only one second virtual object is in the target state, that second virtual object as the target virtual object, the target state being a state having an attribute gain effect or an attribute reduction effect, and the determining, when at least two second virtual objects are in the target state, a target virtual object from the at least two second virtual objects in the target state according to information of the at least two second virtual objects in the target state, comprise:
storing the second virtual objects included within the applicable range into a role list, traversing the role list, and sorting the second virtual objects in the role list according to information of the second virtual objects; judging, during the traversal of the role list, whether the current second virtual object is in the target state, and deleting the current second virtual object from the role list if it is not; and determining the first role in the traversed role list as the target virtual object.
2. The method according to claim 1, wherein the determining a target virtual object from the at least two second virtual objects in the target state according to the information of the at least two second virtual objects in the target state includes any one of:
Determining a second virtual object with the smallest distance in the at least two second virtual objects as the target virtual object according to the distance between the first virtual object and the at least two second virtual objects in the target state;
determining a second virtual object with the minimum virtual life value in the at least two second virtual objects as a target virtual object according to the virtual life values of the at least two second virtual objects in the target state;
determining a second virtual object with the smallest proportion in the at least two second virtual objects as the target virtual object according to the proportion of the virtual life value of the at least two second virtual objects in the target state to the total virtual life value;
determining a second virtual object with highest priority among the at least two second virtual objects as the target virtual object according to the priorities of the types of the at least two second virtual objects in the target state;
and ranking the at least two second virtual objects by priority according to at least two of: the distance between the first virtual object and the at least two second virtual objects in the target state, the virtual life values of the at least two second virtual objects, the ratios of the virtual life values of the at least two second virtual objects to their total virtual life values, or the priorities of the types of the at least two second virtual objects in the target state; and determining the second virtual object ranked highest as the target virtual object.
3. The method of claim 1, further comprising any one of:
and determining, in response to none of the at least two second virtual objects included within the applicable range being in the target state, a target virtual object from the at least two second virtual objects according to at least one of: the distance between the first virtual object and the at least two second virtual objects, the virtual life values of the at least two second virtual objects, the ratios of the virtual life values of the at least two second virtual objects to their total virtual life values, or the priorities of the types of the at least two second virtual objects.
4. The method of claim 1, wherein the obtaining the applicable scope of the skill based on the location of the first virtual object comprises:
taking the position of the first virtual object as the center, and acquiring a range whose radius is the target radius as the applicable range of the skill.
5. The method according to claim 1, wherein the method further comprises:
in response to one second virtual object being included within the applicable range, determining the second virtual object within the applicable range as the target virtual object.
6. The method according to claim 1, wherein the method further comprises:
and, while the trigger operation on the skill continues, performing the step of acquiring the applicable range and the step of determining the target virtual object, and highlighting the target virtual object.
7. A control device for a virtual object, wherein an active application mode and a quick application mode of a skill are provided for a virtual object, the active application mode referring to determining the application target of the skill through a user operation, and the quick application mode referring to the terminal automatically determining the application target of the skill, the device comprising:
a display module, configured to provide, in a virtual scene, a target area corresponding to the quick application mode and an active application area corresponding to the active application mode, wherein the active application area is located outside the target area, and a trigger button is provided at the center of the target area; and to display an application target determination control when it is detected that the trigger operation on the skill is activated through the trigger button, the application target determination control overlapping the active application area;
an acquisition module, configured to, in response to the end of the trigger operation on the skill, determine that the quick application mode applies when the operation position of the trigger operation is located within the target area at the end of the trigger operation, and acquire the applicable range of the skill according to the position of the first virtual object;
a determining module, configured to determine, in response to second virtual objects being included within the applicable range, whether to judge the target state preferentially; to acquire, in response to judging the target state preferentially and the applicable range including at least two second virtual objects, the states of the at least two second virtual objects; to determine, when only one second virtual object is in the target state, that second virtual object as the target virtual object, the target state being a state having an attribute gain effect or an attribute reduction effect; and to determine, when at least two second virtual objects are in the target state, a target virtual object from the at least two second virtual objects in the target state according to information of the at least two second virtual objects in the target state; wherein the target state is a selection rule, set by the user through a filter, for the target of the skill, and the target state is a state attached by a skill possessed by the first virtual object other than the skill corresponding to the trigger button;
the acquisition module is further configured to, in response to the end of the trigger operation on the skill, determine that the active application mode applies when the operation position of the trigger operation is located within the active application area at the end of the trigger operation, and determine, based on the application target determination control, the direction and distance of the operation position relative to the center position of the target area;
The determining module is further configured to determine, as a target virtual object, a second virtual object in a corresponding direction and a corresponding distance with respect to the first virtual object according to a direction and a distance of the operation position with respect to a center position of the target area;
a control module for controlling the first virtual object to apply the skill to the target virtual object;
the determining module is configured to store the second virtual objects included within the applicable range into a role list, traverse the role list, and sort the second virtual objects in the role list according to information of the second virtual objects; to judge, during the traversal of the role list, whether the current second virtual object is in the target state, and delete the current second virtual object from the role list if it is not; and to determine the first role in the traversed role list as the target virtual object.
8. The apparatus of claim 7, wherein the determining module is configured to perform any one of:
determining a second virtual object with the smallest distance in the at least two second virtual objects as the target virtual object according to the distance between the first virtual object and the at least two second virtual objects in the target state;
Determining a second virtual object with the minimum virtual life value in the at least two second virtual objects as a target virtual object according to the virtual life values of the at least two second virtual objects in the target state;
determining a second virtual object with the smallest proportion in the at least two second virtual objects as the target virtual object according to the proportion of the virtual life value of the at least two second virtual objects in the target state to the total virtual life value;
determining a second virtual object with highest priority among the at least two second virtual objects as the target virtual object according to the priorities of the types of the at least two second virtual objects in the target state;
and ranking the at least two second virtual objects by priority according to at least two of: the distance between the first virtual object and the at least two second virtual objects in the target state, the virtual life values of the at least two second virtual objects, the ratios of the virtual life values of the at least two second virtual objects to their total virtual life values, or the priorities of the types of the at least two second virtual objects in the target state; and determining the second virtual object ranked highest as the target virtual object.
9. The apparatus of claim 7, wherein the determining module is further configured to perform any one of:
and determining, in response to none of the at least two second virtual objects included within the applicable range being in the target state, a target virtual object from the at least two second virtual objects according to at least one of: the distance between the first virtual object and the at least two second virtual objects, the virtual life values of the at least two second virtual objects, the ratios of the virtual life values of the at least two second virtual objects to their total virtual life values, or the priorities of the types of the at least two second virtual objects.
10. The apparatus of claim 7, wherein the obtaining module is configured to obtain, centered on the location of the first virtual object, a range with a radius equal to a target radius as the applicable range of the skill.
11. The apparatus of claim 7, wherein the obtaining module is configured to determine, in response to a second virtual object being included within the applicable range, the second virtual object within the applicable range as the target virtual object.
12. The apparatus of claim 7, wherein the display module is further configured to:
highlighting the target virtual object in response to the step of acquiring the applicable range and the step of determining the target virtual object being performed within the duration of the trigger operation on the skill.
13. An electronic device, comprising one or more processors and one or more memories, wherein the one or more memories store at least one piece of program code, and the at least one piece of program code is loaded and executed by the one or more processors to implement the method for controlling a virtual object according to any one of claims 1 to 6.
14. A storage medium, wherein the storage medium stores at least one piece of program code, and the at least one piece of program code is loaded and executed by a processor to implement the method for controlling a virtual object according to any one of claims 1 to 6.
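The target-selection logic recited in claims 8 and 9 can be sketched as follows. This is an illustrative sketch only, not the patented implementation; all class, field, and function names are hypothetical, and it shows just one of the claimed strategies (smallest ratio of virtual life value to total virtual life value, with ties broken by type priority and then by distance):

```python
from dataclasses import dataclass

# Hypothetical model of a candidate second virtual object;
# field names are illustrative, not taken from the patent.
@dataclass
class Candidate:
    hp: float           # current virtual life value
    max_hp: float       # total virtual life value
    type_priority: int  # priority of the object's type (higher = preferred)
    distance: float     # distance to the first virtual object

def pick_target(candidates):
    """Pick the target among candidates in the target state.

    Prefers the candidate whose ratio of virtual life value to total
    virtual life value is smallest; ties are broken by higher type
    priority, then by shorter distance to the first virtual object.
    """
    return min(
        candidates,
        key=lambda c: (c.hp / c.max_hp, -c.type_priority, c.distance),
    )
```

A claim-8-style combined ranking would extend the `key` tuple with whichever two or more criteria are selected; the tuple ordering determines which criterion dominates.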
CN202010331198.7A 2020-04-24 2020-04-24 Virtual object control method, device, equipment and medium Active CN111589129B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010331198.7A CN111589129B (en) 2020-04-24 2020-04-24 Virtual object control method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010331198.7A CN111589129B (en) 2020-04-24 2020-04-24 Virtual object control method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN111589129A CN111589129A (en) 2020-08-28
CN111589129B true CN111589129B (en) 2023-08-15

Family

ID=72185148

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010331198.7A Active CN111589129B (en) 2020-04-24 2020-04-24 Virtual object control method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN111589129B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114100131A (en) * 2021-11-12 2022-03-01 腾讯科技(深圳)有限公司 Virtual object control method, device, terminal, storage medium and program product
CN115671734B (en) * 2023-01-04 2023-05-02 赤子城网络技术(北京)有限公司 Virtual object control method and device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105194873A (en) * 2015-10-10 2015-12-30 腾讯科技(深圳)有限公司 Information-processing method, terminal and computer storage medium
CN107398071A (en) * 2017-07-19 2017-11-28 网易(杭州)网络有限公司 Mission Objective system of selection and device
CN107450812A (en) * 2017-06-26 2017-12-08 网易(杭州)网络有限公司 Virtual object control method and device, storage medium, electronic equipment
CN108970116A (en) * 2018-07-19 2018-12-11 腾讯科技(深圳)有限公司 Virtual role control method and device
CN110215709A (en) * 2019-06-04 2019-09-10 网易(杭州)网络有限公司 Object select method, apparatus, storage medium and electronic equipment
CN110694261A (en) * 2019-10-21 2020-01-17 腾讯科技(深圳)有限公司 Method, terminal and storage medium for controlling virtual object to attack

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105214309B (en) * 2015-10-10 2017-07-11 腾讯科技(深圳)有限公司 A kind of information processing method, terminal and computer-readable storage medium
CN107754309B (en) * 2017-09-30 2019-03-08 网易(杭州)网络有限公司 Information processing method, device, electronic equipment and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105194873A (en) * 2015-10-10 2015-12-30 腾讯科技(深圳)有限公司 Information-processing method, terminal and computer storage medium
CN107450812A (en) * 2017-06-26 2017-12-08 网易(杭州)网络有限公司 Virtual object control method and device, storage medium, electronic equipment
CN107398071A (en) * 2017-07-19 2017-11-28 网易(杭州)网络有限公司 Mission Objective system of selection and device
CN108970116A (en) * 2018-07-19 2018-12-11 腾讯科技(深圳)有限公司 Virtual role control method and device
CN110215709A (en) * 2019-06-04 2019-09-10 网易(杭州)网络有限公司 Object select method, apparatus, storage medium and electronic equipment
CN110694261A (en) * 2019-10-21 2020-01-17 腾讯科技(深圳)有限公司 Method, terminal and storage medium for controlling virtual object to attack

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A Collaborative Filtering Recommendation Algorithm Based on Information Aging; Wang Yubin et al.; Journal of Electronics &amp; Information Technology; full text *

Also Published As

Publication number Publication date
CN111589129A (en) 2020-08-28

Similar Documents

Publication Publication Date Title
CN111589124B (en) Virtual object control method, device, terminal and storage medium
CN110585710B (en) Interactive property control method, device, terminal and storage medium
WO2021184806A1 (en) Interactive prop display method and apparatus, and terminal and storage medium
CN110917619B (en) Interactive property control method, device, terminal and storage medium
CN111589130B (en) Virtual object control method, device, equipment and storage medium in virtual scene
CN111589140B (en) Virtual object control method, device, terminal and storage medium
CN112076467B (en) Method, device, terminal and medium for controlling virtual object to use virtual prop
CN111462307A (en) Virtual image display method, device, equipment and storage medium of virtual object
CN111589133A (en) Virtual object control method, device, equipment and storage medium
CN111596838B (en) Service processing method and device, computer equipment and computer readable storage medium
CN111494937B (en) Virtual object control method, virtual object information synchronization device, virtual object information synchronization equipment and virtual object information synchronization medium
CN111672114B (en) Target virtual object determination method, device, terminal and storage medium
CN111714893A (en) Method, device, terminal and storage medium for controlling virtual object to recover attribute value
CN111672106B (en) Virtual scene display method and device, computer equipment and storage medium
CN113289331B (en) Display method and device of virtual prop, electronic equipment and storage medium
CN111921190B (en) Prop equipment method, device, terminal and storage medium for virtual object
CN111589136B (en) Virtual object control method and device, computer equipment and storage medium
CN111921194A (en) Virtual environment picture display method, device, equipment and storage medium
CN113101656B (en) Virtual object control method, device, terminal and storage medium
CN113680060B (en) Virtual picture display method, apparatus, device, medium and computer program product
CN111672102A (en) Virtual object control method, device, equipment and storage medium in virtual scene
CN112402961A (en) Interactive information display method and device, electronic equipment and storage medium
WO2021143253A1 (en) Method and apparatus for operating virtual prop in virtual environment, device, and readable medium
CN111672108A (en) Virtual object display method, device, terminal and storage medium
CN111589129B (en) Virtual object control method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40029134

Country of ref document: HK

GR01 Patent grant