WO2021213026A1 - Control method, apparatus, device and storage medium for virtual object - Google Patents

Control method, apparatus, device and storage medium for virtual object

Info

Publication number
WO2021213026A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual object
target
virtual
control
trigger signal
Prior art date
Application number
PCT/CN2021/079592
Other languages
English (en)
French (fr)
Inventor
胡勋
翁建苗
万钰林
粟山东
张勇
Original Assignee
腾讯科技(深圳)有限公司
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Priority to AU2021240132A (AU2021240132A1)
Priority to KR1020217035074 (KR20210143300A)
Priority to EP21778333.1 (EP3936207A4)
Priority to CA3133467 (CA3133467A1)
Priority to SG11202111219T (SG11202111219TA)
Priority to JP2021566600 (JP7476235B2)
Publication of WO2021213026A1
Priority to US17/530,382 (US20220072428A1)

Classifications

    • A63F13/56 Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A63F13/426 Mapping input signals into game commands involving on-screen location information, e.g. screen coordinates of an area at which the player is aiming
    • A63F13/2145 Input arrangements for locating contacts on a surface, the surface being also a display device, e.g. touch screens
    • A63F13/218 Input arrangements using pressure sensors, e.g. generating a signal proportional to the pressure applied by the player
    • A63F13/422 Mapping input signals into game commands automatically for the purpose of assisting the player, e.g. automatic braking in a driving game
    • A63F13/45 Controlling the progress of the video game
    • A63F13/537 Additional visual information provided to the game scene using indicators, e.g. showing the condition of a game character on screen
    • A63F13/5372 Indicators for tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
    • A63F13/58 Controlling game characters or game objects by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
    • A63F13/822 Strategy games; Role-playing games
    • A63F13/92 Video game devices specially adapted to be hand-held while playing
    • A63F2300/1056 Input arrangements involving pressure sensitive buttons
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Definitions

  • The embodiments of the present application relate to the field of computer technology, and in particular, to a method, apparatus, device, and storage medium for controlling a virtual object.
  • In the related art, the user quickly releases a skill on a virtual object by tapping a skill control.
  • The virtual object is a default virtual object determined by the server according to client data, or the user can actively select the virtual object and release the skill by dragging the skill control.
  • When the skill is released, the presentation layer of the application specially marks the virtual object on which the skill is released and displays it in the virtual environment picture.
  • However, the user can only confirm whether the release target is the expected virtual object during the skill release process, and cannot know which virtual object the skill will act on before releasing the skill; this may cause the skill to be released on a wrong target, resulting in a waste of skill resources.
  • The embodiments of the present application provide a method, apparatus, device, and storage medium for controlling a virtual object, which enable a user to know, through a mark and before performing an operation, the target virtual object that the operation will act on, thereby improving the control efficiency and control accuracy of the virtual object.
  • the technical solution is as follows:
  • an embodiment of the present application provides a method for controlling a virtual object.
  • the method is applied to a terminal, and the method includes:
  • displaying a game interface, where the game interface includes a first virtual object, at least one second virtual object, and a first control, the first virtual object and the second virtual object are located in a virtual world, the first virtual object and the second virtual object belong to different camps, and the first control is used to control the first virtual object to use a virtual prop to change target attribute values of other virtual objects;
  • determining a target virtual object from the at least one second virtual object, and marking the target virtual object in a predetermined manner;
  • receiving a first trigger signal acting on the first control; and
  • in response to the first trigger signal meeting an automatic control condition, controlling the first virtual object to use the virtual prop to change the target attribute value of the target virtual object.
  • an embodiment of the present application provides a virtual object control device, the device includes:
  • a display module, configured to display a game interface, where the game interface includes a first virtual object, at least one second virtual object, and a first control, the first virtual object and the second virtual object are located in a virtual world, the first virtual object and the second virtual object belong to different camps, and the first control is used to control the first virtual object to use virtual props to change target attribute values of other virtual objects;
  • a first determining module configured to determine a target virtual object from at least one of the second virtual objects, and mark the target virtual object in a predetermined manner
  • a receiving module configured to receive a first trigger signal acting on the first control
  • the first control module is configured to control the first virtual object to use virtual props to change the target attribute value of the target virtual object in response to the first trigger signal meeting the automatic control condition.
  • An embodiment of the present application provides a computer device, the computer device including a processor and a memory, where at least one program is stored in the memory, and the at least one program is loaded and executed by the processor to implement the control method of the virtual object described in the above aspect.
  • An embodiment of the present application provides a computer-readable storage medium in which at least one program is stored, and the at least one program is loaded and executed by a processor to implement the control method of the virtual object described in the above aspect.
  • An embodiment of the present application provides a computer program product or computer program, which includes computer instructions stored in a computer-readable storage medium.
  • the processor of the terminal reads the computer instruction from the computer-readable storage medium, and the processor executes the computer instruction, so that the terminal executes the virtual object control method provided in the various optional implementation manners of the foregoing aspects.
  • The target virtual object in the game interface is marked in a predetermined manner, and upon receiving a first trigger signal acting on the first control that meets the automatic control condition, the first virtual object is controlled to change the target attribute value of the target virtual object. Because the target virtual object is marked before the first virtual object is controlled to change its target attribute value, the user can know the target virtual object through the mark while not performing any operation. If the target virtual object does not match the expected object, the user can change the target virtual object through other operations.
  • If the target virtual object matches the expected object, the user can quickly perform the operation through the first control, which improves the control efficiency and control accuracy of the virtual object; at the same time, the need to confirm and mark the operation object during the operation is avoided, which reduces the operation execution delay and thereby improves the efficiency of human-computer interaction.
  • Fig. 1 is a schematic diagram of an implementation environment provided according to an exemplary embodiment of the present application
  • Fig. 2 is a flowchart of a method for controlling a virtual object according to an exemplary embodiment of the present application
  • Fig. 3 is a schematic diagram of a game interface provided according to an exemplary embodiment of the present application.
  • Fig. 4 is a schematic diagram of target virtual object marking provided according to an exemplary embodiment of the present application.
  • Fig. 5 is a schematic diagram of a game interface provided according to another exemplary embodiment of the present application.
  • Fig. 6 is a flowchart of a method for controlling a virtual object according to another exemplary embodiment of the present application.
  • Fig. 7 is a schematic diagram of determining a candidate virtual object according to an exemplary embodiment of the present application.
  • Fig. 8 is a schematic diagram of a first control provided according to an exemplary embodiment of the present application.
  • Fig. 9 is a schematic diagram of a game interface provided according to another exemplary embodiment of the present application.
  • Fig. 10 is a schematic diagram of a game interface provided according to another exemplary embodiment of the present application.
  • Fig. 11 is a flowchart of a method for controlling a virtual object according to another exemplary embodiment of the present application.
  • Fig. 12 is a schematic diagram of a game interface provided according to another exemplary embodiment of the present application.
  • Fig. 13 is a flowchart of a method for controlling a virtual object according to another exemplary embodiment of the present application.
  • Fig. 14 is a structural block diagram of a virtual object control device provided according to an exemplary embodiment of the present application.
  • Fig. 15 is a structural block diagram of a terminal provided according to an exemplary embodiment of the present application.
  • Fig. 16 is a structural block diagram of a server provided according to an exemplary embodiment of the present application.
  • the "plurality” mentioned herein means two or more.
  • “And/or” describes the association relationship of the associated objects, indicating that there can be three types of relationships, for example, A and/or B, which can mean: A alone exists, A and B exist at the same time, and B exists alone.
  • the character “/” generally indicates that the associated objects before and after are in an "or” relationship.
  • Virtual world: the virtual world that is displayed (or provided) when the application is running on the terminal.
  • the virtual world may be a simulation of the real world, a semi-simulated and semi-fictional three-dimensional world, or a purely fictitious three-dimensional world.
  • the virtual world can be any of a two-dimensional virtual world, a 2.5-dimensional virtual world, and a three-dimensional virtual world.
  • the virtual world is also used for a virtual world battle between at least two virtual objects, and there are virtual resources available for the at least two virtual objects in the virtual world.
  • For example, the virtual world includes a symmetrical lower-left corner area and an upper-right corner area; virtual objects belonging to two rival camps each occupy one of the areas, and take destroying the target building/fortress/base/crystal deep in the opponent's area as the victory goal.
  • Virtual object refers to the movable object in the virtual world.
  • the movable object may be at least one of a virtual character, a virtual animal, and an animation character.
  • When the virtual world is a three-dimensional virtual world, the virtual object may be a three-dimensional model; each virtual object has its own shape and volume in the three-dimensional virtual world and occupies a part of the space in the three-dimensional virtual world.
  • the virtual object is a three-dimensional character constructed based on three-dimensional human bone technology, and the virtual object realizes different external images by wearing different skins.
  • the virtual object may also be implemented using a 2.5-dimensional or 2-dimensional model, which is not limited in the embodiment of the present application.
  • Multiplayer online tactical competitive (MOBA) game: a game in which different virtual teams belonging to at least two rival camps occupy their respective map areas in a virtual world and compete with a certain victory condition as the goal.
  • The victory conditions include, but are not limited to, at least one of: occupying strongholds or destroying the enemy camp's strongholds, killing the virtual objects of the enemy camp, ensuring one's own survival in a specified scene and time, grabbing a certain resource, and surpassing the opponent's score within a specified time.
  • Tactical competition can be carried out in units of rounds, and the map of each round of tactical competition can be the same or different.
  • Each virtual team includes one or more virtual objects, such as 1, 2, 3, or 5, etc.
  • Virtual props: props that virtual objects can use in the virtual environment, including virtual weapons such as pistols, rifles, sniper rifles, daggers, knives, swords, axes, and ropes that can cause damage to other virtual objects; supply props such as bullets; defensive props such as shields, armor, and armored vehicles; virtual props such as virtual beams and virtual shock waves displayed through the hands when a virtual object releases a skill; and parts of the virtual object's body, such as hands and legs.
  • Optionally, the virtual props in the embodiments of the present application refer to the props equipped by the virtual object by default.
  • UI control: any visual control or element that can be seen on the user interface of the application, such as pictures, input boxes, text boxes, buttons, and labels; some UI controls respond to user operations.
  • FIG. 1 shows a schematic diagram of an implementation environment provided by an embodiment of the present application.
  • the implementation environment may include: a first terminal 110, a server 120, and a second terminal 130.
  • the first terminal 110 installs and runs an application program 111 supporting the virtual world, and the application program 111 may be a multiplayer online battle program.
  • the user interface of the application program 111 is displayed on the screen of the first terminal 110.
  • The application program 111 may be any one of a military simulation program, a MOBA game, a battle royale shooting game, and a simulation strategy game (SLG).
  • In this embodiment, the application program 111 being a MOBA game is taken as an example.
  • the first terminal 110 is a terminal used by the first user 112.
  • The first user 112 uses the first terminal 110 to control a first virtual object located in the virtual world to perform activities; the first virtual object may be referred to as the master virtual object of the first user 112.
  • the activities of the first virtual object include, but are not limited to, at least one of adjusting body posture, crawling, walking, running, riding, flying, jumping, driving, picking up, shooting, attacking, throwing, and releasing skills.
  • the first virtual object is a first virtual character, such as a simulated character or an animation character.
  • The second terminal 130 installs and runs an application program 131 supporting the virtual world, and the application program 131 may be a multiplayer online battle program.
  • The application program 131 may be any one of a military simulation program, a MOBA game, a battle royale shooting game, and an SLG game.
  • In this embodiment, the application program 131 being a MOBA game is taken as an example.
  • the second terminal 130 is the terminal used by the second user 132.
  • The second user 132 uses the second terminal 130 to control a second virtual object located in the virtual world to perform activities; the second virtual object may be referred to as the master virtual character of the second user 132.
  • the second virtual object is a second virtual character, such as a simulated character or an animation character.
  • the first virtual object and the second virtual object are in the same virtual world.
  • the first virtual object and the second virtual object may belong to the same camp, the same team, the same organization, have a friend relationship, or have temporary communication permissions.
  • the first virtual object and the second virtual object may belong to different camps, different teams, different organizations, or have a hostile relationship.
  • The applications installed on the first terminal 110 and the second terminal 130 are the same, or the applications installed on the two terminals are the same type of applications on different operating system platforms (Android or iOS).
  • the first terminal 110 may generally refer to one of multiple terminals, and the second terminal 130 may generally refer to another of multiple terminals. This embodiment only uses the first terminal 110 and the second terminal 130 as examples.
  • The device types of the first terminal 110 and the second terminal 130 are the same or different, and the device type includes at least one of a smartphone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, a laptop computer, and a desktop computer.
  • Only two terminals are shown in FIG. 1, but there are multiple other terminals that can access the server 120 in different embodiments.
  • Optionally, there are also one or more terminals corresponding to the developer; a development and editing platform supporting virtual world applications is installed on such a terminal, and the developer can edit and update the application on the terminal and transmit the updated application installation package to the server 120 via a wired or wireless network.
  • the first terminal 110 and the second terminal 130 can download the application installation package from the server 120 to update the application.
  • the first terminal 110, the second terminal 130, and other terminals are connected to the server 120 through a wireless network or a wired network.
  • the server 120 includes at least one of a server, a server cluster composed of multiple servers, a cloud computing platform, and a virtualization center.
  • the server 120 is used to provide background services for applications supporting the three-dimensional virtual world.
  • Optionally, the server 120 is responsible for the main calculation work and the terminal is responsible for the secondary calculation work; or, the server 120 is responsible for the secondary calculation work and the terminal is responsible for the main calculation work; or, the server 120 and the terminal adopt a distributed computing architecture for collaborative calculation.
  • the server 120 includes a memory 121, a processor 122, a user account database 123, a battle service module 124, and a user-oriented input/output interface (Input/Output Interface, I/O interface) 125.
  • The processor 122 is used to load instructions stored in the server 120 and process data in the user account database 123 and the battle service module 124; the user account database 123 is used to store data of the user accounts used by the first terminal 110, the second terminal 130, and other terminals, such as the avatar of the user account, the nickname of the user account, the combat power index of the user account, and the service area where the user account is located; the battle service module 124 is used to provide battle services, such as 3V3 battles and 5V5 battles; and the user-oriented I/O interface 125 is used to establish communication and exchange data with the first terminal 110 and/or the second terminal 130 through a wireless network or a wired network.
  • FIG. 2 shows a flowchart of a method for controlling a virtual object provided by an exemplary embodiment of the present application.
  • the method is used in the first terminal 110 or the second terminal 130 in the implementation environment shown in FIG. 1 or other terminals in the implementation environment as an example for description.
  • the method includes the following steps:
  • Step 201 Display a game interface, which includes a first virtual object, at least one second virtual object, and a first control.
  • The first virtual object and the second virtual object are located in the virtual world, the first virtual object and the second virtual object belong to different camps, and the first control is used to control the first virtual object to use virtual props to change the target attribute values of other virtual objects.
  • The game interface includes a virtual world screen and a control layer located on the virtual world screen.
  • The virtual world screen includes the first virtual object and at least one second virtual object; the first virtual object is a virtual object belonging to the first camp, the second virtual object is a virtual object belonging to the second camp, and the two camps are in a hostile relationship.
  • The second virtual object includes virtual objects in the second camp controlled by other terminals, and virtual objects in the second camp controlled by the server, such as soldiers controlled by the server and virtual buildings that can be conquered.
  • the second virtual object also includes a virtual object belonging to a third camp, and the third camp is controlled by the server, such as monsters in the virtual world.
  • The virtual world is a virtual world with an arbitrary boundary shape, and the first virtual object is located within the visible range of the game interface.
  • the first virtual object is located at the visual center of the virtual world picture, that is, at the center of the virtual world picture obtained by observing the virtual world from a third-person perspective.
  • the angle of view refers to the angle of observation when the virtual character is observed in the virtual world from the first person perspective or the third person perspective.
  • the angle of view is the angle when the virtual character is observed through the camera model in the virtual world.
  • Optionally, the camera model automatically follows the virtual object in the virtual world; that is, when the position of the virtual object in the virtual world changes, the camera model changes along with the position of the virtual object in the virtual world, and the camera model is always within the preset distance range of the virtual object in the virtual world.
  • the relative position of the camera model and the virtual object does not change.
  • the embodiment of the present application takes the third-person perspective as an example for description.
  • the camera model is located behind the virtual object (such as the head and shoulders of the virtual person).
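  • For illustration only, the following Python sketch shows one possible way to place a third-person camera model behind and above a virtual object at a preset follow distance; the vector type, yaw convention, and the specific distance and height values are assumptions and are not specified by the application.

```python
# Illustrative sketch only: the application does not specify the camera math.
from dataclasses import dataclass
import math

@dataclass
class Vec3:
    x: float
    y: float
    z: float

def follow_camera_position(obj_pos: Vec3, obj_yaw_deg: float,
                           distance: float = 8.0, height: float = 4.0) -> Vec3:
    """Place the camera behind and above the virtual object so that the
    object stays at the visual center, keeping a preset follow distance."""
    yaw = math.radians(obj_yaw_deg)
    # "Behind" is opposite to the object's facing direction on the ground plane.
    return Vec3(obj_pos.x - distance * math.cos(yaw),
                obj_pos.y + height,
                obj_pos.z - distance * math.sin(yaw))
```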
  • The control layer of the game interface includes a first control for controlling the first virtual object to use virtual props to change the target attribute value of other virtual objects.
  • The first control is used for the basic operation of controlling the first virtual object and may also be called a normal attack control.
  • FIG. 3 shows a game interface.
  • the virtual world screen of the game interface contains the first virtual object 301 and the second virtual object 302, as well as virtual environments such as buildings, plants, and roads within the field of view;
  • The control layer of the game interface includes the first control 303, other skill controls 304, and a direction control 305 for controlling the movement and orientation of the first virtual object 301.
  • the user can trigger such controls by clicking, dragging and so on.
  • The control layer also includes a map control 306 for displaying the virtual world, a control 307 for displaying information such as the battle record and the duration of the game, and UI controls for other functions such as game settings, voice calls, and message sending.
  • Step 202 Determine a target virtual object from at least one second virtual object, and mark the target virtual object in a predetermined manner.
  • In a possible implementation, the terminal searches among the second virtual objects in real time, determines a second virtual object that meets a preset condition as the target virtual object, and marks the target virtual object.
  • The preset condition may be at least one of the following: the second virtual object can serve as a normal-attack target, the distance between the second virtual object and the first virtual object meets a preset distance condition (for example, being closest to the first virtual object), an attribute value of the second virtual object meets a preset attribute value condition (for example, the second virtual object has the lowest life value or defense value), and the camp to which the second virtual object belongs is a preset target camp.
  • The preset condition may also be a condition preset by the user, for example, setting a priority to attack a certain type of virtual object; the embodiment of the present application does not limit the manner of selecting the target virtual object.
  • The predetermined manner of marking the target virtual object includes: highlighting the character image of the target virtual object and/or the edge of its target attribute information bar, changing the color of information carried by the target virtual object, adding a special mark near the character image of the target virtual object (for example, directly below it), and displaying a ray pointing from the first virtual object to the target virtual object.
  • FIG. 4 shows a way of marking the target virtual object 401.
  • the terminal After determining the target virtual object 401 from the second virtual object, the terminal adds a highlight effect to the outer edge of the target attribute information bar 402 above the target virtual object 401, and displays a positioning mark 403 directly below the target virtual object 401.
  • a first virtual object 501, a second virtual object 502a, and a second virtual object 502b are displayed on the game interface.
  • The terminal determines the second virtual object 502a as the target virtual object from the two second virtual objects, and marks the second virtual object 502a in the aforementioned predetermined manner.
  • Step 203 Receive a first trigger signal acting on the first control.
  • the first trigger signal is generated when the user performs a trigger operation on the first control.
  • the user can trigger the first trigger signal by clicking, dragging, or the like on the first control.
  • Step 204 In response to the first trigger signal meeting the automatic control condition, control the first virtual object to use the virtual prop to change the target attribute value of the target virtual object.
  • In a possible implementation, in order to implement different control operations on the first virtual object, the terminal is preset with control instructions for different first trigger signals; different operations performed by the user on the first control generate different corresponding first trigger signals, so that the first virtual object is controlled to perform the corresponding operation according to the corresponding control instruction.
  • Automatic control conditions are preset in the terminal.
  • When the first trigger signal generated by the user's touch operation on the first control meets the automatic control condition, the control instruction corresponding to the first trigger signal is to release the skill on the pre-marked target virtual object, thereby controlling the first virtual object to use the virtual prop to change the target attribute value of the target virtual object.
  • The terminal controls the first virtual object to change the target attribute value of the target virtual object based on the first trigger signal, where the target attribute value includes at least one of the remaining health value (also called the remaining life value), the remaining energy value, the remaining mana value, and other attribute values.
  • For example, the first virtual object using the virtual prop to change the target attribute value of the target virtual object can be expressed as the first virtual object using the virtual prop to attack the target virtual object, thereby reducing the remaining life value of the target virtual object.
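  • As a purely illustrative sketch of this step, the following Python code reduces a target's remaining life value once the automatic control condition is met; the VirtualObject fields and the damage formula are hypothetical and are not taken from the application.

```python
# Minimal sketch of step 204 under assumed field names and damage rules.
from dataclasses import dataclass

@dataclass
class VirtualObject:
    name: str
    remaining_hp: float
    defense: float = 0.0

def apply_normal_attack(attacker_power: float, target: VirtualObject) -> None:
    """Use the default virtual prop to lower the target's remaining life value."""
    damage = max(attacker_power - target.defense, 0.0)
    target.remaining_hp = max(target.remaining_hp - damage, 0.0)

target = VirtualObject("second_virtual_object", remaining_hp=100.0, defense=10.0)
apply_normal_attack(attacker_power=35.0, target=target)  # remaining_hp -> 75.0
```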
  • In summary, in the method provided by this embodiment, the target virtual object in the game interface is marked in a predetermined manner, and when a first trigger signal acting on the first control that meets the automatic control condition is received, the first virtual object is controlled to change the target attribute value of the target virtual object. Because the target virtual object is marked before the first virtual object is controlled to change its target attribute value, the user can know the target virtual object affected by the operation through the mark while not performing any operation. If the target virtual object does not match the expected object, other operations can be used to change the target virtual object; if the target virtual object matches the expected object, the operation can be quickly executed through the first control, which improves the control efficiency and control accuracy of the virtual object.
  • FIG. 6 shows a flowchart of a method for controlling a virtual object provided by another exemplary embodiment of the present application.
  • the method is used in the first terminal 110 or the second terminal 130 in the implementation environment shown in FIG. 1 or other terminals in the implementation environment as an example for description.
  • the method includes the following steps:
  • Step 601 Display a game interface.
  • the game interface includes a first virtual object, at least one second virtual object, and a first control.
  • step 601 For the implementation manner of step 601, reference may be made to step 201 above, and details are not described herein again in the embodiment of the present application.
  • Step 602 in response to the at least one second virtual object not containing the actively selected virtual object, obtain first object information of the first virtual object and second object information of the at least one second virtual object.
  • The first control has two functions: automatically controlling the first virtual object to attack, and actively selecting an attack target to control the first virtual object to attack; the attack priority of the actively selected attack target is higher than the attack priority of the target of the automatic control operation.
  • The actively selected virtual object is the virtual object selected by the user by triggering the first control.
  • the object information is used to characterize the state and position of the virtual object.
  • the position in the object information is used to indicate the position of the virtual object in the virtual world
  • the state in the object information is used to indicate the current attribute values of the virtual object.
  • The first object information includes at least one of the coordinates of the first virtual object in the virtual world and state information such as its defense value, remaining life value, and remaining energy value.
  • The second object information includes at least one of the coordinates of the second virtual object in the virtual world and state information such as its defense value, remaining life value, and remaining energy value.
  • When the at least one second virtual object does not contain an actively selected virtual object, the terminal needs to perform an automatic search according to the first object information and the second object information, so as to determine the target virtual object from the at least one second virtual object.
  • The determined target virtual object is the release target of the preset attack skill (the attack skill that can be released by triggering the first control); correspondingly, it is necessary to ensure that the preset attack skill can act on the target virtual object, or can cause the expected damage to the target virtual object. Whether this skill release effect can be achieved is related to the position information, remaining life value, defense value, and remaining energy value of the first virtual object and the second virtual object. Therefore, in a possible implementation manner, when the terminal needs to select a target virtual object from a plurality of second virtual objects, it first obtains the object information of the first virtual object and of each second virtual object, so as to select the most suitable target virtual object from the plurality of second virtual objects based on the object information.
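  • For readability, the following Python sketch shows one possible data layout for the first object information and second object information described above; the field names and the 2-D coordinate representation are assumptions, not definitions from the application.

```python
# Hedged sketch of the object information used by the automatic search.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class FirstObjectInfo:
    position: Tuple[float, float]   # first position in the virtual world
    use_range: float                # first range: use range of the virtual prop

@dataclass
class SecondObjectInfo:
    position: Tuple[float, float]   # second position in the virtual world
    remaining_hp: float
    defense: float
    remaining_energy: float
    camp: str                       # e.g. "second_camp" or "third_camp"
```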
  • Step 603 Determine a target virtual object from at least one second virtual object according to the first object information and the second object information.
  • If the second object information of a second virtual object meets the preset condition, the terminal determines that second virtual object as the target virtual object; if there is no second virtual object whose second object information meets the preset condition, there is no target virtual object.
  • Optionally, the number of target virtual objects is one.
  • This application only takes one target virtual object as an example for description and does not limit the number of target virtual objects; if two or more virtual objects meet the preset conditions, the number of corresponding target virtual objects can be two or more.
  • the number of selected target virtual objects can also be determined by the attack type corresponding to the preset attack skill.
  • For example, if the preset attack skill can only act on a single virtual object, only a single target virtual object can be selected.
  • If the preset attack skill can act on two or more virtual objects, the terminal can select two or more target virtual objects that meet the preset conditions.
  • In a possible implementation, the first object information includes a first position and a first range, and the second object information includes a second position, where the first position is the position of the first virtual object in the virtual world, the second position is the position of the second virtual object in the virtual world, and the first range is the use range of the virtual prop.
  • step 603 includes the following steps one to three:
  • Step 1 Determine a second range according to the first range, and the second range is greater than the first range.
  • the second range is the range in which the terminal searches for the target virtual object.
  • In order to include second virtual objects near the first virtual object in the search range, the second range is set to be larger than the first range.
  • the first range and the second range are both circular ranges, or both are fan-shaped regions in which the first virtual object faces a predetermined direction and a predetermined angle.
  • the terminal sets the radius of the second range to the radius of the first range + k.
  • For example, when the first range is a circular area, the second range is set to a circular area with the same center and a radius 2 meters larger than that of the first range.
  • For instance, if the first range is the area in the virtual world covered by a circle centered at the first position with a radius of 5 meters, the second range is determined to be the area covered by a circle centered at the first position with a radius of 7 meters; or, if the first range is a fan-shaped area in the virtual world centered at the first position, with a radius of 5 meters, an angle of 45°, and facing the front of the first virtual object, the second range is determined to be a circular area in the virtual world centered at the first position with a radius of 7 meters.
  • The target virtual object is the release target of the preset attack skill, that is, the target virtual object needs to be within the release range of that skill. Therefore, in a possible implementation manner, the second range may be determined based on the skill release range of the preset attack skill, and the second range may be less than or equal to the skill release range.
  • Different attack skills can correspond to different skill release ranges; correspondingly, different attack skills can be set with different second ranges.
  • Step 2 Determine the second virtual object located in the second range as the candidate virtual object according to the first position and the second position.
  • the terminal determines the second virtual object located in the second range as the candidate virtual object, and then determines the target virtual object from the candidate virtual objects according to other conditions.
  • As shown in FIG. 7, the virtual objects in the current virtual world include a first virtual object 701, a second virtual object 702a, and a second virtual object 702b; the terminal determines the second range 704 according to the first range 703, detects that the second virtual object 702a is located in the second range 704, and therefore determines the second virtual object 702a as a candidate virtual object.
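  • The following Python sketch illustrates steps one and two under the assumptions of circular ranges, 2-D coordinates, and the 2-meter enlargement used in the example above; it is a sketch, not the application's implementation.

```python
# Step 1: enlarge the first range into a second range (radius + k, k = 2 here).
# Step 2: keep only second virtual objects whose positions fall inside it.
import math
from typing import List, Tuple

Point = Tuple[float, float]

def second_range_radius(first_range_radius: float, k: float = 2.0) -> float:
    """The search range is the use range enlarged by k meters."""
    return first_range_radius + k

def candidates_in_second_range(first_pos: Point, first_range_radius: float,
                               second_positions: List[Point]) -> List[int]:
    """Return indices of second virtual objects inside the second range."""
    radius = second_range_radius(first_range_radius)
    return [i for i, pos in enumerate(second_positions)
            if math.dist(first_pos, pos) <= radius]

# Example matching the text: a 5 m use range gives a 7 m search range.
print(candidates_in_second_range((0.0, 0.0), 5.0, [(3.0, 4.0), (6.0, 6.0)]))  # [0]
```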
  • Step 3 Determine the candidate virtual object that meets the selection condition as the target virtual object.
  • the selection condition includes at least one of the following: the distance to the first virtual object is the smallest, the target attribute value is the lowest, and it belongs to the target camp.
  • The terminal determines a target virtual object that meets the selection condition from the candidate virtual objects; the target virtual object is the priority attack target at the current moment automatically determined by the terminal.
  • If no candidate virtual object meets the selection condition, the terminal determines that the target virtual object does not exist at the current moment.
  • Optionally, the target camp includes the second camp and the third camp, where the second camp is the camp that has a hostile relationship with the first camp to which the first virtual object belongs, and the third camp is a camp of virtual objects controlled by the server, such as monsters in the virtual world.
  • When the candidate virtual objects that meet the selection conditions include both candidate virtual objects belonging to the second camp and candidate virtual objects belonging to the third camp, the terminal preferentially determines the target virtual object from the candidate virtual objects belonging to the second camp, and if no target virtual object exists among the candidate virtual objects belonging to the second camp, determines the target virtual object from the candidate virtual objects belonging to the third camp.
  • The killing efficiency may be related to the remaining life value and remaining defense value of the target virtual object. For example, if the life value of candidate virtual object A is higher than that of candidate virtual object B and the same skill operation is released on both candidates, the probability of killing candidate virtual object B is obviously greater than the probability of killing candidate virtual object A. Therefore, in order to improve the killing efficiency of the first virtual object, candidate virtual object B may be determined as the target virtual object, that is, the selected target virtual object has the lowest target attribute value.
  • The target attribute value may include the remaining life value, the remaining energy value, the defense value, and other attribute values.
  • The hit rate of the attack skill on the target virtual object may also be related to the distance between the target virtual object and the first virtual object: the farther the distance, the lower the hit rate. Therefore, in order to further improve the hit rate of the preset attack skill, having the minimum distance from the first virtual object is taken as one of the preset conditions for selecting the target virtual object.
  • the selection conditions may further include: whether the preset attack skill can act on the target virtual object, whether the target attack acts on the target virtual object with a probability of being invalidated, and so on.
  • With reference to FIG. 7, if the second virtual object 702a satisfies the selection condition, the second virtual object 702a is determined as the target virtual object; if the second virtual object 702a does not meet the selection condition, it is determined that there is currently no target virtual object.
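  • The following Python sketch illustrates one possible ordering of the selection conditions in step three: preferring the second camp over the third camp, then the lowest remaining life value, then the smallest distance to the first virtual object. The ordering, field names, and camp labels are assumptions made for illustration.

```python
# Hedged sketch of step three: rank candidates and pick the priority target.
import math
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Candidate:
    position: Tuple[float, float]
    remaining_hp: float
    camp: str  # "second_camp" (enemy players) or "third_camp" (server-controlled)

def pick_target(first_pos: Tuple[float, float],
                candidates: List[Candidate]) -> Optional[Candidate]:
    if not candidates:
        return None  # no target virtual object at the current moment
    camp_rank = {"second_camp": 0, "third_camp": 1}
    return min(candidates,
               key=lambda c: (camp_rank.get(c.camp, 2),   # second camp first
                              c.remaining_hp,             # lowest life value
                              math.dist(first_pos, c.position)))  # nearest
```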
  • Step 604 Mark the target virtual object in a first predetermined manner.
  • The first predetermined manner includes highlighting the character image of the target virtual object and/or the edge of its target attribute information bar, changing the color of the information carried by the target virtual object, adding a special mark near the character image of the target virtual object (for example, directly below it), displaying a ray pointing from the first virtual object to the target virtual object, and so on.
  • Step 605 In response to the at least one second virtual object containing the actively selected virtual object, the actively selected virtual object is determined as the target virtual object.
  • The actively selected virtual object is the target of the next attack selected by the user through the first control; it may be the same as or different from the target virtual object determined by the terminal's automatic search, and its attack priority is higher than the attack priority of the target virtual object determined by the terminal's automatic search. Therefore, when the second virtual objects contain an actively selected virtual object, the terminal directly determines the actively selected virtual object as the target virtual object, without performing the process of determining candidate virtual objects.
  • For example, when the second virtual object 702b is the actively selected virtual object, the terminal directly determines the second virtual object 702b as the target virtual object.
  • Step 606 Mark the target virtual object in a second predetermined manner.
  • the second predetermined manner is the same as the first predetermined manner, or the second predetermined manner is different from the first predetermined manner, and the significance of the marking effect in the second predetermined manner is higher than the significance of the marking effect in the first predetermined manner.
  • In a possible implementation, the marking manner of the second predetermined manner is different from that of the first predetermined manner.
  • For example, the second predetermined manner uses a marking position different from that of the first predetermined manner, or the marking position is the same but a different color is used; for instance, the first predetermined manner highlights the edge of the target attribute information bar of the target virtual object, while the second predetermined manner adds a positioning mark directly below the character image of the target virtual object.
  • Step 607 Receive a first trigger signal acting on the first control.
  • step 607 For the implementation manner of step 607, reference may be made to step 203 above, and details are not described herein in the embodiment of the present application.
  • Step 608 In response to the touch end position corresponding to the first trigger signal being located in the first automatic control area, it is determined that the first trigger signal meets the automatic control condition.
  • In a possible implementation, the first control includes a first automatic control area and a first active control area, and there is no intersection between the first automatic control area and the first active control area; a trigger operation in the first automatic control area is used to trigger a rapid attack on the target virtual object, and a trigger operation in the first active control area is used to trigger the user to independently select the target virtual object.
  • Optionally, the first control is a circular control, the first automatic control area is a circular area, and the first active control area is an annular area around the first automatic control area; or the first automatic control area is a semicircular area on the left side of the first control and the first active control area is a semicircular area on the right side of the first control, which is not limited in the embodiment of the present application.
  • FIG. 8 shows a schematic diagram of a first control.
  • The first control is a circular control, where the first automatic control area 801 is a circular area centered on the center of the first control with a radius smaller than the radius of the first control, and the first active control area 802 is the remaining annular area of the first control outside the first automatic control area 801.
  • When the touch end position is located in the first automatic control area 801, the terminal determines that the first trigger signal meets the automatic control condition; that is, the user can control the first virtual object to change the target attribute value of the target virtual object by quickly tapping the first control.
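  • The following Python sketch illustrates the area test of step 608, assuming the circular layout of FIG. 8 (an inner disc as the first automatic control area surrounded by an annular first active control area); the radii and coordinates are illustrative only.

```python
# Hedged sketch: classify where the first trigger signal's touch ends.
import math
from typing import Tuple

def classify_touch(touch_end: Tuple[float, float],
                   control_center: Tuple[float, float],
                   inner_radius: float, outer_radius: float) -> str:
    """Decide which control condition the first trigger signal meets."""
    d = math.dist(touch_end, control_center)
    if d <= inner_radius:
        return "automatic"        # quick tap: attack the pre-marked target
    if d <= outer_radius:
        return "active"           # drag into the ring: user selects the target
    return "none"                 # released outside the first control
```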
  • Step 609 In response to the first trigger signal meeting the automatic control condition, control the first virtual object to use the virtual prop to change the target attribute value of the target virtual object.
  • the terminal determines whether the first trigger signal meets the automatic control condition according to the touch end position. When the touch end position is in the first automatic control area, it determines that the automatic control condition is satisfied. When it is located in the first active control area or an area other than the first control, it is determined that the automatic control condition is not satisfied.
  • As shown in FIG. 9, the first control includes a first automatic control area 901 and a first active control area 902; the user taps the first control and the touch end position 903 is located in the first automatic control area 901, that is, the finger is lifted at the touch end position 903 shown in the figure, so the terminal determines that the first trigger signal meets the automatic control condition.
  • Step 610 In response to the touch end position corresponding to the first trigger signal being located in the first active control area, it is determined that the first trigger signal meets the active control condition.
  • In a possible implementation, the user can actively select the target virtual object through a touch operation on the first control.
  • The terminal obtains the touch end position of the first trigger signal, and when it determines that the touch end position is located in the first active control area, it determines that the first trigger signal meets the active control condition, so as to determine the target virtual object that the user intends to select based on the touch end position.
  • For example, the user can press and hold the first control with a finger while dragging, and, according to the positions of the first virtual object and the expected attack object, stop the finger at the corresponding position in the first active control area to complete the process of actively selecting the virtual object.
  • Referring to FIG. 10, the first control includes a first automatic control area 1001 and a first active control area 1002. When the user taps the first control and drags the finger so that the touch end position 1003 is located in the first active control area 1002, the terminal determines that the first trigger signal meets the active control condition.
  • Step 611 Determine the second virtual object mapped to the touch end position as the actively selected virtual object.
  • In a possible implementation, the terminal determines, in real time, the mapped position in the virtual world according to the position of the user's finger in the first active control area, and determines the actively selected virtual object according to the second virtual object to which the touch end position is mapped.
  • To help the user quickly identify the actively selected virtual object, the terminal may mark, in the game interface, the position in the virtual world to which the touch end position is mapped, for example with a ray or a fan-shaped area. When the user's touch operation aims at a certain second virtual object in the virtual world, or a second virtual object exists near the position to which the touch operation is mapped, that second virtual object is determined as the actively selected virtual object, and step 606 is performed to mark the actively selected virtual object.
  • Optionally, the range within which the user can actively select an object is the range of the virtual world shown in the game interface.
  • As shown in FIG. 10, the terminal maps the center point 1004 of the first control to the position of the first virtual object 1005, determines the mapped position of the touch end position 1003 in the game interface, and displays, in the game interface, the line between the two mapped positions.
  • the touch end position 1003 is mapped to the position of the second virtual object 1006, the second virtual object 1006 is determined to be an actively selected virtual object, and the second virtual object 1006 is marked.
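  • One possible way to realize this mapping, offered only as a hedged sketch, is to treat the control center as the first virtual object's position, scale the drag offset into world units, and then pick the second virtual object nearest the mapped point within a small tolerance. The function names (map_touch_to_world, pick_actively_selected) and the scale and tolerance values are assumptions for illustration.

```python
import math
from typing import Optional

def map_touch_to_world(control_center, touch_end, player_pos, scale):
    """Map the drag offset on the first control to a position in the virtual world.

    The control center is treated as the first virtual object's position; the
    offset of the touch end position is scaled into world units (assumption).
    """
    dx = (touch_end[0] - control_center[0]) * scale
    dy = (touch_end[1] - control_center[1]) * scale
    return (player_pos[0] + dx, player_pos[1] + dy)

def pick_actively_selected(world_point, enemies, tolerance) -> Optional[dict]:
    """Return the second virtual object closest to the mapped point, provided it lies
    within the tolerance; that object becomes the actively selected virtual object."""
    best, best_dist = None, tolerance
    for enemy in enemies:
        dist = math.dist(world_point, enemy["position"])
        if dist <= best_dist:
            best, best_dist = enemy, dist
    return best

# Example usage with made-up coordinates.
enemies = [{"id": 1006, "position": (12.0, 3.0)}, {"id": 1007, "position": (30.0, 30.0)}]
world_point = map_touch_to_world((0, 0), (60, 15), player_pos=(0.0, 0.0), scale=0.2)
print(pick_actively_selected(world_point, enemies, tolerance=2.5))  # -> enemy 1006
```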
  • Step 612 Control the first virtual object to use the virtual prop to change the target attribute value of the actively selected virtual object.
  • In a possible implementation, when the user lifts the finger at the touch end position, the terminal controls the first virtual object to use the virtual prop to change the target attribute value of the actively selected virtual object.
  • Optionally, after the terminal controls the first virtual object to complete the operation of changing the target attribute value of the actively selected virtual object, the terminal still keeps the mark on the actively selected virtual object and determines the actively selected virtual object as the target virtual object; when a subsequent first trigger signal that meets the automatic control condition is received, the first virtual object is controlled to change the target attribute value of this actively selected virtual object.
  • Schematically, if the actively selected virtual object is outside the range in which the first virtual object can use the virtual prop, the terminal controls the first virtual object to move toward the actively selected object and, when the actively selected virtual object is within the use range of the virtual prop, controls the first virtual object to use the virtual prop to change the target attribute value of the actively selected virtual object.
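  • The move-into-range-then-attack behavior described above can be sketched as a per-tick update, as in the assumed example below; the movement speed, prop range and attack value are illustrative only and do not reflect any particular game balance.

```python
import math

def step_attack(player, target, prop_range, move_speed, dt):
    """One game tick: approach the actively selected object until it is within the
    use range of the virtual prop, then change its target attribute value (HP)."""
    dx = target["position"][0] - player["position"][0]
    dy = target["position"][1] - player["position"][1]
    distance = math.hypot(dx, dy)

    if distance > prop_range:
        # Still out of range: move toward the actively selected object.
        step = min(move_speed * dt, distance)
        player["position"] = (player["position"][0] + dx / distance * step,
                              player["position"][1] + dy / distance * step)
        return "moving"

    # Within range: use the virtual prop to reduce the target attribute value.
    target["hp"] -= player["attack"]
    return "attacked"

player = {"position": (0.0, 0.0), "attack": 50}
target = {"position": (8.0, 0.0), "hp": 300}
while step_attack(player, target, prop_range=3.0, move_speed=5.0, dt=0.5) == "moving":
    pass
print(target["hp"])  # -> 250 after the first in-range attack
```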
  • In the embodiments of this application, the first control is divided into areas so that the user can actively select a virtual object: when an actively selected virtual object exists, it is directly determined as the target virtual object and marked; when none exists, the second virtual object that meets the preset conditions is determined as the target virtual object and marked.
  • The user can control the first virtual object to change the target attribute value of the target virtual object through a quick operation, and because the target virtual object is marked before the touch operation, the user knows in advance which object the attribute-changing operation will act on.
  • If the target virtual object differs from the virtual object the user intends, the user can select another virtual object by controlling the first virtual object to move or aim, which avoids the situation in which the operation fails to achieve the expected effect because the target virtual object differs from the expected one and the operation has to be readjusted, thereby improving operation efficiency.
  • In a possible implementation, in addition to the first control, the control layer of the game interface further includes other controls for controlling the first virtual object to release a target skill on the target virtual object. On the basis of FIG. 2, referring to FIG. 11, after step 202 above, the virtual object control method further includes the following steps:
  • Step 205 in response to receiving the second trigger signal acting on the second control, control the first virtual object to release the target skill on the target virtual object.
  • the game interface further includes a second control, and the second control is used to control the first virtual object to release the target skill to other virtual objects.
  • the game interface includes at least one second control 304 for controlling the first virtual object to release target skills to other virtual objects.
  • the second control includes a second automatic control area and a second active control area. There is no intersection between the second automatic control area and the second active control area.
  • Step 205 includes the following steps one and two:
  • Step 1 In response to the touch end position corresponding to the second trigger signal being located in the second automatic control area, the skill release rule of the target skill is acquired.
  • the terminal first obtains the skill release rule of the target skill.
  • The skill release rule includes the type of the skill release target, the skill release range, the operations required by the skill, and so on.
  • the second control 1200 includes a second automatic control area 1201 and a second active control area 1202.
  • When the touch end position 1203 corresponding to the second trigger signal is located in the second automatic control area 1201, the terminal acquires the skill release rule of skill 3.
  • Step 2 In response to the target virtual object conforming to the skill release rule, the first virtual object is controlled to release the target skill to the target virtual object.
  • the target virtual object is a target virtual object automatically searched and determined by the terminal, or an actively selected object selected by the user by triggering the first control.
  • Optionally, to improve skill release efficiency, the terminal may select the target virtual object based on the skill release rule.
  • As shown in FIG. 12, when the target virtual object 1205 conforms to the skill release rule of skill 3, the terminal controls the first virtual object 1204 to release skill 3 on the target virtual object 1205. If the target virtual object 1205 is outside the release range of skill 3, the terminal controls the first virtual object 1204 to move toward the target virtual object 1205 and, when the target virtual object 1205 is within the release range of skill 3, controls the first virtual object 1204 to release skill 3.
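  • A skill release rule of this kind can be modeled, purely as an illustrative assumption, as a small record holding the allowed target camps and the release range, with a check performed before the skill is cast:

```python
import math
from dataclasses import dataclass

@dataclass
class SkillReleaseRule:
    """Assumed shape of a skill release rule: allowed target camps and release range."""
    skill_name: str
    target_camps: tuple   # camps the skill may act on, e.g. ("enemy", "monster")
    release_range: float  # maximum distance at which the skill can be released

def can_release(rule: SkillReleaseRule, caster_pos, target) -> bool:
    """Check whether the target virtual object conforms to the skill release rule."""
    if target["camp"] not in rule.target_camps:
        return False
    return math.dist(caster_pos, target["position"]) <= rule.release_range

rule = SkillReleaseRule("skill_3", target_camps=("enemy",), release_range=6.0)
target = {"camp": "enemy", "position": (4.0, 3.0)}
print(can_release(rule, (0.0, 0.0), target))  # -> True: distance 5.0 is within range
```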
  • Step 206 in response to the target attribute value of the target virtual object reaching the attribute value threshold, re-determine the target virtual object from the at least one second virtual object.
  • The target attribute value of the target virtual object reaching the attribute value threshold includes at least one of the following situations: the remaining life value of the target virtual object reaches the life value threshold (for example, when the life value threshold is 0 and the remaining life value of the target virtual object is 0, the condition that the target attribute value reaches the attribute value threshold is satisfied); or the position of the target virtual object is outside the range displayed in the game interface.
  • Taking the target attribute value being the remaining life value as an example, if the remaining life value of the target virtual object is 0, it means that the target virtual object has been eliminated and can no longer serve as the target virtual object; the terminal needs to re-obtain the object information corresponding to the first virtual object and the remaining second virtual objects, so as to determine a new target virtual object from the second virtual objects based on this object information.
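  • The re-selection triggered by the attribute value threshold can be expressed as a guard around the target search. In the hedged sketch below, ensure_target stands in for the automatic search of the earlier steps and simply picks the nearest surviving candidate; this stand-in is an assumption of the example, not the search procedure of the embodiments.

```python
def needs_retarget(target, hp_threshold, view_bounds) -> bool:
    """True when the target attribute value reaches the threshold: HP at or below the
    life value threshold, or the target positioned outside the range shown on screen."""
    x, y = target["position"]
    (min_x, min_y), (max_x, max_y) = view_bounds
    off_screen = not (min_x <= x <= max_x and min_y <= y <= max_y)
    return target["hp"] <= hp_threshold or off_screen

def ensure_target(current, candidates, hp_threshold, view_bounds):
    """Keep the current target while it is still valid; otherwise pick the candidate
    nearest the origin as a stand-in for the automatic search (assumption)."""
    if current is not None and not needs_retarget(current, hp_threshold, view_bounds):
        return current
    alive = [c for c in candidates if c["hp"] > hp_threshold]
    return min(alive,
               key=lambda c: c["position"][0] ** 2 + c["position"][1] ** 2,
               default=None)

view = ((-10.0, -10.0), (10.0, 10.0))
eliminated = {"hp": 0, "position": (2.0, 2.0)}
candidates = [{"hp": 120, "position": (5.0, 1.0)}, {"hp": 80, "position": (1.0, 1.0)}]
print(ensure_target(eliminated, candidates, hp_threshold=0, view_bounds=view))
# -> the enemy at (1.0, 1.0) becomes the new target virtual object
```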
  • In the embodiments of this application, by dividing the second control into a second automatic control area and a second active control area, the user can quickly control the first virtual object to release the target skill on the target virtual object, which simplifies the operation steps of some skills and saves the user's operating time.
  • the MOBA game includes a presentation layer and a logic layer.
  • FIG. 13 shows a flowchart of a virtual object control method provided by another exemplary embodiment of the present application.
  • the method is used in the first terminal 110 or the second terminal 130 in the implementation environment shown in FIG. 1 or other terminals in the implementation environment as an example for description. The method includes the following steps:
  • Step 1301 The presentation layer obtains the end position of the touch operation.
  • When the user performs a touch operation on the first control or the second control, the presentation layer obtains the touch operation in real time, and obtains the end position of the touch operation when it detects that the user lifts the finger.
  • Step 1302 The presentation layer judges whether the target virtual object meets the skill release rule.
  • the presentation layer acquires a target skill corresponding to the touch operation, and the target skill includes a basic skill corresponding to the first control and a special skill corresponding to the second control. According to the type of the target skill and the skill release rule, it is judged whether the target virtual object meets the skill release rule, and when the target virtual object meets the skill release rule, step 1303 is continued.
  • Step 1303 When the target virtual object meets the skill release rule, the presentation layer sends skill release information to the logic layer.
  • After the presentation layer judges that the target virtual object satisfies the skill release rule, the logic layer still needs to make a further judgment, so as to avoid an incorrect judgment by the presentation layer caused by picture delay or cheating behavior of the user.
  • Step 1304 The logic layer judges whether the target virtual object meets the skill release rule.
  • Step 1305 The logic layer sends the judgment result to the presentation layer.
  • the logic layer sends the judgment result to the presentation layer. If the result indicates that the target virtual object does not meet the skill release rule, the subsequent steps are not executed.
  • the display result in the game interface is that there is no response after the user triggers the control, and the target skill is not released.
  • Step 1306 When the judgment result of the logic layer is that the skill release is allowed, the presentation layer sends a skill release request to the server.
  • Step 1307 The server forwards the skill release request.
  • the server receives the skill release request sent by the terminal presentation layer, obtains the target terminal according to the skill release information in the skill release request, and forwards the skill release request to the logic layer of the target terminal.
  • The target terminals are all the terminals participating in the current game.
  • Step 1308 The logic layer performs skill release calculation processing.
  • When the logic layer receives the skill release request forwarded by the server, it determines to perform the skill release operation and performs skill release calculation processing to obtain the skill release result, such as the target attribute value of the target virtual object after the skill is released.
  • Step 1309 the logic layer sends a skill release instruction.
  • Step 1310 The presentation layer performs the skill release presentation.
  • the presentation layer renders the skill release effect in the game interface according to the skill release instruction of the logic layer.
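  • The division of work in steps 1301 to 1310 (the presentation layer pre-checks and renders, the logic layer validates and computes, and the server forwards the request to every terminal in the match) can be sketched with a few cooperating objects. Everything below is a simplified, assumed structure for illustration and omits networking, timing and anti-cheat details.

```python
class LogicLayer:
    """Validates the release again on trusted data and performs the release calculation."""
    def validate(self, request) -> bool:
        # Guard against picture delay or cheating: re-check HP and release range.
        return request["target_hp"] > 0 and request["distance"] <= request["release_range"]

    def apply(self, request) -> dict:
        # Skill release calculation: target attribute value after the skill lands.
        return {"target_hp_after": max(0, request["target_hp"] - request["damage"])}

class Server:
    """Forwards a skill release request to every terminal taking part in the match."""
    def __init__(self, terminals):
        self.terminals = terminals

    def forward(self, request):
        return [t.logic.apply(request) for t in self.terminals]

class Terminal:
    def __init__(self):
        self.logic = LogicLayer()

    def presentation_request(self, server, request):
        # Presentation layer: pre-check locally, then go to the server only if allowed.
        if not self.logic.validate(request):
            return None                # no response in the game interface, skill not released
        results = server.forward(request)
        return results[0]              # render the skill release effect from the result

terminal_a, terminal_b = Terminal(), Terminal()
server = Server([terminal_a, terminal_b])
request = {"target_hp": 300, "damage": 120, "distance": 4.0, "release_range": 6.0}
print(terminal_a.presentation_request(server, request))  # -> {'target_hp_after': 180}
```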
  • FIG. 14 is a structural block diagram of a virtual object control device provided by an exemplary embodiment of the present application.
  • The device may be provided in the first terminal 110 or the second terminal 130 in the implementation environment shown in FIG. 1, or in another terminal in the implementation environment. The device includes:
  • the display module 1401 is configured to display a game interface, the game interface includes a first virtual object, at least one second virtual object, and a first control, the first virtual object and the second virtual object are located in the virtual world , And the first virtual object and the second virtual object belong to different camps, and the first control is used to control the first virtual object to use virtual props to change target attribute values of other virtual objects;
  • the first determining module 1402 is configured to determine a target virtual object from at least one of the second virtual objects, and mark the target virtual object in a predetermined manner;
  • the receiving module 1403 is configured to receive a first trigger signal acting on the first control
  • the first control module 1404 is configured to control the first virtual object to use virtual props to change the target attribute value of the target virtual object in response to the first trigger signal meeting the automatic control condition.
  • the first determining module 1402 includes:
  • the first obtaining unit is configured to obtain, in response to the at least one second virtual object not including an actively selected virtual object, first object information of the first virtual object and second object information of the at least one second virtual object, where the actively selected virtual object is a virtual object selected through the first control, and the object information is used to characterize the state and position of a virtual object;
  • a first determining unit configured to determine the target virtual object from at least one of the second virtual objects according to the first object information and the second object information;
  • the first marking unit is used to mark the target virtual object in a first predetermined manner.
  • Optionally, the first object information includes a first position and a first range, and the second object information includes a second position; the first position is the position of the first virtual object in the virtual world, the second position is the position of the second virtual object in the virtual world, and the first range is the use range of the virtual prop;
  • the first determining unit is further configured to:
  • the candidate virtual object that meets a selection condition is determined as the target virtual object, and the selection condition includes at least one of the following: the distance to the first virtual object is the smallest, the target attribute value is the lowest, and it belongs to the target camp.
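  • The selection conditions listed above (smallest distance to the first virtual object, lowest target attribute value, membership in the target camp) can be combined into a single ranking over the candidates inside the search range. The sketch below is one hedged way to express that ranking; the ordering and weighting of the conditions are assumptions of this example.

```python
import math

def choose_target(player_pos, search_radius, candidates, target_camps=("enemy", "monster")):
    """Pick a target virtual object from the second virtual objects.

    Candidates must lie inside the search range (the 'second range'); among them,
    objects of a hostile camp are preferred, then the lowest remaining HP, then the
    smallest distance to the first virtual object.
    """
    in_range = [
        c for c in candidates
        if c["camp"] in target_camps
        and math.dist(player_pos, c["position"]) <= search_radius
    ]
    if not in_range:
        return None
    return min(
        in_range,
        key=lambda c: (
            target_camps.index(c["camp"]),          # enemy camp before neutral monsters
            c["hp"],                                # lowest target attribute value first
            math.dist(player_pos, c["position"]),   # then smallest distance
        ),
    )

candidates = [
    {"id": "a", "camp": "enemy",   "hp": 250, "position": (3.0, 1.0)},
    {"id": "b", "camp": "enemy",   "hp": 90,  "position": (5.0, 2.0)},
    {"id": "c", "camp": "monster", "hp": 40,  "position": (2.0, 0.0)},
]
print(choose_target((0.0, 0.0), search_radius=7.0, candidates=candidates)["id"])  # -> "b"
```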
  • the first determining module 1402 includes:
  • a second determining unit configured to determine the actively selected virtual object as the target virtual object in response to at least one of the second virtual objects including the actively selected virtual object
  • the second marking unit is used to mark the target virtual object in a second predetermined manner.
  • the first control includes a first automatic control area and a first active control area, and there is no intersection between the first automatic control area and the first active control area;
  • the device also includes:
  • the second determining module is configured to determine that the first trigger signal meets the automatic control condition in response to the touch end position corresponding to the first trigger signal being located in the first automatic control area.
  • the device further includes:
  • a third determining module configured to determine that the first trigger signal meets the active control condition in response to the touch end position corresponding to the first trigger signal being located in the first active control area;
  • a fourth determining module configured to determine the second virtual object mapped to the touch end position as an actively selected virtual object
  • the second control module is configured to control the first virtual object to use virtual props to change the target attribute value of the actively selected virtual object.
  • the game interface further includes a second control, and the second control is used to control the first virtual object to release target skills to other virtual objects;
  • the device also includes:
  • the third control module is configured to control the first virtual object to release the target skill on the target virtual object in response to receiving the second trigger signal acting on the second control.
  • the second control includes a second automatic control area and a second active control area, and there is no intersection between the second automatic control area and the second active control area;
  • the third control module includes:
  • the second acquiring unit is configured to acquire the skill release rule of the target skill in response to the touch end position corresponding to the second trigger signal being located in the second automatic control area;
  • the control unit is configured to control the first virtual object to release the target skill on the target virtual object in response to the target virtual object conforming to the skill release rule.
  • the device further includes:
  • the fifth determining module is configured to re-determine the target virtual object from at least one of the second virtual objects in response to the target attribute value of the target virtual object reaching the attribute value threshold.
  • Optionally, the target attribute value of the target virtual object reaching the attribute value threshold includes: the remaining life value of the target virtual object reaching the life value threshold, or the position of the target virtual object being outside the range displayed in the game interface.
  • In summary, the virtual object control device provided in the embodiments of this application marks the target virtual object in the game interface in a predetermined manner and, when it receives a first trigger signal that acts on the first control and meets the automatic control condition, controls the first virtual object to change the target attribute value of the target virtual object. Because the target virtual object is marked before the first virtual object is controlled to change its target attribute value, the user can learn from the mark, before performing any operation, which target virtual object the operation will act on. If the target virtual object does not match the expected object, the target virtual object can be changed through other operations; if it does match, the operation can be quickly performed through the first control, which improves the efficiency and accuracy of controlling the virtual object. At the same time, there is no need to confirm and mark the operation object while the operation is being executed, which reduces the operation execution delay and improves the efficiency of human-computer interaction.
  • FIG. 15 shows a structural block diagram of a terminal provided by an embodiment of the present application.
  • the terminal 1500 includes a processor 1501 and a memory 1502.
  • the processor 1501 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on.
  • the processor 1501 can be implemented in at least one hardware form among Digital Signal Processing (DSP), Field Programmable Gate Array (FPGA), and Programmable Logic Array (PLA) .
  • the processor 1501 may also include a main processor and a coprocessor.
  • the main processor is a processor used to process data in the awake state, also called a central processing unit (CPU);
  • The coprocessor is a low-power processor used to process data in the standby state.
  • the processor 1501 may be integrated with a graphics processing unit (GPU), and the GPU is used for rendering and drawing content that needs to be displayed on the display screen.
  • the processor 1501 may further include an artificial intelligence (AI) processor, and the AI processor is used to process computing operations related to machine learning.
  • the memory 1502 may include one or more computer-readable storage media, which may be non-transitory.
  • the memory 1502 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices and flash memory storage devices.
  • The non-transitory computer-readable storage medium in the memory 1502 is used to store at least one instruction, at least one program, a code set or an instruction set, which is to be executed by the processor 1501 to implement the virtual object control method provided in the method embodiments of this application.
  • the terminal 1500 may optionally further include: a peripheral device interface 1503 and at least one peripheral device.
  • the processor 1501, the memory 1502, and the peripheral device interface 1503 may be connected by a bus or a signal line.
  • Each peripheral device can be connected to the peripheral device interface 1503 through a bus, a signal line, or a circuit board.
  • the peripheral device may include: at least one of a communication interface 1504, a display screen 1505, an audio circuit 1506, a camera component 1507, a positioning component 1508, and a power supply 1509.
  • Those skilled in the art can understand that the structure shown in FIG. 15 does not constitute a limitation on the terminal 1500, which may include more or fewer components than shown in the figure, combine certain components, or adopt a different component arrangement.
  • FIG. 16 shows a schematic structural diagram of a server provided by an embodiment of the present application. Specifically:
  • The server 1600 includes a central processing unit (CPU) 1601, a system memory 1604 including a random access memory (RAM) 1602 and a read-only memory (ROM) 1603, and a system bus 1605 connecting the system memory 1604 and the central processing unit 1601.
  • The server 1600 further includes a basic input/output (I/O) system 1606 that helps transfer information between the devices in the computer, and a mass storage device 1607 for storing an operating system 1613, application programs 1614, and other program modules 1615.
  • the basic input/output system 1606 includes a display 1608 for displaying information and an input device 1609 such as a mouse and a keyboard for the user to input information.
  • the display 1608 and the input device 1609 are both connected to the central processing unit 1601 through the input and output controller 1610 connected to the system bus 1605.
  • the basic input/output system 1606 may also include an input and output controller 1610 for receiving and processing input from multiple other devices such as a keyboard, a mouse, or an electronic stylus.
  • the input and output controller 1610 also provides output to a display screen, a printer, or other types of output devices.
  • the mass storage device 1607 is connected to the central processing unit 1601 through a mass storage controller (not shown) connected to the system bus 1605.
  • the mass storage device 1607 and its associated computer-readable medium provide non-volatile storage for the server 1600. That is to say, the mass storage device 1607 may include a computer-readable medium (not shown) such as a hard disk or a CD-ROM (Compact Disc Read-Only Memory) drive.
  • the computer-readable media may include computer storage media and communication media.
  • Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storing information such as computer readable instructions, data structures, program modules or other data.
  • Computer storage media include RAM, ROM, erasable programmable read-only memory (EPROM), flash memory or other solid-state storage technologies; CD-ROM, digital video disc (DVD) or other optical storage; and tape cartridges, magnetic tape, disk storage or other magnetic storage devices.
  • the aforementioned system memory 1604 and mass storage device 1607 may be collectively referred to as memory.
  • According to various embodiments of this application, the server 1600 may also run by being connected, through a network such as the Internet, to a remote computer on the network. That is, the server 1600 can be connected to the network 1612 through the network interface unit 1611 connected to the system bus 1605, or the network interface unit 1611 can be used to connect to other types of networks or remote computer systems (not shown).
  • the memory also includes at least one instruction, at least one program, code set, or instruction set.
  • The at least one instruction, at least one program, code set, or instruction set is stored in the memory and configured to be executed by one or more processors to implement the virtual object control method described above.
  • a computer device is also provided.
  • the computer equipment can be a terminal or a server.
  • the computer device includes a processor and a memory.
  • The memory stores at least one instruction, at least one program, a code set or an instruction set, and the at least one instruction, the at least one program, the code set or the instruction set is loaded and executed by the processor to implement the virtual object control method described above.
  • The embodiments of the present application also provide a computer-readable storage medium storing at least one instruction, and the at least one instruction is loaded and executed by a processor to implement the virtual object control method described in the foregoing embodiments.
  • the embodiments of the present application also provide a computer program product or computer program.
  • the computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instruction from the computer-readable storage medium, and the processor executes the computer instruction, so that the computer device executes the virtual object control method provided in the various optional implementation manners of the foregoing aspects.
  • the functions described in the embodiments of the present application may be implemented by hardware, software, firmware, or any combination thereof. When implemented by software, these functions can be stored in a computer-readable storage medium or transmitted as one or more instructions or codes on the computer-readable storage medium.
  • the computer-readable storage medium includes a computer storage medium and a communication medium, where the communication medium includes any medium that facilitates the transfer of a computer program from one place to another.
  • the storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.

Abstract

A virtual object control method, apparatus, device, and storage medium. The method includes: displaying a game interface that contains a first virtual object (301), at least one second virtual object (302), and a first control (303); determining a target virtual object from the at least one second virtual object (302) and marking the target virtual object in a predetermined manner; receiving a first trigger signal acting on the first control (303); and, in response to the first trigger signal meeting an automatic control condition, controlling the first virtual object (301) to use a virtual prop to change a target attribute value of the target virtual object. With this method, the user can learn from the mark, before performing any operation, which target virtual object the operation will act on; if the target virtual object does not match the expected object, the target virtual object can be changed in advance through other operations, which improves the efficiency and accuracy of controlling the virtual object.

Description

虚拟对象的控制方法、装置、设备及存储介质
本申请要求于2020年04月23日提交的申请号为202010328506.0、发明名称为“虚拟对象的控制方法、装置、设备及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请实施例涉及计算机技术领域,特别涉及一种虚拟对象的控制方法、装置、设备及存储介质。
背景技术
在基于二维或三维虚拟环境的应用程序中,比如多人在线战术竞技游戏(Multiplayer Online Battle Arena,MOBA),用户可以通过操作虚拟场景中的虚拟对象,对其他虚拟对象、虚拟建筑等释放技能,以某一种胜利条件作为目标进行竞技。
相关技术中,用户通过点击技能控件向虚拟对象快速释放技能,该虚拟对象为服务器根据客户端数据确定出的默认虚拟对象,或者用户可以通过拖拽技能控件主动选择虚拟对象并释放技能,技能释放过程中,应用程序的表现层会对被释放技能的虚拟对象进行特殊标记,并显示在虚拟环境画面中。
然而,采用相关技术中在技能释放过程中对虚拟对象进行特殊标记的方法,用户只能在技能释放过程中确定释放的对象是否是预期的虚拟对象,而无法在释放技能前得知技能对应的虚拟对象,可能导致技能释放对象错误,从而造成技能资源的浪费。
发明内容
本申请实施例提供了一种虚拟对象的控制方法、装置、设备及存储介质,可以使用户在未进行操作时,就能够通过标记知悉操作所作用的目标虚拟对象,从而提高了对虚拟对象的控制效率和控制准确度。所述技术方案如下:
一方面,本申请实施例提供了一种虚拟对象的控制方法,所述方法应用于终端,所述方法包括:
显示对局界面,所述对局界面中包含第一虚拟对象、至少一个第二虚拟对象以及第一控件,所述第一虚拟对象和所述第二虚拟对象位于虚拟世界中,且所述第一虚拟对象和所述第二虚拟对象属于不同阵营,所述第一控件用于控制所述第一虚拟对象使用虚拟道具改变其它虚拟对象的目标属性值;
从至少一个所述第二虚拟对象中确定出目标虚拟对象,并通过预定方式对所述目标虚拟对象进行标记;
接收作用于所述第一控件的第一触发信号;
响应于所述第一触发信号符合自动控制条件,控制所述第一虚拟对象使用虚拟道具改变所述目标虚拟对象的所述目标属性值。
另一方面,本申请实施例提供了一种虚拟对象的控制装置,所述装置包括:
显示模块,用于显示对局界面,所述对局界面中包含第一虚拟对象、至少一个第二虚拟对象以及第一控件,所述第一虚拟对象和所述第二虚拟对象位于虚拟世界中,且所述第一虚拟对象和所述第二虚拟对象属于不同阵营,所述第一控件用于控制所述第一虚拟对象使用虚拟道具改变其它虚拟对象的目标属性值;
第一确定模块,用于从至少一个所述第二虚拟对象中确定出目标虚拟对象,并通过预定方式对所述目标虚拟对象进行标记;
接收模块,用于接收作用于所述第一控件的第一触发信号;
第一控制模块,用于响应于所述第一触发信号符合自动控制条件,控制所述第一虚拟对象使用虚拟道具改变所述目标虚拟对象的所述目标属性值。
另一方面,本申请实施例提供了一种计算机设备,所述计算机设备包括处理器和存储器,所述存储器中存储有至少一段程序,所述至少一段程序由所述处理器加载并执行以实现如上述方面所述的虚拟对象的控制方法。
另一方面,本申请实施例提供了一种计算机可读存储介质,所述计算机可读存储介质中存储有至少一段程序,所述至少一段程序由处理器加载并执行以实现如上述方面所述的虚拟对象的控制方法。
另一方面,根据本申请的一个方面,提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。终端的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该终端执行上述方面的各种可选实现方式 中提供的虚拟对象的控制方法。
本申请实施例提供的技术方案带来的有益效果至少包括:
本申请实施例中,通过预定方式标记对局界面中的目标虚拟对象,在接收到作用于第一控件的符合自动控制条件的第一触发信号时,控制第一虚拟对象改变目标虚拟对象的目标属性值;在控制第一虚拟对象改变目标虚拟对象的目标属性值之前标记出目标虚拟对象,使用户在未进行操作时,就可以通过标记知悉操作所作用的目标虚拟对象,若目标虚拟对象与期望对象不符,可以通过其他操作更改目标虚拟对象,若目标虚拟对象与期望对象相符,则可以通过第一控件快速执行操作,提高了对虚拟对象的控制效率和控制准确度;同时可以避免在执行操作过程中还需要确认和标记操作对象,从而可以降低操作执行时延,进而提高人机交互效率。
附图说明
图1是根据本申请一示例性实施例提供的实施环境的示意图;
图2是根据本申请一示例性实施例提供的虚拟对象的控制方法的流程图;
图3是根据本申请一示例性实施例提供的对局界面的示意图;
图4是根据本申请一示例性实施例提供的目标虚拟对象标记的示意图;
图5是根据本申请另一示例性实施例提供的对局界面的示意图;
图6是根据本申请另一示例性实施例提供的虚拟对象的控制方法的流程图;
图7是根据本申请一示例性实施例提供的确定候选虚拟对象的示意图;
图8是根据本申请一示例性实施例提供的第一控件的示意图;
图9是根据本申请另一示例性实施例提供的对局界面的示意图;
图10是根据本申请另一示例性实施例提供的对局界面的示意图;
图11是根据本申请另一示例性实施例提供的虚拟对象的控制方法的流程图;
图12是根据本申请另一示例性实施例提供的对局界面的示意图;
图13是根据本申请另一示例性实施例提供的虚拟对象的控制方法的流程图;
图14是根据本申请一示例性实施例提供的虚拟对象的控制装置的结构框 图;
图15是根据本申请一示例性实施例提供的终端的结构框图;
图16是根据本申请一示例性实施例提供的服务器的结构框图。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请实施方式作进一步地详细描述。
在本文中提及的“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。字符“/”一般表示前后关联对象是一种“或”的关系。
首先,对本申请实施例中涉及的名词进行简要介绍:
虚拟世界:是应用程序在终端上运行时显示(或提供)的虚拟世界。该虚拟世界可以是对真实世界的仿真世界,也可以是半仿真半虚构的三维世界,还可以是纯虚构的三维世界。虚拟世界可以是二维虚拟世界、2.5维虚拟世界和三维虚拟世界中的任意一种。可选地,该虚拟世界还用于至少两个虚拟对象之间的虚拟世界对战,在该虚拟世界中具有可供至少两个虚拟对象使用的虚拟资源。可选地,该虚拟世界包括对称的左下角区域和右上角区域,属于两个敌对阵营的虚拟对象分别占据其中一个区域,并以摧毁对方区域深处的目标建筑/据点/基地/水晶来作为胜利目标。
虚拟对象:是指在虚拟世界中的可活动对象。该可活动对象可以是虚拟人物、虚拟动物、动漫人物中的至少一种。可选地,当虚拟世界为三维虚拟世界时,虚拟对象可以是三维立体模型,每个虚拟对象在三维虚拟世界中具有自身的形状和体积,占据三维虚拟世界中的一部分空间。可选地,虚拟对象是基于三维人体骨骼技术构建的三维角色,该虚拟对象通过穿戴不同的皮肤来实现不同的外在形象。在一些实现方式中,虚拟对象也可以采用2.5维或2维模型来实现,本申请实施例对此不加以限定。
多人在线战术竞技游戏:是指在虚拟世界中,分属至少两个敌对阵营的不同虚拟队伍分别占据各自的地图区域,以某一种胜利条件作为目标进行竞技的 游戏。该胜利条件包括但不限于:占领据点或摧毁敌对阵营据点、击杀敌对阵营的虚拟对象、在指定场景和时间内保证自身的存活、抢夺到某种资源、在指定时间内比分超过对方中的至少一种。战术竞技可以以局为单位来进行,每局战术竞技的地图可以相同,也可以不同。每个虚拟队伍包括一个或多个虚拟对象,比如1个、2个、3个或5个等。
虚拟道具:是指虚拟对象在虚拟环境中能够使用的道具,包括手枪、步枪、狙击枪、匕首、刀、剑、斧子、绳索等能够对其他虚拟对象发起伤害的虚拟武器,子弹等补给道具,盾牌、盔甲、装甲车等防御道具,虚拟光束、虚拟冲击波等用于虚拟对象释放技能时通过手部展示的虚拟道具,以及虚拟对象的部分身体躯干,比如手部、腿部。可选的,本申请实施例中的虚拟道具指虚拟对象默认装备的道具。
用户界面(User Interface,UI)控件:是指在应用程序的用户界面上能够看见的任何可视控件或元素,比如,图片、输入框、文本框、按钮、标签等控件,其中一些UI控件响应用户的操作。
请参考图1,其示出了本申请一个实施例提供的实施环境的示意图。该实施环境可以包括:第一终端110、服务器120和第二终端130。
第一终端110安装和运行有支持虚拟世界的应用程序111,该应用程序111可以是多人在线对战程序。当第一终端运行应用程序111时,第一终端110的屏幕上显示应用程序111的用户界面。该应用程序111可以是军事仿真程序、MOBA游戏、大逃杀射击游戏、模拟战略游戏(Simulation Game,SLG)的任意一种。在本实施例中,以该应用程序111是MOBA游戏来举例说明。第一终端110是第一用户112使用的终端,第一用户112使用第一终端110控制位于虚拟世界中的第一虚拟对象进行活动,第一虚拟对象可以称为第一用户112的主控虚拟对象。第一虚拟对象的活动包括但不限于:调整身体姿态、爬行、步行、奔跑、骑行、飞行、跳跃、驾驶、拾取、射击、攻击、投掷、释放技能中的至少一种。示意性的,第一虚拟对象是第一虚拟人物,比如仿真人物或动漫人物。
第二终端130安装和运行有支持虚拟世界的应用程序131,该应用程序131可以是多人在线对战程序。当第二终端130运行应用程序131时,第二终端130的屏幕上显示应用程序131的用户界面。该客户端可以是军事仿真程序、MOBA 游戏、大逃杀射击游戏、SLG游戏中的任意一种,在本实施例中,以该应用程序131是MOBA游戏来举例说明。第二终端130是第二用户132使用的终端,第二用户132使用第二终端130控制位于虚拟世界中的第二虚拟对象进行活动,第二虚拟对象可以称为第二用户132的主控虚拟角色。示意性的,第二虚拟对象是第二虚拟人物,比如仿真人物或动漫人物。
可选地,第一虚拟对象和第二虚拟对象处于同一虚拟世界中。可选地,第一虚拟对象和第二虚拟对象可以属于同一个阵营、同一个队伍、同一个组织、具有好友关系或具有临时性的通讯权限。可选的,第一虚拟对象和第二虚拟对象可以属于不同的阵营、不同的队伍、不同的组织或具有敌对关系。
可选地,第一终端110和第二终端130上安装的应用程序是相同的,或两个终端上安装的应用程序是不同操作系统平台(安卓或IOS)上的同一类型应用程序。第一终端110可以泛指多个终端中的一个,第二终端130可以泛指多个终端中的另一个,本实施例仅以第一终端110和第二终端130来举例说明。第一终端110和第二终端130的设备类型相同或不同,该设备类型包括:智能手机、平板电脑、电子书阅读器、MP3播放器、MP4播放器、膝上型便携计算机和台式计算机中的至少一种。
图1中仅示出了两个终端,但在不同实施例中存在多个其它终端可以接入服务器120。可选地,还存在一个或多个终端是开发者对应的终端,在该终端上安装有支持虚拟世界的应用程序的开发和编辑平台,开发者可在该终端上对应用程序进行编辑和更新,并将更新后的应用程序安装包通过有线或无线网络传输至服务器120,第一终端110和第二终端130可从服务器120下载应用程序安装包实现对应用程序的更新。
第一终端110、第二终端130以及其它终端通过无线网络或有线网络与服务器120相连。
服务器120包括一台服务器、多台服务器组成的服务器集群、云计算平台和虚拟化中心中的至少一种。服务器120用于为支持三维虚拟世界的应用程序提供后台服务。可选地,服务器120承担主要计算工作,终端承担次要计算工作;或者,服务器120承担次要计算工作,终端承担主要计算工作;或者,服务器120和终端之间采用分布式计算架构进行协同计算。
在一个示意性的例子中,服务器120包括存储器121、处理器122、用户账 号数据库123、对战服务模块124、面向用户的输入/输出接口(Input/Output Interface,I/O接口)125。其中,处理器122用于加载服务器120中存储的指令,处理用户账号数据库123和对战服务模块124中的数据;用户账号数据库123用于存储第一终端110、第二终端130以及其它终端所使用的用户账号的数据,比如用户账号的头像、用户账号的昵称、用户账号的战斗力指数,用户账号所在的服务区;对战服务模块124用于提供多个对战房间供用户进行对战,比如1V1对战、3V3对战、5V5对战等;面向用户的I/O接口125用于通过无线网络或有线网络和第一终端110和/或第二终端130建立通信交换数据。
请参考图2,其示出了本申请一个示例性实施例提供的虚拟对象的控制方法的流程图。本实施例以该方法用于图1所示实施环境中的第一终端110或第二终端130或该实施环境中的其它终端为例进行说明,该方法包括如下步骤:
步骤201,显示对局界面,对局界面中包含第一虚拟对象、至少一个第二虚拟对象以及第一控件。
其中,第一虚拟对象和第二虚拟对象位于虚拟世界中,且第一虚拟对象和第二虚拟对象属于不同阵营,第一控件用于控制第一虚拟对象使用虚拟道具改变其它虚拟对象的目标属性值。
在一种可能的实施方式中,对局界面包括虚拟世界画面和位于该虚拟世界画面之上的控件层;虚拟世界画面中包含第一虚拟对象和至少一个第二虚拟对象,第一虚拟对象是属于第一阵营的虚拟对象,第二虚拟对象是属于第二阵营的虚拟对象,二者属于敌对关系,例如,第二虚拟对象包括第二阵营中由其他终端控制的虚拟对象,以及第二阵营中由服务器控制的虚拟对象,包括由服务器控制的士兵、可以攻克的虚拟建筑等。
可选地,第二虚拟对象还包括属于第三阵营的虚拟对象,该第三阵营由服务器控制,例如虚拟世界中的野怪。
示意性的,虚拟世界是具有任意边界形状的虚拟世界,第一虚拟对象位于对局界面的可视范围内。可选的,第一虚拟对象位于虚拟世界画面的视觉中心,即位于采用第三人称视角观察虚拟世界得到的虚拟世界画面的中心。
视角是指以虚拟角色的第一人称视角或者第三人称视角在虚拟世界中进行观察时的观察角度。可选地,本申请的实施例中,视角是在虚拟世界中通过摄 像机模型对虚拟角色进行观察时的角度。可选地,摄像机模型在虚拟世界中对虚拟对象进行自动跟随,即,当虚拟对象在虚拟世界中的位置发生改变时,摄像机模型跟随虚拟对象在虚拟世界中的位置同时发生改变,且该摄像机模型在虚拟世界中始终处于虚拟对象的预设距离范围内。可选地,在自动跟随过程中,摄像头模型和虚拟对象的相对位置不发生变化。本申请实施例以第三人称视角为例进行说明,可选地,摄像机模型位于虚拟对象(比如虚拟人物的头肩部)的后方。
在一种可能的实施方式中,对局界面的控件层中包括用于控制第一虚拟对象使用虚拟道具改变其它虚拟对象的目标属性值的第一控件,示意性的,该第一控件用于控制第一虚拟对象的基础操作,也可以被称为普通攻击控件。
示意性的,请参考图3,其示出了一种对局界面。对局界面的虚拟世界画面中包含第一虚拟对象301和第二虚拟对象302,以及视野内的建筑、植物、道路等虚拟环境;对局界面的控件层包括第一控件303和其它技能控件304,以及用于控制第一虚拟对象301进行移动和改变方向的方向控件305,用户可以通过点击、拖拽等操作触发此类控件,此外,控件层还包括用于显示虚拟世界的地图控件306、用于显示战绩、对局时长等信息的控件307以及用于游戏设置、语音通话、消息发送等其它功能的UI控件。
步骤202,从至少一个第二虚拟对象中确定出目标虚拟对象,并通过预定方式对目标虚拟对象进行标记。
在一种可能的实施方式中,终端实时搜索第二虚拟对象,将符合预设条件的第二虚拟对象确定为目标虚拟对象,并对目标虚拟对象进行标记。
其中,预设条件可以是:第二虚拟对象符合普通攻击作用对象、第二虚拟对象与第一虚拟对象之间的距离满足预设距离条件(比如,距离第一虚拟对象最近)、第二虚拟对象的属性值满足预设属性值条件(比如,第二虚拟对象的生命值或防御值最低)以及第二虚拟对象所属的目标阵营属于预设阵营等中的至少一种。可选的,预设条件还可以是用户预先设置的条件,比如,设定优先攻击某一类虚拟对象,本申请实施例对选取目标虚拟对象的方式不构成限定。
可选地,对目标虚拟对象进行标记的预定方式包括将目标虚拟对象的人物形象和/或目标属性信息条的边缘进行标亮、将目标虚拟对象携带的信息更换颜色、在目标虚拟对象的人物形象附近(例如正下方)增加特殊标记、显示由第 一虚拟对象指向目标虚拟对象的射线等方式。
示意性的,请参考图4,其示出了一种对目标虚拟对象401进行标记的方式。终端从第二虚拟对象中确定出目标虚拟对象401后,将目标虚拟对象401上方的目标属性信息条402的外边缘添加高亮效果,并在目标虚拟对象401的正下方显示一个定位标记403。
示意性的,请参考图5,对局界面中显示有第一虚拟对象501、第二虚拟对象502a和第二虚拟对象502b,终端从两个第二虚拟对象中确定出第二虚拟对象502a为目标虚拟对象,则通过上述预定方式对第二虚拟对象502a进行标记。
步骤203,接收作用于第一控件的第一触发信号。
在一种可能的实施方式中,第一触发信号是用户对第一控件进行触发操作时生成的,例如,用户可以通过对第一控件进行点击、拖拽等操作触发第一触发信号。
步骤204,响应于第一触发信号符合自动控制条件,控制第一虚拟对象使用虚拟道具改变目标虚拟对象的目标属性值。
在一种可能的实施方式中,为了实现对第一虚拟对象的不同控制操作,终端预先设定有针对不同第一触发信号的控制指令,用户对第一控件实施的不同操作会对应生成不同的第一触发信号,从而控制第一虚拟对象根据相应的控制指令执行相应操作。
在一种可能的实施方式中,终端内预设有自动控制条件,当用户对第一控件的触控操作产生的第一触发信号满足自动控制条件时,表示该第一触发信号对应的控制指令为:对预先标记的目标虚拟对象释放技能,从而实现控制第一虚拟对象使用虚拟道具改变目标虚拟对象的目标属性值。
可选地,终端基于第一触发信号控制第一虚拟对象改变目标虚拟对象的目标属性值,该目标属性值包括第一虚拟对象的剩余健康值(或称为剩余生命值)、剩余能量值、剩余法力值等属性值中的至少一种,例如,第一虚拟对象使用虚拟道具改变目标虚拟对象的目标属性值,可以表现为第一虚拟对象使用虚拟道具攻击目标虚拟对象,从而降低目标虚拟对象的剩余生命值。
综上所述,本申请实施例中,通过预定方式标记对局界面中的目标虚拟对象,在接收到作用于第一控件的符合自动控制条件的第一触发信号时,控制第一虚拟对象改变目标虚拟对象的目标属性值;在控制第一虚拟对象改变目标虚 拟对象的目标属性值之前标记出目标虚拟对象,使用户在未进行操作时,就可以通过标记知悉操作所作用的目标虚拟对象,若目标虚拟对象与期望对象不符,可以通过其他操作更改目标虚拟对象,若目标虚拟对象与期望对象相符,则可以通过第一控件快速执行操作,提高了对虚拟对象的控制效率和控制准确度。
请参考图6,其示出了本申请另一个示例性实施例提供的虚拟对象的控制方法的流程图。本实施例以该方法用于图1所示实施环境中的第一终端110或第二终端130或该实施环境中的其它终端为例进行说明,该方法包括如下步骤:
步骤601,显示对局界面,对局界面中包含第一虚拟对象、至少一个第二虚拟对象以及第一控件。
步骤601的实施方式可以参考上述步骤201,本申请实施例在此不再赘述。
步骤602,响应于至少一个第二虚拟对象中不包含主动选择虚拟对象,获取第一虚拟对象的第一对象信息,以及至少一个第二虚拟对象的第二对象信息。
可选地,第一控件具有自动控制第一虚拟对象进行攻击以及主动选取攻击目标进而控制第一虚拟对象进行攻击两种功能,且主动选取的攻击目标的攻击优先级高于自动控制操作的攻击目标的攻击优先级。其中,主动选择虚拟对象是用户通过触发第一控件选取的虚拟对象,对象信息用于表征虚拟对象的状态和位置。示意性的,对象信息中的位置用于指示虚拟对象在虚拟世界中的位置,对象信息中的状态用于指示虚拟对象当前的各个属性值,例如,第一对象信息包括第一虚拟对象在虚拟世界中的坐标和第一虚拟对象的等级、武力值、所使用的虚拟道具,以及剩余能量值、剩余生命值等可能对武力值产生影响的信息中的至少一种,第二对象信息包括第二虚拟对象在虚拟世界中的坐标以及防御值、剩余生命值、剩余能量值等状态信息中的至少一种。
在一种可能的实施方式中,若用户未操作第一控件选取任意第二虚拟对象,对应的,第二虚拟对象中不包含主动选择虚拟对象,终端就需要根据第一对象信息和第二对象信息进行自动搜索,以便从至少一个第二虚拟对象中确定出目标虚拟对象。
由于确定出的目标虚拟对象是预设攻击技能(可以由触发第一控件释放的攻击技能)的释放对象,对应的,需要保证该预设攻击技能可以作用于该目标虚拟对象,或可以对该目标虚拟对象造成预期伤害,而是否可以达到上述技能 释放效果,与第一虚拟对象以及第二虚拟对象的位置信息、剩余生命值、防御值、剩余能量值等具有一定的关系,因此,在一种可能的实施方式中,当终端需要从多个第二虚拟对象中选择目标虚拟对象时,需要首先获取到第一虚拟对象和各个第二虚拟对象的对象信息,以便基于该对象信息从多个第二虚拟对象中选择出最佳的目标虚拟对象。
步骤603,根据第一对象信息和第二对象信息,从至少一个第二虚拟对象中确定出目标虚拟对象。
可选地,若某一第二虚拟对象的第二对象信息满足预设条件时,则终端将该第二虚拟对象确定为目标虚拟对象;若不存在第二对象信息满足预设条件的第二虚拟对象,则不存在目标虚拟对象。示意性的,目标虚拟对象的数量为1。
需要说明的是,本申请仅以目标虚拟对象的数量为1进行示例性说明,并不限制目标虚拟对象的数量,如果存在两个或两个以上目标虚拟对象满足预设条件,对应的目标虚拟对象的数量可以是两个或两个以上。
可选的,选取的目标虚拟对象的数量也可以由预设攻击技能对应的攻击类型确定,比如,预设攻击技能仅能作用于单个虚拟对象,对应的,可以仅选择单个目标虚拟对象,若预设攻击技能可以作用于两个及两个以上的虚拟对象,对应的,终端可以选择两个及两个以上的符合预设条件的目标虚拟对象。
在一种可能的实施方式中,第一对象信息中包含第一位置和第一范围,第二对象信息中包含第二位置,第一位置为第一虚拟对象在虚拟世界中所处的位置,第二位置为第二虚拟对象在虚拟世界中所处的位置,第一范围是虚拟道具的使用范围。
在一种可能的实施方式中,步骤603包括如下步骤一至三:
步骤一,根据第一范围确定第二范围,第二范围大于第一范围。
其中,第二范围是终端搜索目标虚拟对象的范围。在一种可能的实施方式中,为了将第一虚拟对象附近的第二虚拟对象纳入搜索范围,设置第二范围大于第一范围。在一种可能的实施方式中,第一范围和第二范围同为圆形范围,或者同为第一虚拟对象朝向预定方向和预定角度的扇形区域。
示意性的,终端设置第二范围的半径为第一范围的半径+k,比如当第一范围为一个圆形区域时,设置第二范围为相同圆心且半径比第一范围的半径大2米的圆形区域。例如,第一范围是虚拟世界中以第一位置为圆心,以5米为半 径的圆所在的区域,则确定第二范围是虚拟世界中以第一位置为圆心,以7米为半径的圆所在的区域;或者第一范围是虚拟世界中以第一位置为圆心,以5米为半径,角度为45°且处于第一虚拟对象正前方的扇形区域,则确定第二范围是虚拟世界中以第一位置为圆心,以7米为半径的圆形区域。
可选的,由于目标虚拟对象是发动预设攻击技能的技能释放对象,也就是说,目标虚拟对象需要位于预定技能释放范围内,因此,在一种可能的实施方式中,可以基于预设攻击技能的技能释放范围来确定第二范围。示意性的,第二范围可以小于等于技能释放范围。可选的,不同攻击技能可以对应不同技能释放范围,对应的,不同攻击技能可以设定不同第二范围。
步骤二,根据第一位置和第二位置,将位于第二范围内的第二虚拟对象确定为候选虚拟对象。
由于目标虚拟对象首先需要位于第一虚拟对象的可攻击范围之内,对应的,目标虚拟对象需要满足一定的位置条件,即目标虚拟对象对应的第二位置需要位于第二范围内,在一种可能的实施方式中,终端将位于第二范围内的第二虚拟对象确定为候选虚拟对象,再根据其他条件从候选虚拟对象中确定出目标虚拟对象。
示意性的,请参考图7,当前虚拟世界中的虚拟对象包含第一虚拟对象701,第二虚拟对象702a和第二虚拟对象702b,终端根据第一虚拟对象701的第一位置和第一范围703确定出第二范围704,并检测到第二虚拟对象702a位于第二范围704内,则将第二虚拟对象702a确定为候选虚拟对象。
步骤三,将满足选取条件的候选虚拟对象确定为目标虚拟对象,选取条件包括如下至少一项:与第一虚拟对象之间的距离最小、目标属性值最低、属于目标阵营。
在一种可能的实施方式中,终端从候选虚拟对象中确定出一个满足选取条件的作为目标虚拟对象,该目标虚拟对象为终端自动确定的在当前时刻的优先攻击目标,若不存在满足选取条件的候选虚拟对象,则终端确定当前时刻目标虚拟对象不存在。
示意性的,目标阵营包括第二阵营和第三阵营,其中,第二阵营为与第一虚拟对象所属的第一阵营具有敌对关系的阵营,第三阵营为包含服务器控制的野怪等虚拟对象的阵营。在一种可能的实施方式中,若满足选取条件的候选虚 拟对象中包含属于第二阵营的候选虚拟对象和属于第三阵营的候选虚拟对象,则终端优先从属于第二阵营的候选虚拟对象中确定目标虚拟对象,若属于第二阵营的候选虚拟对象中不存在目标虚拟对象,则继续从属于第三阵营的候选虚拟对象中确定目标虚拟对象中确定目标虚拟对象。
由于目标虚拟对象的选取与技能释放效果有关,为了保证技能释放效果,比如,提高技能释放所造成的击杀效率,而击杀效率可能与目标虚拟对象对应的剩余生命值、剩余防御值等有关,比如,候选虚拟对象A的生命值高于候选虚拟对象B,则对两个候选虚拟对象分别释放相同技能操作,显然击杀候选虚拟对象B的概率大于击杀虚拟对象A的概率,因此,为了提高第一虚拟对象的击杀效率,可以将候选虚拟对象B确定为目标虚拟对象,即所选取的目标虚拟对象的目标属性值最低。可选地,目标属性值除了上文列举的剩余生命值之外,候选虚拟对象的剩余能量值、防御值等,也会影响目标攻击的使用效率,对应的,目标属性值可以包括剩余生命值、剩余能量值、防御值等属性值。可选的,由于攻击技能对目标虚拟对象的命中率可能还与目标虚拟对象与第一虚拟对象之间的距离有关,可知距离越远,命中率也会相对降低,因此,为了进一步提高预设攻击技能的命中率,将与第一虚拟对象之间的距离最小也作为选择目标虚拟对象的预设条件之一。
可选的,选取条件还可以包括:预设攻击技能是否可以作用于目标虚拟对象、目标攻击作用于目标虚拟对象是否有概率被无效化等。
示意性的,如图7所示,若第二虚拟对象702a满足选取条件,则确定第二虚拟对象702a为目标虚拟对象;若第二虚拟对象702a不满足选取条件,则确定当前不存在目标虚拟对象。
步骤604,通过第一预定方式对目标虚拟对象进行标记。
可选地,第一预定方式包括将目标虚拟对象的人物形象和/或目标属性信息条的边缘进行标亮、将目标虚拟对象携带的信息更换颜色、在目标虚拟对象的人物形象附近(例如正下方)增加特殊标记、显示由第一虚拟对象指向目标虚拟对象的射线等方式。
步骤605,响应于至少一个第二虚拟对象中包含主动选择虚拟对象,将主动选择虚拟对象确定为目标虚拟对象。
在一种可能的实施方式中,主动选择虚拟对象是用户通过第一控件选择的 下一次攻击时的攻击对象,可以与终端自动搜索确定的目标虚拟对象相同,也可能与终端自动搜索确定的目标虚拟对象不同,并且主动选择虚拟对象的攻击优先级高于终端自动搜索确定的目标虚拟对象的攻击优先级,因此当第二虚拟对象中包含主动选择虚拟对象时,终端直接将主动选择虚拟对象确定为目标虚拟对象,不进行确定候选虚拟对象的过程。
示意性的,如图7所示,若第二虚拟对象702b为主动选择虚拟对象,则终端直接将第二虚拟对象702b确定为目标虚拟对象。
步骤606,通过第二预定方式对目标虚拟对象进行标记。
可选地,第二预定方式与第一预定方式相同,或第二预定方式与第一预定方式不同,第二预定方式下标记效果的显著程度高于第一预定方式下标记效果的显著程度。
示意性的,第二预定方式与第一预定方式的标记方式不同,例如第二预定方式采用与第一预定方式不同的标记位置,或标记位置相同但使用不同的颜色进行,比如,第一预定方式为将目标虚拟对象的目标属性信息条的边缘进行标亮,而第二预定方式为在目标虚拟对象的人物形象正下方增加定位标记。
步骤607,接收作用于第一控件的第一触发信号。
步骤607的实施方式可以参考上述步骤203,本申请实施例在此不再赘述。
步骤608,响应于第一触发信号对应的触控结束位置位于第一自动控制区域,确定第一触发信号符合自动控制条件。
在一种可能的实施方式中,第一控件包括第一自动控制区域和第一主动控制区域,第一自动控制区域和第一主动控制区域之间不存在交集,其中,第一自动控制区域内的触发操作用于触发快速攻击目标虚拟对象;而第一主动控制区域内的触发操作用于触发用户自主选择目标虚拟对象。
可选的,第一控件为圆形控件,第一自动控制区域为圆形,主动控制区域为第一自动控制区域周侧的环形区域;或者第一自动控制区域为第一控件的左侧半圆形区域,主动控制区域为第一控件的右侧半圆形区域,本申请实施例对此不作限定。示意性的,请参考图8,其示出了一种第一控件示意图。该第一控件为圆形控件,其中,第一自动控制区域801为以第一控件的圆心为圆心,半径小于第一控件半径的圆形区域,第一主动控制区域802为第一控件中除第一自动控制区域801以外的环形部分。
示意性的,如图8左侧所示,当用户用手指在第一自动控制区域801部分点击第一控件,并且触控结束位置803a位于第一自动控制区域801时,终端确定第一触发信号符合自动控制条件。即用户通过快速点击第一控件,就能够控制第一虚拟对象改变目标虚拟对象的目标属性值。
步骤609,响应于第一触发信号符合自动控制条件,控制第一虚拟对象使用虚拟道具改变目标虚拟对象的目标属性值。
在一种可能的实施方式中,终端根据触控结束位置确定第一触发信号是否满足自动控制条件,当触控结束位置位于第一自动控制区域时,确定满足自动控制条件,当触控结束位置位于第一主动控制区域或第一控件以外的区域时,确定不满足自动控制条件。
示意性的,请参考图9,第一控件包括第一自动控制区域901和第一主动控制区域902,用户点击第一控件,并且触控结束位置903位于第一自动控制区域901,即在图中所示的触控结束位置903抬起手指,终端确定第一触发信号符合自动控制条件。
步骤610,响应于第一触发信号对应的触控结束位置位于第一主动控制区域,确定第一触发信号符合主动控制条件。
在一种可能的实施方式中,用户可以通过对第一控件的触控操作主动选择目标虚拟对象,对应的,终端获取到第一触发信号的触控结束位置,并确定该触控结束为位置位于第一主动控制区域,确定第一触发信号符合主动控制条件,从而基于该触控结束位置确定用户需要选择的目标虚拟对象。
可选地,用户可以用手指按住第一控件同时进行拖拽,根据第一虚拟对象和预期攻击对象的位置,将手指停在第一主动控制区域的相应位置,完成主动选择虚拟对象的选择过程。
示意性的,请参考图8右侧,当用户用手指点击第一控件,并且拖动手指使触控结束位置803b位于第一主动控制区域802时,终端确定第一触发信号符合主动控制条件。
示意性的,请参考图10,第一控件包括第一自动控制区域1001和第二自动控制区域1002,当用户用手指点击第一控件,并且拖动手指使触控结束位置1003位于第二主动控制区域1002时,终端确定第一触发信号符合主动控制条件。
步骤611,将触控结束位置映射的第二虚拟对象确定为主动选择虚拟对象。
在一种可能的实施方式中,终端实时根据用户手指在第二自动控制区域的位置确定虚拟世界的映射位置,并根据触控结束位置映射的第二虚拟对象确定主动选择虚拟对象。为了方便用户快速确定主动选择虚拟对象,终端可以在对局界面中以射线、扇形区域等标记方式标记出触控结束位置在虚拟世界中的映射位置,当用户的触控操作在虚拟世界中对应瞄准某一第二虚拟对象,或触控操作映射的位置附近存在某一第二虚拟对象时,将该第二虚拟对象确定为主动选择虚拟对象,并执行步骤606,对主动选择虚拟对象进行标记。
可选地,用户选择主动选择对象的范围为对局界面包含的虚拟世界的范围。
示意性的,如图10所示,终端将第一控件中心点1004映射为第一虚拟对象1005的位置,确定出触控结束位置1003在对局界面中的映射位置,并在对局界面中显示两个映射位置之间的连线,当触控结束位置1003映射在第二虚拟对象1006所在位置时,确定第二虚拟对象1006为主动选择虚拟对象,并对第二虚拟对象1006进行标记。
步骤612,控制第一虚拟对象使用虚拟道具改变主动选择虚拟对象的目标属性值。
在一种可能的实施方式中,当用户在触控结束位置抬手时,终端控制第一虚拟对象使用虚拟道具改变主动选择虚拟对象的目标属性值。
可选地,当终端控制第一虚拟对象完成对主动选择虚拟对象改变目标属性值的操作后,终端仍然保持对主动选择虚拟对象的标记,并将该主动选择虚拟对象确定为目标虚拟对象,当下一次接收到符合自动控制条件的第一触摸信号时,控制第一虚拟对象改变该主动选择虚拟对象的目标属性值。
示意性的,若主动选择虚拟对象位于第一虚拟对象使用虚拟道具的范围之外,则终端控制第一虚拟对象向主动选择对象的方向前进,并在主动选择虚拟对象位于第一虚拟对象使用虚拟道具的范围内时,控制第一虚拟对象使用虚拟道具改变主动选择虚拟对象的目标属性值。
本申请实施例中,通过对第一控件进行区域划分,使得用户可以主动选择虚拟对象,当存在主动选择虚拟对象时,直接将主动选择虚拟对象确定为目标虚拟对象并进行标记,当不存在主动选择虚拟对象时,将符合预设条件的第二虚拟对象确定为目标虚拟对象并进行标记,用户可以通过快速操作控制第一虚拟对象改变目标虚拟对象的目标属性值;在用户对第一控件进行触控操作之前 标记出目标虚拟对象,使得用户能够提前掌握改变目标属性值操作的对象,若目标虚拟对象与用户预想的虚拟对象不同,则用户可以通过控制第一虚拟对象进行移动或瞄准等操作选择其他虚拟对象,避免了由于目标虚拟对象与预期的虚拟对象不同导致用户操作未达到预期效果,从而需要重新调整操作的情况,提高了操作效率。
在一种可能的实施方式中,除第一控件之外,对局界面的控件层中还包括其他用于控制第一虚拟对象对目标虚拟对象释放目标技能的控件,在图2的基础上,请参考图11,上述步骤202之后,虚拟对象的控制方法还包括如下步骤:
步骤205,响应于接收到作用于第二控件的第二触发信号,控制第一虚拟对象对目标虚拟对象释放目标技能。
在一种可能的实施方式中,对局界面中还包含第二控件,第二控件用于控制第一虚拟对象向其它虚拟对象释放目标技能。
示意性的,如图3所示,对局界面中包含至少一个用于控制第一虚拟对象向其它虚拟对象释放目标技能的第二控件304。
在一种可能的实施方式中,第二控件包括第二自动控制区域和第二主动控制区域,第二自动控制区域和第二主动控制区域之间不存在交集,步骤205包括如下步骤一和步骤二:
步骤一,响应于第二触发信号对应的触控结束位置位于第二自动控制区域,获取目标技能的技能释放规则。
由于第一虚拟对象的技能多种多样,部分技能并不适用于目标虚拟对象,例如,用户为同一阵营的虚拟对象施加特殊效果的技能,或者需要用户主动瞄准第二虚拟对象的技能,因此在一种可能的实施方式中,当第二触发信号对应的触控结束位置位于第二自动控制区域,终端首先获取目标技能的技能释放规则,该技能释放规则包括技能释放目标的类型、技能释放范围、技能要求的操作等。
示意性的,请参考图12,第二控件1200包括第二自动控制区域1201和第二主动控制区域1202,第二触发信号对应的触控结束位置1203位于第二自动控制区域1201时,终端获取技能3的技能释放规则。
步骤二,响应于目标虚拟对象符合技能释放规则,控制第一虚拟对象对目 标虚拟对象释放目标技能。
其中,目标虚拟对象是终端自动搜索确定的目标虚拟对象,或用户通过触发第一控件选择的主动选择对象。
可选的,为了提高技能释放效率,终端可以基于技能释放规则来选择目标虚拟对象。
示意性的,如图12所示,当目标虚拟对象1205符合技能3的技能释放规则时,终端控制第一虚拟对象1204对目标虚拟对象1205释放技能3。若目标虚拟对象1205位于技能3的释放范围之外,则终端控制第一虚拟对象1204向目标虚拟对象1205的方向前进,当目标虚拟对象1205位于技能3的释放范围内时,控制第一虚拟对象1204释放技能3。
步骤206,响应于目标虚拟对象的目标属性值达到属性值阈值,重新从至少一个第二虚拟对象中确定目标虚拟对象。
其中,目标虚拟对象的目标属性值达到属性值阈值包括如下情况中的至少一种:目标虚拟对象的剩余生命值达到生命值阈值,例如生命值阈值为0,当目标虚拟对象的剩余生命值为0时,满足目标属性值达到属性值阈值;目标虚拟对象的位置位于对局界面所显示的范围之外。
在一种可能的实施方式中,以目标属性值为剩余生命值为例,若目标虚拟对象对应的剩余生命值为0,表示目标虚拟对象被击杀,无法继续作为目标虚拟对象,终端需要重新获取第一虚拟对象和剩余第二虚拟对象对应的对象信息,以便基于该对象信息从第二虚拟对象中确定出目标虚拟对象。
本申请实施例中,通过对第二控件划分第二主动控制区域和第二自动控制区域,使得用户可以通过快速操作控制第一虚拟对象对目标虚拟对象释放目标技能,简化了部分技能的操作步骤,节省了用户的操作时长。
在一种可能的实施方式中,MOBA游戏包括表现层和逻辑层,请参考图13,其示出了本申请另一个示例性实施例提供的虚拟对象的控制方法的流程图。本实施例以该方法用于图1所示实施环境中的第一终端110或第二终端130或该实施环境中的其它终端为例进行说明,该方法包括如下步骤:
步骤1301,表现层获取触控操作的结束位置。
当用户对第一控件或第二控件进行触控操作时,表现层实时获取该触控操 作,并在检测到用户抬手时获取触控操作的结束位置。
步骤1302,表现层判断目标虚拟对象是否满足技能释放规则。
表现层获取触控操作对应的目标技能,该目标技能包括第一控件对应的基础技能和第二控件对应的特殊技能。根据目标技能的类型和技能释放规则,判断目标虚拟对象是否满足技能释放规则,当目标虚拟对象满足技能释放规则时,继续执行步骤1303。
步骤1303,当目标虚拟对象满足技能释放规则时,表现层向逻辑层发送技能释放信息。
表现层判断目标虚拟对象满足技能释放规则后,还需要逻辑层进一步判断,从而避免由于画面延迟或用户的作弊行为导致表现层判断错误。
步骤1304,逻辑层判断目标虚拟对象是否满足技能释放规则。
步骤1305,逻辑层向表现层发送判断结果。
逻辑层向表现层发送判断结果,若该结果指示目标虚拟对象不满足技能释放规则,则不执行后续步骤,对局界面中的显示结果为用户触发控件之后无响应,不释放目标技能。
步骤1306,当逻辑层判断结果为允许技能释放时,表现层向服务器发送技能释放请求。
步骤1307,服务器转发技能释放请求。
服务器接收终端表现层发送的技能释放请求,并根据技能释放请求中的技能释放信息获取目标终端,并向目标终端的逻辑层转发技能释放请求,该目标终端为参与当前对局的所有终端。
步骤1308,逻辑层进行技能释放计算处理。
当逻辑层接收到服务器转发的技能释放请求时,确定执行技能释放的操作,并进行技能释放计算处理,得到技能释放结果,例如目标虚拟对象在技能释放后的目标属性值。
步骤1309,逻辑层发送技能释放指令。
逻辑层计算结束后向表现层发送技能释放指令。
步骤1310,表现层技能释放表现。
表现层根据逻辑层的技能释放指令,在对局界面中渲染技能释放效果。
图14是本申请一个示例性实施例提供的虚拟对象的控制装置的结构框图,该装置可以设置于图1所示实施环境中的第一终端110或第二终端130或该实施环境中的其它终端,该装置包括:
显示模块1401,用于显示对局界面,所述对局界面中包含第一虚拟对象、至少一个第二虚拟对象以及第一控件,所述第一虚拟对象和所述第二虚拟对象位于虚拟世界中,且所述第一虚拟对象和所述第二虚拟对象属于不同阵营,所述第一控件用于控制所述第一虚拟对象使用虚拟道具改变其它虚拟对象的目标属性值;
第一确定模块1402,用于从至少一个所述第二虚拟对象中确定出目标虚拟对象,并通过预定方式对所述目标虚拟对象进行标记;
接收模块1403,用于接收作用于所述第一控件的第一触发信号;
第一控制模块1404,用于响应于所述第一触发信号符合自动控制条件,控制所述第一虚拟对象使用虚拟道具改变所述目标虚拟对象的所述目标属性值。
可选地,所述第一确定模块1402,包括:
第一获取单元,用于响应于至少一个所述第二虚拟对象中不包含主动选择虚拟对象,获取所述第一虚拟对象的第一对象信息,以及至少一个所述第二虚拟对象的第二对象信息,其中,所述主动选择虚拟对象是通过所述第一控件选取的虚拟对象,对象信息用于表征虚拟对象的状态和位置;
第一确定单元,用于根据所述第一对象信息和所述第二对象信息,从至少一个所述第二虚拟对象中确定出所述目标虚拟对象;
第一标记单元,用于通过第一预定方式对所述目标虚拟对象进行标记。
可选地,所述第一对象信息中包含第一位置和第一范围,所述第二对象信息中包含第二位置,所述第一位置为所述第一虚拟对象在所述虚拟世界中所处的位置,所述第二位置为所述第二虚拟对象在所述虚拟世界中所处的位置,所述第一范围是所述虚拟道具的使用范围;
所述第一确定单元,还用于:
根据所述第一范围确定第二范围,所述第二范围大于所述第一范围;
根据所述第一位置和所述第二位置,将位于所述第二范围内的所述第二虚拟对象确定为候选虚拟对象;
将满足选取条件的所述候选虚拟对象确定为所述目标虚拟对象,所述选取 条件包括如下至少一项:与所述第一虚拟对象之间的距离最小、所述目标属性值最低、属于目标阵营。
可选地,所述第一确定模块1402,包括:
第二确定单元,用于响应于至少一个所述第二虚拟对象中包含所述主动选择虚拟对象,将所述主动选择虚拟对象确定为所述目标虚拟对象;
第二标记单元,用于通过第二预定方式对所述目标虚拟对象进行标记。
可选地,所述第一控件包括第一自动控制区域和第一主动控制区域,所述第一自动控制区域和所述第一主动控制区域之间不存在交集;
所述装置还包括:
第二确定模块,用于响应于所述第一触发信号对应的触控结束位置位于所述第一自动控制区域,确定所述第一触发信号符合所述自动控制条件。
可选地,所述装置还包括:
第三确定模块,用于响应于所述第一触发信号对应的触控结束位置位于所述第一主动控制区域,确定所述第一触发信号符合主动控制条件;
第四确定模块,用于将所述触控结束位置映射的所述第二虚拟对象确定为主动选择虚拟对象;
第二控制模块,用于控制所述第一虚拟对象使用虚拟道具改变所述主动选择虚拟对象的所述目标属性值。
可选地,所述对局界面中还包含第二控件,所述第二控件用于控制所述第一虚拟对象向其它虚拟对象释放目标技能;
所述装置还包括:
第三控制模块,用于响应于接收到作用于所述第二控件的第二触发信号,控制所述第一虚拟对象对所述目标虚拟对象释放所述目标技能。
可选地,所述第二控件包括第二自动控制区域和第二主动控制区域,所述第二自动控制区域和所述第二主动控制区域之间不存在交集;
所述第三控制模块,包括:
第二获取单元,用于响应于所述第二触发信号对应的触控结束位置位于所述第二自动控制区域,获取所述目标技能的技能释放规则;
控制单元,用于响应于所述目标虚拟对象符合所述技能释放规则,控制所述第一虚拟对象对所述目标虚拟对象释放所述目标技能。
可选地,所述装置还包括:
第五确定模块,用于响应于所述目标虚拟对象的所述目标属性值达到属性值阈值,重新从至少一个所述第二虚拟对象中确定所述目标虚拟对象。
可选地,所述目标虚拟对象的所述目标属性值达到所述属性值阈值包括:所述目标虚拟对象的剩余生命值达到生命值阈值,所述目标虚拟对象的位置位于所述对局界面所显示的范围之外。
综上所述,本申请实施例提供的虚拟对象的控制装置,通过预定方式标记对局界面中的目标虚拟对象,在接收到作用于第一控件的符合自动控制条件的第一触发信号时,控制第一虚拟对象改变目标虚拟对象的目标属性值;在控制第一虚拟对象改变目标虚拟对象的目标属性值之前标记出目标虚拟对象,使用户在未进行操作时,就可以通过标记知悉操作所作用的目标虚拟对象,若目标虚拟对象与期望对象不符,可以通过其他操作更改目标虚拟对象,若目标虚拟对象与期望对象相符,则可以通过第一控件快速执行操作,提高了对虚拟对象的控制效率和控制准确度;同时可以避免在执行操作过程中还需要确认和标记操作对象,从而可以降低操作执行时延,进而提高人机交互效率。
请参考图15,其示出了本申请一个实施例提供的终端的结构框图。通常,终端1500包括有:处理器1501和存储器1502。
处理器1501可以包括一个或多个处理核心,比如4核心处理器、8核心处理器等。处理器1501可以采用数字信号处理(Digital Signal Processing,DSP)、现场可编程门阵列(Field Programmable Gate Array,FPGA)、可编程逻辑阵列(Programmable Logic Array,PLA)中的至少一种硬件形式来实现。处理器1501也可以包括主处理器和协处理器,主处理器是用于对在唤醒状态下的数据进行处理的处理器,也称中央处理器(Central Processing Unit,CPU);协处理器是用于对在待机状态下的数据进行处理的低功耗处理器。在一些实施例中,处理器1501可以集成有图像处理器(Graphics Processing Unit,GPU),GPU用于负责显示屏所需要显示的内容的渲染和绘制。在一些实施例中,处理器1501还可以包括人工智能(Artificial Intelligence,AI)处理器,该AI处理器用于处理有关机器学习的计算操作。
存储器1502可以包括一个或多个计算机可读存储介质,该计算机可读存储 介质可以是非暂态的。存储器1502还可包括高速随机存取存储器,以及非易失性存储器,比如一个或多个磁盘存储设备、闪存存储设备。在一些实施例中,存储器1502中的非暂态的计算机可读存储介质用于存储至少一条指令、至少一段程序、代码集或指令集,该至少一条指令、至少一段程序、代码集或指令集用于被处理器1501所执行以实现本申请中方法实施例提供的虚拟对象的控制方法。
在一些实施例中,终端1500还可选包括有:外围设备接口1503和至少一个外围设备。处理器1501、存储器1502和外围设备接口1503之间可以通过总线或信号线相连。各个外围设备可以通过总线、信号线或电路板与外围设备接口1503相连。具体地,外围设备可以包括:通信接口1504、显示屏1505、音频电路1506、摄像头组件1507、定位组件1508和电源1509中的至少一种。
本领域技术人员可以理解,图15中示出的结构并不构成对终端1500的限定,可以包括比图示更多或更少的组件,或者组合某些组件,或者采用不同的组件布置。
请参考图16,其示出了本申请一个实施例提供的服务器的结构示意图。具体来讲:
所述服务器1600包括中央处理器(Central Processing Unit,CPU)1601、包括随机存取存储器(Random Access Memory,RAM)1602和只读存储器(Read Only Memory,ROM)1603的系统存储器1604,以及连接系统存储器1604和中央处理单元1601的系统总线1605。所述服务器1600还包括帮助计算机内的各个器件之间传输信息的基本输入/输出(Input/Output,I/O)系统1606,和用于存储操作系统1613、应用程序1614和其他程序模块1615的大容量存储设备1607。
所述基本输入/输出系统1606包括有用于显示信息的显示器1608和用于用户输入信息的诸如鼠标、键盘之类的输入设备1609。其中所述显示器1608和输入设备1609都通过连接到系统总线1605的输入输出控制器1610连接到中央处理单元1601。所述基本输入/输出系统1606还可以包括输入输出控制器1610以用于接收和处理来自键盘、鼠标、或电子触控笔等多个其他设备的输入。类似地,输入输出控制器1610还提供输出到显示屏、打印机或其他类型的输出设备。
所述大容量存储设备1607通过连接到系统总线1605的大容量存储控制器(未示出)连接到中央处理单元1601。所述大容量存储设备1607及其相关联的计算机可读介质为服务器1600提供非易失性存储。也就是说,所述大容量存储设备1607可以包括诸如硬盘或者只读光盘(Compact Disc Read-Only Memory,CD-ROM)驱动器之类的计算机可读介质(未示出)。
不失一般性,所述计算机可读介质可以包括计算机存储介质和通信介质。计算机存储介质包括以用于存储诸如计算机可读指令、数据结构、程序模块或其他数据等信息的任何方法或技术实现的易失性和非易失性、可移动和不可移动介质。计算机存储介质包括RAM、ROM、可擦除可编程只读存储器(Erasable Programmable Read Only Memory,EPROM)、闪存或其他固态存储其技术,CD-ROM、数字视频光盘(Digital Video Disc,DVD)或其他光学存储、磁带盒、磁带、磁盘存储或其他磁性存储设备。当然,本领域技术人员可知所述计算机存储介质不局限于上述几种。上述的系统存储器1604和大容量存储设备1607可以统称为存储器。
根据本申请的各种实施例,所述服务器1600还可以通过诸如因特网等网络连接到网络上的远程计算机运行。也即服务器1600可以通过连接在所述系统总线1605上的网络接口单元1611连接到网络1612,或者说,也可以使用网络接口单元1611来连接到其他类型的网络或远程计算机系统(未示出)。
所述存储器还包括至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、至少一段程序、代码集或指令集存储于存储器中,且经配置以由一个或者一个以上处理器执行,以实现上述虚拟对象的控制方法。
在示例性实施例中,还提供了一种计算机设备。该计算机设备可以是终端或服务器。所述计算机设备包括处理器和存储器,所述存储器中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由所述处理器加载并执行以实现上述虚拟对象的控制方法。
本申请实施例还提供了一种计算机可读存储介质,该计算机可读存储介质存储有至少一条指令,所述至少一条指令由所述处理器加载并执行以实现如上 各个实施例所述的虚拟对象的控制方法。
本申请实施例还提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行上述方面的各种可选实现方式中提供的虚拟对象的控制方法。
本领域技术人员应该可以意识到,在上述一个或多个示例中,本申请实施例所描述的功能可以用硬件、软件、固件或它们的任意组合来实现。当使用软件实现时,可以将这些功能存储在计算机可读存储介质中或者作为计算机可读存储介质上的一个或多个指令或代码进行传输。计算机可读存储介质包括计算机存储介质和通信介质,其中通信介质包括便于从一个地方向另一个地方传送计算机程序的任何介质。存储介质可以是通用或专用计算机能够存取的任何可用介质。
以上所述仅为本申请的可选实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (22)

  1. 一种虚拟对象的控制方法,其特征在于,所述方法应用于终端,所述方法包括:
    显示对局界面,所述对局界面中包含第一虚拟对象、至少一个第二虚拟对象以及第一控件,所述第一虚拟对象和所述第二虚拟对象位于虚拟世界中,且所述第一虚拟对象和所述第二虚拟对象属于不同阵营,所述第一控件用于控制所述第一虚拟对象使用虚拟道具改变其它虚拟对象的目标属性值;
    从至少一个所述第二虚拟对象中确定出目标虚拟对象,并通过预定方式对所述目标虚拟对象进行标记;
    接收作用于所述第一控件的第一触发信号;
    响应于所述第一触发信号符合自动控制条件,控制所述第一虚拟对象使用虚拟道具改变所述目标虚拟对象的所述目标属性值。
  2. 根据权利要求1所述的方法,其特征在于,所述从至少一个所述第二虚拟对象中确定出目标虚拟对象,并通过预定方式对所述目标虚拟对象进行标记,包括:
    响应于至少一个所述第二虚拟对象中不包含主动选择虚拟对象,获取所述第一虚拟对象的第一对象信息,以及至少一个所述第二虚拟对象的第二对象信息,其中,所述主动选择虚拟对象是通过所述第一控件选取的虚拟对象,对象信息用于表征虚拟对象的状态和位置;
    根据所述第一对象信息和所述第二对象信息,从至少一个所述第二虚拟对象中确定出所述目标虚拟对象;
    通过第一预定方式对所述目标虚拟对象进行标记。
  3. 根据权利要求2所述的方法,其特征在于,所述第一对象信息中包含第一位置和第一范围,所述第二对象信息中包含第二位置,所述第一位置为所述第一虚拟对象在所述虚拟世界中所处的位置,所述第二位置为所述第二虚拟对象在所述虚拟世界中所处的位置,所述第一范围是所述虚拟道具的使用范围;
    所述根据所述第一对象信息和所述第二对象信息,从至少一个所述第二虚拟对象中确定出所述目标虚拟对象,包括:
    根据所述第一范围确定第二范围,所述第二范围大于所述第一范围;
    根据所述第一位置和所述第二位置,将位于所述第二范围内的所述第二虚拟对象确定为候选虚拟对象;
    将满足选取条件的所述候选虚拟对象确定为所述目标虚拟对象,所述选取条件包括如下至少一项:与所述第一虚拟对象之间的距离最小、所述目标属性值最低、属于目标阵营。
  4. 根据权利要求2所述的方法,其特征在于,所述从至少一个所述第二虚拟对象中确定出目标虚拟对象,并通过预定方式对所述目标虚拟对象进行标记,包括:
    响应于至少一个所述第二虚拟对象中包含所述主动选择虚拟对象,将所述主动选择虚拟对象确定为所述目标虚拟对象;
    通过第二预定方式对所述目标虚拟对象进行标记。
  5. 根据权利要求1至4任一所述的方法,其特征在于,所述第一控件包括第一自动控制区域和第一主动控制区域,所述第一自动控制区域和所述第一主动控制区域之间不存在交集;
    所述接收作用于所述第一控件的第一触发信号之后,所述方法包括:
    响应于所述第一触发信号对应的触控结束位置位于所述第一自动控制区域,确定所述第一触发信号符合所述自动控制条件。
  6. 根据权利要求5所述的方法,其特征在于,所述接收作用于所述第一控件的第一触发信号之后,所述方法还包括:
    响应于所述第一触发信号对应的触控结束位置位于所述第一主动控制区域,确定所述第一触发信号符合主动控制条件;
    将所述触控结束位置映射的所述第二虚拟对象确定为主动选择虚拟对象;
    控制所述第一虚拟对象使用虚拟道具改变所述主动选择虚拟对象的所述目标属性值。
  7. 根据权利要求1至4任一所述的方法,其特征在于,所述对局界面中还包 含第二控件,所述第二控件用于控制所述第一虚拟对象向其它虚拟对象释放目标技能;
    所述从至少一个所述第二虚拟对象中确定出目标虚拟对象,并通过预定方式对所述目标虚拟对象进行标记之后,所述方法还包括:
    响应于接收到作用于所述第二控件的第二触发信号,控制所述第一虚拟对象对所述目标虚拟对象释放所述目标技能。
  8. 根据权利要求7所述的方法,其特征在于,所述第二控件包括第二自动控制区域和第二主动控制区域,所述第二自动控制区域和所述第二主动控制区域之间不存在交集;
    所述响应于接收到作用于所述第二控件的第二触发信号,控制所述第一虚拟对象对所述目标虚拟对象释放所述目标技能,包括:
    响应于所述第二触发信号对应的触控结束位置位于所述第二自动控制区域,获取所述目标技能的技能释放规则;
    响应于所述目标虚拟对象符合所述技能释放规则,控制所述第一虚拟对象对所述目标虚拟对象释放所述目标技能。
  9. 根据权利要求1至4任一所述的方法,其特征在于,所述方法还包括:
    响应于所述目标虚拟对象的所述目标属性值达到属性值阈值,重新从至少一个所述第二虚拟对象中确定所述目标虚拟对象。
  10. 根据权利要求9所述的方法,其特征在于,所述目标虚拟对象的所述目标属性值达到所述属性值阈值包括:所述目标虚拟对象的剩余生命值达到生命值阈值,所述目标虚拟对象的位置位于所述对局界面所显示的范围之外。
  11. 一种虚拟对象的控制装置,其特征在于,所述装置包括:
    显示模块,用于显示对局界面,所述对局界面中包含第一虚拟对象、至少一个第二虚拟对象以及第一控件,所述第一虚拟对象和所述第二虚拟对象位于虚拟世界中,且所述第一虚拟对象和所述第二虚拟对象属于不同阵营,所述第一控件用于控制所述第一虚拟对象使用虚拟道具改变其它虚拟对象的目标属性 值;
    第一确定模块,用于从至少一个所述第二虚拟对象中确定出目标虚拟对象,并通过预定方式对所述目标虚拟对象进行标记;
    接收模块,用于接收作用于所述第一控件的第一触发信号;
    第一控制模块,用于响应于所述第一触发信号符合自动控制条件,控制所述第一虚拟对象使用虚拟道具改变所述目标虚拟对象的所述目标属性值。
  12. 根据权利要求11所述的装置,其特征在于,所述第一确定模块,包括:
    第一获取单元,用于响应于至少一个所述第二虚拟对象中不包含主动选择虚拟对象,获取所述第一虚拟对象的第一对象信息,以及至少一个所述第二虚拟对象的第二对象信息,其中,所述主动选择虚拟对象是通过所述第一控件选取的虚拟对象,对象信息用于表征虚拟对象的状态和位置;
    第一确定单元,用于根据所述第一对象信息和所述第二对象信息,从至少一个所述第二虚拟对象中确定出所述目标虚拟对象;
    第一标记单元,用于通过第一预定方式对所述目标虚拟对象进行标记。
  13. 根据权利要求12所述的装置,其特征在于,所述第一对象信息中包含第一位置和第一范围,所述第二对象信息中包含第二位置,所述第一位置为所述第一虚拟对象在所述虚拟世界中所处的位置,所述第二位置为所述第二虚拟对象在所述虚拟世界中所处的位置,所述第一范围是所述虚拟道具的使用范围;
    所述第一确定单元,还用于:
    根据所述第一范围确定第二范围,所述第二范围大于所述第一范围;
    根据所述第一位置和所述第二位置,将位于所述第二范围内的所述第二虚拟对象确定为候选虚拟对象;
    将满足选取条件的所述候选虚拟对象确定为所述目标虚拟对象,所述选取条件包括如下至少一项:与所述第一虚拟对象之间的距离最小、所述目标属性值最低、属于目标阵营。
  14. 根据权利要求12所述的装置,其特征在于,所述第一确定模块,包括:
    第二确定单元,用于响应于至少一个所述第二虚拟对象中包含所述主动选 择虚拟对象,将所述主动选择虚拟对象确定为所述目标虚拟对象;
    第二标记单元,用于通过第二预定方式对所述目标虚拟对象进行标记。
  15. 根据权利要求11至14任一所述的装置,其特征在于,所述第一控件包括第一自动控制区域和第一主动控制区域,所述第一自动控制区域和所述第一主动控制区域之间不存在交集;
    所述装置还包括:
    第二确定模块,用于响应于所述第一触发信号对应的触控结束位置位于所述第一自动控制区域,确定所述第一触发信号符合所述自动控制条件。
  16. 根据权利要求15所述的装置,其特征在于,所述装置还包括:
    第三确定模块,用于响应于所述第一触发信号对应的触控结束位置位于所述第一主动控制区域,确定所述第一触发信号符合主动控制条件;
    第四确定模块,用于将所述触控结束位置映射的所述第二虚拟对象确定为主动选择虚拟对象;
    第二控制模块,用于控制所述第一虚拟对象使用虚拟道具改变所述主动选择虚拟对象的所述目标属性值。
  17. 根据权利要求11至14任一所述的装置,其特征在于,所述对局界面中还包含第二控件,所述第二控件用于控制所述第一虚拟对象向其它虚拟对象释放目标技能;
    所述装置还包括:
    第三控制模块,用于响应于接收到作用于所述第二控件的第二触发信号,控制所述第一虚拟对象对所述目标虚拟对象释放所述目标技能。
  18. 根据权利要求17所述的装置,其特征在于,所述第二控件包括第二自动控制区域和第二主动控制区域,所述第二自动控制区域和所述第二主动控制区域之间不存在交集;
    所述第三控制模块,包括:
    第二获取单元,用于响应于所述第二触发信号对应的触控结束位置位于所 述第二自动控制区域,获取所述目标技能的技能释放规则;
    控制单元,用于响应于所述目标虚拟对象符合所述技能释放规则,控制所述第一虚拟对象对所述目标虚拟对象释放所述目标技能。
  19. 根据权利要求11至14任一所述的装置,其特征在于,所述装置还包括:
    第五确定模块,用于响应于所述目标虚拟对象的所述目标属性值达到属性值阈值,重新从至少一个所述第二虚拟对象中确定所述目标虚拟对象。
  20. 根据权利要求19所述的装置,其特征在于,所述目标虚拟对象的所述目标属性值达到所述属性值阈值包括:所述目标虚拟对象的剩余生命值达到生命值阈值,所述目标虚拟对象的位置位于所述对局界面所显示的范围之外。
  21. 一种计算机设备,其特征在于,所述计算机设备包括处理器和存储器,所述存储器中存储有至少一段程序,所述至少一段程序由所述处理器加载并执行以实现如权利要求1至10任一项所述的虚拟对象的控制方法。
  22. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储有至少一段程序,所述至少一段程序由处理器加载并执行以实现如权利要求1至10任一项所述的虚拟对象的控制方法。
PCT/CN2021/079592 2020-04-23 2021-03-08 虚拟对象的控制方法、装置、设备及存储介质 WO2021213026A1 (zh)

Priority Applications (7)

Application Number Priority Date Filing Date Title
AU2021240132A AU2021240132A1 (en) 2020-04-23 2021-03-08 Virtual object control method and apparatus, device, and storage medium
KR1020217035074A KR20210143300A (ko) 2020-04-23 2021-03-08 가상 객체 제어 방법 및 장치, 디바이스, 및 저장 매체
EP21778333.1A EP3936207A4 (en) 2020-04-23 2021-03-08 METHOD AND APPARATUS FOR CONTROLLING VIRTUAL OBJECT, DEVICE AND STORAGE MEDIA
CA3133467A CA3133467A1 (en) 2020-04-23 2021-03-08 Virtual object control method and apparatus, device, and storage medium
SG11202111219TA SG11202111219TA (en) 2020-04-23 2021-03-08 Virtual object control method and apparatus, device, and storage medium
JP2021566600A JP7476235B2 (ja) 2020-04-23 2021-03-08 仮想オブジェクトの制御方法、装置、デバイス及びコンピュータプログラム
US17/530,382 US20220072428A1 (en) 2020-04-23 2021-11-18 Virtual object control method and apparatus, device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010328506.0 2020-04-23
CN202010328506.0A CN111589126B (zh) 2020-04-23 2020-04-23 虚拟对象的控制方法、装置、设备及存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/530,382 Continuation US20220072428A1 (en) 2020-04-23 2021-11-18 Virtual object control method and apparatus, device, and storage medium

Publications (1)

Publication Number Publication Date
WO2021213026A1 true WO2021213026A1 (zh) 2021-10-28

Family

ID=72183258

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/079592 WO2021213026A1 (zh) 2020-04-23 2021-03-08 虚拟对象的控制方法、装置、设备及存储介质

Country Status (8)

Country Link
US (1) US20220072428A1 (zh)
EP (1) EP3936207A4 (zh)
KR (1) KR20210143300A (zh)
CN (1) CN111589126B (zh)
AU (1) AU2021240132A1 (zh)
CA (1) CA3133467A1 (zh)
SG (1) SG11202111219TA (zh)
WO (1) WO2021213026A1 (zh)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111249730B (zh) * 2020-01-15 2021-08-24 腾讯科技(深圳)有限公司 虚拟对象的控制方法、装置、设备及可读存储介质
CN111589126B (zh) * 2020-04-23 2023-07-04 腾讯科技(深圳)有限公司 虚拟对象的控制方法、装置、设备及存储介质
US11731037B2 (en) * 2020-09-11 2023-08-22 Riot Games, Inc. Rapid target selection with priority zones
CN112121428B (zh) * 2020-09-18 2023-03-24 腾讯科技(深圳)有限公司 虚拟角色对象的控制方法和装置及存储介质
CN112516583A (zh) * 2020-12-11 2021-03-19 网易(杭州)网络有限公司 游戏中的数据处理方法、装置以及电子终端
CN112546627B (zh) * 2020-12-22 2024-04-09 网易(杭州)网络有限公司 路线指引方法、装置、存储介质及计算机设备
CN112494955B (zh) * 2020-12-22 2023-10-03 腾讯科技(深圳)有限公司 虚拟对象的技能释放方法、装置、终端及存储介质
CN113797536B (zh) * 2021-10-08 2023-06-23 腾讯科技(深圳)有限公司 虚拟场景中对象的控制方法、装置、设备及存储介质
CN114185434A (zh) * 2021-12-09 2022-03-15 连尚(新昌)网络科技有限公司 针对虚拟对象的信息处理方法及装置
CN114870393A (zh) * 2022-04-14 2022-08-09 北京字跳网络技术有限公司 一种技能释放方法、装置、计算机设备及存储介质
CN114860148B (zh) * 2022-04-19 2024-01-16 北京字跳网络技术有限公司 一种交互方法、装置、计算机设备及存储介质
CN117618919A (zh) * 2022-08-12 2024-03-01 腾讯科技(成都)有限公司 虚拟道具的改造处理方法、装置、电子设备及存储介质

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100009733A1 (en) * 2008-07-13 2010-01-14 Sony Computer Entertainment America Inc. Game aim assist
CN107398071A (zh) * 2017-07-19 2017-11-28 NetEase (Hangzhou) Network Co., Ltd. Game target selection method and apparatus
WO2019044131A1 (ja) * 2017-09-04 2019-03-07 Bandai Co., Ltd. Game device, program, and game system
CN110064193A (zh) * 2019-04-29 2019-07-30 NetEase (Hangzhou) Network Co., Ltd. Manipulation control method and apparatus for a virtual object in a game, and mobile terminal
CN110413171A (zh) * 2019-08-08 2019-11-05 Tencent Technology (Shenzhen) Company Limited Method, apparatus, device, and medium for controlling a virtual object to perform a quick operation
CN110448891A (zh) * 2019-08-08 2019-11-15 Tencent Technology (Shenzhen) Company Limited Method and apparatus for controlling a virtual object to operate a remote virtual prop, and storage medium
CN110743168A (zh) * 2019-10-21 2020-02-04 Tencent Technology (Shenzhen) Company Limited Virtual object control method in a virtual scene, computer device, and storage medium
CN111589126A (zh) * 2020-04-23 2020-08-28 Tencent Technology (Shenzhen) Company Limited Virtual object control method and apparatus, device, and storage medium

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3685836B2 (ja) * 1995-02-28 2005-08-24 Namco Ltd. Three-dimensional shooting game device
JP2001149640A (ja) * 1999-09-16 2001-06-05 Sega Corp Game machine, game processing method, and recording medium storing a program
US10659288B2 (en) * 2013-02-21 2020-05-19 Gree, Inc. Method for controlling server device, recording medium, server device, terminal device, and system
US9205337B2 (en) * 2013-03-04 2015-12-08 Gree, Inc. Server device, method for controlling the same, computer readable recording medium, and game system
JP6661275B2 (ja) * 2015-03-05 2020-03-11 Bandai Namco Entertainment Inc. Program and server system
CN104915117B (zh) * 2015-06-16 2017-03-22 Shenzhen Tencent Computer Systems Co., Ltd. Method and apparatus for controlling interaction with a virtual target
JP6632819B2 (ja) * 2015-06-30 2020-01-22 Bandai Namco Entertainment Inc. Program, game device, and server system
US20170072317A1 (en) * 2015-09-16 2017-03-16 Gree, Inc. Non-transitory computer readable medium, method of controlling a game, and information processing device
JP5911632B1 (ja) * 2015-10-05 2016-04-27 Gree, Inc. Program, game control method, and information processing device
JPWO2018225163A1 (ja) * 2017-06-06 2019-06-27 Square Enix Co., Ltd. Video game processing program and video game processing system
CN107583271B (zh) * 2017-08-22 2020-05-22 NetEase (Hangzhou) Network Co., Ltd. Interaction method and apparatus for selecting a target in a game
CN107837529B (zh) * 2017-11-15 2019-08-27 Tencent Technology (Shanghai) Co., Ltd. Object selection method and apparatus, terminal, and storage medium
CN108310771A (zh) * 2018-01-16 2018-07-24 Tencent Technology (Shenzhen) Company Limited Task execution method and apparatus, storage medium, and electronic apparatus
KR101975542B1 (ko) * 2018-11-07 2019-05-07 Netmarble Corporation Method and apparatus for providing a game strategy guide
CN109865282B (zh) * 2019-03-05 2020-03-17 NetEase (Hangzhou) Network Co., Ltd. Information processing method and apparatus in a mobile terminal, medium, and electronic device
CN110141864B (zh) * 2019-04-30 2022-08-23 Shenzhen Tencent Domain Computer Network Co., Ltd. Automatic game testing method and apparatus, and terminal
CN117482507A (zh) * 2019-07-19 2024-02-02 Tencent Technology (Shenzhen) Company Limited Method, apparatus, and terminal for sending reminder information in a multiplayer online battle program

Also Published As

Publication number Publication date
AU2021240132A1 (en) 2021-11-11
EP3936207A4 (en) 2022-07-06
JP2022533051A (ja) 2022-07-21
CA3133467A1 (en) 2021-10-23
CN111589126B (zh) 2023-07-04
KR20210143300A (ko) 2021-11-26
SG11202111219TA (en) 2021-11-29
EP3936207A1 (en) 2022-01-12
US20220072428A1 (en) 2022-03-10
CN111589126A (zh) 2020-08-28

Similar Documents

Publication Publication Date Title
WO2021213026A1 (zh) Virtual object control method and apparatus, device, and storage medium
WO2021208614A1 (zh) Virtual object control method and apparatus, device, and storage medium
WO2021244322A1 (zh) Method and apparatus for aiming at a virtual object, device, and storage medium
CN111672116B (zh) Method and apparatus for controlling a virtual object to release a skill, terminal, and storage medium
US20230068653A1 (en) Method and apparatus for controlling virtual object to use virtual prop, terminal, and medium
CN112138384B (zh) Method and apparatus for using a virtual throwing prop, terminal, and storage medium
US11931653B2 (en) Virtual object control method and apparatus, terminal, and storage medium
US20220379214A1 (en) Method and apparatus for a control interface in a virtual environment
WO2022156486A1 (zh) Virtual prop placement method and apparatus, terminal, storage medium, and program product
KR102645535B1 (ko) Method and apparatus for controlling a virtual object in a virtual scene, device, and storage medium
WO2022227958A1 (zh) Virtual vehicle display method and apparatus, device, and storage medium
WO2023142617A1 (zh) Ray display method and apparatus based on a virtual scene, device, and storage medium
KR20220042299A (ko) Method and apparatus for displaying a picture of a virtual environment, device, and medium
JP2023164787A (ja) Virtual environment screen display method, apparatus, device, and computer program
JP7476235B2 (ja) Virtual object control method, apparatus, device, and computer program
CN114210062A (zh) Virtual prop usage method and apparatus, terminal, storage medium, and program product

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 20217035074

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2021778333

Country of ref document: EP

Effective date: 20211008

ENP Entry into the national phase

Ref document number: 2021566600

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2021240132

Country of ref document: AU

Date of ref document: 20210308

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21778333

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE