CN112402969A - Virtual object control method, device, equipment and storage medium in virtual scene - Google Patents


Info

Publication number
CN112402969A
CN112402969A (application CN202011306335.8A / CN202011306335A); granted publication CN112402969B
Authority
CN
China
Prior art keywords
virtual object
virtual
target
distance
response
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011306335.8A
Other languages
Chinese (zh)
Other versions
CN112402969B (en)
Inventor
王扬
张丽杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202011306335.8A priority Critical patent/CN112402969B/en
Publication of CN112402969A publication Critical patent/CN112402969A/en
Application granted granted Critical
Publication of CN112402969B publication Critical patent/CN112402969B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/55: Controlling game characters or game objects based on the game progress
    • A63F13/57: Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
    • A63F2250/00: Miscellaneous game characteristics
    • A63F2250/30: Miscellaneous game characteristics with a three-dimensional image
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60: Methods for processing data by generating or executing the game program
    • A63F2300/66: Methods for processing data by generating or executing the game program for rendering three dimensional images

Abstract

The embodiment of the application discloses a method, a device, equipment and a storage medium for controlling a virtual object in a virtual scene, belonging to the technical field of virtual scenes. The method comprises the following steps: locking a target virtual object in response to a first virtual object being equipped with a first virtual prop and the distance between the target virtual object and the first virtual object being less than or equal to a first distance; in response to receiving a target operation, controlling the first virtual object to move towards the target virtual object; and in response to the first virtual object moving to within a second distance of the target virtual object, controlling the first virtual object to act on the target virtual object using the first virtual prop. The embodiment of the application can reduce the operation difficulty for a virtual object equipped with the first virtual prop, thereby shortening the duration of a single virtual scene session and saving the battery power and data traffic consumed by the terminal.

Description

Virtual object control method, device, equipment and storage medium in virtual scene
Technical Field
The present application relates to the field of virtual scene technologies, and in particular, to a method, an apparatus, a device, and a storage medium for controlling a virtual object in a virtual scene.
Background
Currently, in game applications that provide virtual props, for example in first-person shooter games, the virtual props may include props for close combat or props for ranged combat.
In the related art, because the attack range of a close-combat virtual prop is small, a user needs to manually control the virtual object to move near an attack target, then adjust the virtual object to face the attack target and use the close-combat virtual prop, so as to damage the attack target.
However, because the related art requires the user to control the virtual object to move near the attack target and adjust its orientation, a user of a close-combat virtual prop spends a long time seeking an attack opportunity, which wastes resources such as battery power and data traffic.
Disclosure of Invention
The embodiment of the application provides a method, a device, equipment and a storage medium for controlling a virtual object in a virtual scene, which can reduce the operational complexity for a virtual object equipped with a specified virtual prop, thereby reducing the waste of resources such as battery power and data traffic. The technical scheme is as follows:
in one aspect, an embodiment of the present application provides a method for controlling a virtual object in a virtual scene, where the method includes:
in response to a first virtual object being equipped with a first virtual prop and the distance between a target virtual object and the first virtual object being less than or equal to a first distance, locking the target virtual object;
in response to receiving a target operation, controlling the first virtual object to move towards the target virtual object; the target operation is an operation using the first virtual prop;
in response to the first virtual object moving to a distance less than or equal to a second distance from the target virtual object, controlling the first virtual object to act on the target virtual object using the first virtual prop.
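The lock, move, and act steps above can be sketched as a small control loop. This is a minimal illustrative sketch, not the patent's implementation: the class, function names, and the concrete thresholds `FIRST_DISTANCE` and `SECOND_DISTANCE` are assumptions chosen for the example.

```python
import math
from dataclasses import dataclass

# Illustrative thresholds; the patent leaves the first and second distances unspecified.
FIRST_DISTANCE = 10.0   # lock-on radius
SECOND_DISTANCE = 1.5   # range at which the prop can act on the target

@dataclass
class VirtualObject:
    x: float
    y: float

def distance(a: VirtualObject, b: VirtualObject) -> float:
    return math.hypot(a.x - b.x, a.y - b.y)

def try_lock(first: VirtualObject, target: VirtualObject) -> bool:
    """Lock the target when it is within the first distance."""
    return distance(first, target) <= FIRST_DISTANCE

def step_towards(first: VirtualObject, target: VirtualObject, speed: float) -> None:
    """Move the first virtual object one step towards the locked target."""
    d = distance(first, target)
    if d == 0:
        return
    step = min(speed, d)
    first.x += (target.x - first.x) / d * step
    first.y += (target.y - first.y) / d * step

def control_loop(first: VirtualObject, target: VirtualObject, speed: float = 2.0) -> bool:
    """Lock, auto-move on the target operation, then act; True once the prop is applied."""
    if not try_lock(first, target):
        return False
    # Target operation received: move automatically towards the locked target.
    while distance(first, target) > SECOND_DISTANCE:
        step_towards(first, target, speed)
    # Within the second distance: act on the target with the first virtual prop.
    return True
```

In practice the loop would run once per frame and the "act" step would trigger the prop's effect; here it simply returns once the second distance is reached.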
In one aspect, an embodiment of the present application provides a method for controlling a virtual object in a virtual scene, where the method includes:
in response to the first virtual object being equipped with the first virtual item, presenting a virtual scene screen;
in response to the distance between the target virtual object and the first virtual object being less than or equal to a first distance, displaying a target-locking icon superimposed at the position of the target virtual object; the target-locking icon indicates that the target virtual object is locked;
in response to receiving a target operation, controlling the first virtual object to move towards the target virtual object; the target operation is an operation using the first virtual prop;
in response to the first virtual object moving to a distance less than or equal to a second distance from the target virtual object, controlling the first virtual object to act on the target virtual object using the first virtual prop.
In one aspect, an embodiment of the present application provides a method for controlling a virtual object in a virtual scene, where the method includes:
in response to the first virtual object not being equipped with the first virtual prop, presenting a virtual scene picture from a first-person perspective;
in response to receiving a first operation, displaying the virtual scene picture from a third-person perspective; the first operation is used to control the first virtual object to equip the first virtual prop;
in response to receiving a target operation and the distance between the first virtual object and the target virtual object being less than or equal to a second distance, controlling the first virtual object to act on the target virtual object using the first virtual prop; the target operation is an operation using the first virtual prop.
On the other hand, an embodiment of the present application provides an apparatus for controlling a virtual object in a virtual scene, where the apparatus includes:
a target locking module, configured to lock a target virtual object in response to the first virtual object being equipped with a first virtual prop and the distance between the target virtual object and the first virtual object being less than or equal to a first distance;
an object moving module, configured to control the first virtual object to move towards the target virtual object in response to receiving a target operation; the target operation is an operation using the first virtual prop;
and a prop using module, configured to control the first virtual object to act on the target virtual object using the first virtual prop in response to the first virtual object moving to within the second distance of the target virtual object.
In one possible implementation, the target locking module includes:
a set acquisition submodule, configured to acquire a candidate object set; the candidate object set is a set of second virtual objects whose distance from the position of the first virtual object at the current moment is less than or equal to the first distance;
and a target determining submodule, configured to determine the target virtual object from the candidate object set based on target attribute information of the virtual objects in the candidate object set.
In one possible implementation, in response to the target attribute information containing distance information, the distance information is used to indicate a distance between the corresponding virtual object and the first virtual object;
the target determination submodule includes:
a first target determining unit, configured to use the second virtual object closest to the first virtual object in the candidate object set as the target virtual object.
In one possible implementation, in response to the target attribute information including a first attribute value;
the target determination submodule includes:
a second target determining unit, configured to use the second virtual object with the largest or smallest corresponding first attribute value in the candidate object set as the target virtual object.
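The candidate-set acquisition and the two target-selection rules above (nearest object, or extreme first attribute value) can be sketched as follows. This is an illustrative sketch: the dictionary layout and the attribute name `hp`, standing in for the patent's unspecified first attribute value, are assumptions.

```python
# Candidate set: second virtual objects within the first distance of the first object.
def candidate_set(first_pos, objects, first_distance):
    return [o for o in objects
            if ((o["x"] - first_pos[0]) ** 2 + (o["y"] - first_pos[1]) ** 2) ** 0.5
            <= first_distance]

# Rule 1: target = the candidate closest to the first virtual object.
def pick_nearest(first_pos, candidates):
    return min(candidates,
               key=lambda o: (o["x"] - first_pos[0]) ** 2 + (o["y"] - first_pos[1]) ** 2)

# Rule 2: target = the candidate with the smallest (or largest) first attribute value.
def pick_by_attribute(candidates, attr="hp", lowest=True):
    return (min if lowest else max)(candidates, key=lambda o: o[attr])
```

A game might combine both rules, e.g. preferring the lowest-health candidate and breaking ties by distance; the patent leaves the exact policy open.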
In one possible implementation, the apparatus further includes:
a first picture display module, configured to display a virtual scene picture from a first-person perspective in response to the first virtual object not being equipped with the first virtual prop, or in response to the first virtual object being equipped with the first virtual prop and the first virtual prop being in a throwable state;
and a second picture display module, configured to display a virtual scene picture from a third-person perspective in response to the first virtual object being equipped with the first virtual prop and the first virtual prop being in a non-throwable state.
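The perspective rule described by these two display modules reduces to a single condition. A minimal sketch, with the state flags and view names assumed for illustration:

```python
# First-person view unless the first prop is equipped and in a non-throwable state.
def camera_view(equipped: bool, throwable: bool) -> str:
    if equipped and not throwable:
        return "third_person"
    return "first_person"
```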
In one possible implementation, the prop usage module includes:
and a prop using submodule, configured to control the first virtual object to act on the target virtual object using the first virtual prop in response to the distance between the target virtual object and the first virtual object being less than or equal to the second distance and there being no occlusion between the first virtual object and the target virtual object.
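The occlusion condition amounts to a line-of-sight test between the two objects. The sketch below is a deliberate simplification of what a game engine would do with a physics raycast: obstacles are modeled as axis-aligned rectangles and the segment is point-sampled, both assumptions made for illustration.

```python
def occluded(p1, p2, obstacles, samples=64):
    """True if any obstacle rectangle covers a sampled point on the segment p1-p2."""
    for i in range(samples + 1):
        t = i / samples
        x = p1[0] + (p2[0] - p1[0]) * t
        y = p1[1] + (p2[1] - p1[1]) * t
        for (xmin, ymin, xmax, ymax) in obstacles:
            if xmin <= x <= xmax and ymin <= y <= ymax:
                return True
    return False

def can_apply_prop(first_pos, target_pos, obstacles, second_distance):
    """The prop acts only when the target is in range and line of sight is clear."""
    in_range = ((target_pos[0] - first_pos[0]) ** 2 +
                (target_pos[1] - first_pos[1]) ** 2) ** 0.5 <= second_distance
    return in_range and not occluded(first_pos, target_pos, obstacles)
```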
In one possible implementation, the object moving module includes:
an object moving submodule, configured to control the first virtual object to move towards the target virtual object at a first speed in response to receiving the target operation; the first speed is greater than the moving speed of the first virtual object when the target operation is not received.
In one possible implementation, the object moving module includes:
and the action execution sub-module is used for controlling the first virtual object to execute a first action in the process of moving to the target virtual object in response to the target operation.
In a possible implementation, the first action is used to trigger deduction of a first attribute value of a third virtual object; the third virtual object is a virtual object other than the target virtual object that is within the range of effect of the first action.
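The first action's area effect described above can be sketched as follows. The attribute name `hp` (standing in for the unspecified first attribute value), the radius, and the deduction amount are all illustrative assumptions.

```python
# Objects other than the locked target that lie within the action's radius
# have their first attribute value deducted.
def apply_area_action(first_pos, target, others, radius, deduction):
    affected = []
    for obj in others:
        if obj is target:
            continue
        d = ((obj["x"] - first_pos[0]) ** 2 + (obj["y"] - first_pos[1]) ** 2) ** 0.5
        if d <= radius:
            obj["hp"] -= deduction
            affected.append(obj)
    return affected
```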
In one possible implementation, while the first virtual object is moving, the apparatus further includes:
a rolling action execution module, configured to control the first virtual object to perform a rolling action in response to receiving a touch operation on a target control while the first virtual object is equipped with the first virtual prop and is moving; the target control is a control for controlling rolling.
In one possible implementation, the tumbling action performing module includes:
a direction obtaining submodule, configured to, in response to receiving a touch operation on the target control while the first virtual object is equipped with the first virtual prop and is moving, obtain the operation direction of a touch operation performed on a direction control, where the direction control is used to control the moving direction of the first virtual object;
a type determining submodule, configured to determine the roll type of the rolling action based on the operation direction;
and an action control submodule, configured to control the first virtual object to perform the rolling action based on the roll type.
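The direction-to-roll-type mapping can be sketched as a simple angle bucketing. The four-way split and the roll-type names are assumptions for illustration; the patent only requires that the roll type follow from the operation direction.

```python
# Map the direction-control input angle (degrees, 0 = forward, clockwise)
# to one of four roll types.
def roll_type(angle_deg: float) -> str:
    a = angle_deg % 360
    if a >= 315 or a < 45:
        return "forward_roll"
    if a < 135:
        return "right_roll"
    if a < 225:
        return "backward_roll"
    return "left_roll"
```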
On the other hand, an embodiment of the present application provides an apparatus for controlling a virtual object in a virtual scene, where the apparatus includes:
a picture display module, configured to present a virtual scene picture in response to the first virtual object being equipped with the first virtual prop;
an icon display module, configured to display a target-locking icon superimposed at the position of the target virtual object in response to the distance between the target virtual object and the first virtual object being less than or equal to a first distance; the target-locking icon indicates that the target virtual object is locked;
an object moving module, configured to control the first virtual object to move towards the target virtual object in response to receiving a target operation; the target operation is an operation using the first virtual prop;
and a prop using module, configured to control the first virtual object to act on the target virtual object using the first virtual prop in response to the first virtual object moving to within the second distance of the target virtual object.
In one possible implementation, the apparatus further includes:
a control switching module, configured to switch a first control to the target control in response to receiving the target operation and before controlling the first virtual object to move towards the target virtual object; the first control is a control for controlling squatting; the target control is a control for controlling rolling.
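The control swap described above (replacing the squat control with the roll control once the target operation is received) can be sketched in a few lines. The control identifiers are illustrative assumptions.

```python
# Swap the squat control for the roll control when the target operation is active.
def active_controls(controls, target_op_received):
    if target_op_received:
        return ["roll" if c == "squat" else c for c in controls]
    return controls
```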
On the other hand, an embodiment of the present application provides an apparatus for controlling a virtual object in a virtual scene, where the apparatus includes:
a first picture display module, configured to display the virtual scene picture from a first-person perspective in response to the first virtual object not being equipped with the first virtual prop;
a second picture display module, configured to display the virtual scene picture from a third-person perspective in response to receiving a first operation; the first operation is used to control the first virtual object to equip the first virtual prop;
and a prop using module, configured to control the first virtual object to act on the target virtual object using the first virtual prop in response to receiving a target operation and the distance between the first virtual object and the target virtual object being less than or equal to a second distance; the target operation is an operation using the first virtual prop.
In one possible implementation, the apparatus further includes:
and a target locking module, configured to, before the first virtual object acts on the target virtual object using the first virtual prop in response to receiving the target operation and the distance between the two being less than or equal to the second distance, control the first virtual object to lock the target virtual object in response to the distance between the target virtual object and the first virtual object being less than or equal to a first distance.
In one possible implementation, the target locking module includes:
a set acquisition submodule, configured to acquire a candidate object set; the candidate object set is a set of second virtual objects whose distance from the position of the first virtual object at the current moment is less than or equal to the first distance;
and a target determining submodule, configured to determine the target virtual object from the candidate object set based on target attribute information of the virtual objects in the candidate object set.
In one possible implementation, in response to the target attribute information containing distance information, the distance information is used to indicate a distance between the corresponding virtual object and the first virtual object;
the target determination submodule includes:
a first target determining unit, configured to use the second virtual object closest to the first virtual object in the candidate object set as the target virtual object.
In one possible implementation, in response to the target attribute information including a first attribute value;
the target determination submodule includes:
a second target determining unit, configured to use the second virtual object with the largest or smallest corresponding first attribute value in the candidate object set as the target virtual object.
In one possible implementation manner, the second screen display module includes:
and a second picture display submodule, configured to display the virtual scene picture from a third-person perspective in response to the first virtual object being equipped with the first virtual prop and the first virtual prop being in a non-throwable state.
In one possible implementation, the apparatus further includes:
and a picture display module, configured to display the virtual scene picture from a first-person perspective in response to the first virtual object being equipped with the first virtual prop and the first virtual prop being in a throwable state.
In one possible implementation, the prop usage module includes:
an object moving submodule, configured to control the first virtual object to move towards the target virtual object in response to receiving the target operation;
and a prop using submodule, configured to control the first virtual object to act on the target virtual object using the first virtual prop in response to the first virtual object moving to within the second distance of the target virtual object.
In one possible implementation, the prop usage submodule includes:
and a prop using unit, configured to control the first virtual object to act on the target virtual object using the first virtual prop in response to the distance between the target virtual object and the first virtual object being less than or equal to the second distance and there being no occlusion between the first virtual object and the target virtual object.
In one possible implementation, the object moving sub-module includes:
an object moving unit, configured to control the first virtual object to move towards the target virtual object at a first speed in response to receiving the target operation; the first speed is greater than the moving speed of the first virtual object when the target operation is not received.
In one possible implementation, the object moving sub-module includes:
and an action execution unit, configured to control the first virtual object to perform a first action while moving towards the target virtual object, in response to receiving the target operation.
In a possible implementation, the first action is used to trigger deduction of a first attribute value of a third virtual object; the third virtual object is a virtual object other than the target virtual object that is within the range of effect of the first action.
In one possible implementation, while the first virtual object is moving, the apparatus further includes:
a rolling execution submodule, configured to control the first virtual object to perform a rolling action in response to receiving a touch operation on a target control while the first virtual object is equipped with the first virtual prop and is moving; the target control is a control for controlling rolling.
In one possible implementation, the rolling execution sub-module includes:
a direction obtaining unit, configured to, in response to receiving a touch operation on the target control while the first virtual object is equipped with the first virtual prop and is moving, obtain the operation direction of a touch operation performed on a direction control, where the direction control is used to control the moving direction of the first virtual object;
a type determination unit, configured to determine the roll type of the rolling action based on the operation direction;
and a roll executing unit, configured to control the first virtual object to perform the rolling action based on the roll type.
In another aspect, an embodiment of the present application provides a terminal, where the terminal includes a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or a set of instructions, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by the processor to implement the method for controlling a virtual object in a virtual scene according to the foregoing aspect.
In another aspect, an embodiment of the present application provides a computer-readable storage medium, where at least one instruction, at least one program, a code set, or a set of instructions is stored in the computer-readable storage medium, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by a processor to implement the method for controlling a virtual object in a virtual scene according to the above aspect.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the terminal reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the terminal executes the virtual object control method in the virtual scene provided in the various optional implementations of the above aspects.
The technical scheme provided by the embodiment of the application has the beneficial effects that at least:
When the first virtual object is equipped with the first virtual prop, a target virtual object within the first distance is automatically locked. When the target operation is received, the first virtual object is controlled to move automatically towards the target virtual object, and when the distance between the first virtual object and the target virtual object is less than or equal to the second distance, the first virtual prop is used on the target virtual object. This reduces the operation difficulty for a virtual object equipped with the first virtual prop and shortens the time a user spends operating the first virtual object to attack the target virtual object with the first virtual prop, thereby reducing the duration of a single virtual scene session and saving the battery power and data traffic consumed by the terminal.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1 is a schematic illustration of an implementation environment provided by an exemplary embodiment of the present application;
FIG. 2 is a schematic illustration of a display interface of a virtual scene provided by an exemplary embodiment of the present application;
FIG. 3 is a schematic diagram illustrating a control flow of a virtual object in a virtual scene provided by an exemplary embodiment of the present application;
FIG. 4 is a schematic diagram illustrating a control flow of a virtual object in a virtual scene provided by an exemplary embodiment of the present application;
FIG. 5 is a schematic diagram illustrating a control flow of a virtual object in a virtual scene provided by an exemplary embodiment of the present application;
FIG. 6 is a flowchart of a method for controlling a flow of virtual objects in a virtual scene according to an exemplary embodiment of the present application;
FIG. 7 is a schematic view of an interface for switching the viewing angles according to the embodiment shown in FIG. 6;
FIG. 8 is a schematic diagram of candidate set acquisition according to the embodiment shown in FIG. 6;
FIG. 9 is a diagram illustrating a first virtual object moving process according to the embodiment shown in FIG. 6;
FIG. 10 is a schematic diagram of the correspondence between operation directions and roll types according to the embodiment shown in FIG. 6;
FIG. 11 is a schematic diagram of a rolling action according to the embodiment shown in FIG. 6;
FIG. 12 is a schematic view of throwing a first virtual prop according to the embodiment of FIG. 6;
fig. 13 is a block diagram illustrating a configuration of a virtual object control apparatus in a virtual scene according to an exemplary embodiment of the present application;
FIG. 14 is a schematic diagram illustrating a computer device according to an exemplary embodiment of the present application;
fig. 15 is a block diagram of a computer device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
Virtual scene: a virtual scene displayed (or provided) when an application program runs on a terminal. The virtual scene may be a simulated environment of the real world, a semi-simulated, semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment. The virtual scene may be any of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene; the following embodiments use a three-dimensional virtual scene as an example, but are not limited thereto. Optionally, the virtual scene may also be used for a virtual scene battle between at least two virtual characters. Optionally, the virtual scene may also be used for a virtual firearm fight between at least two virtual characters. Optionally, the virtual scene may also be used for a fight between at least two virtual characters using virtual firearms within a target area of the virtual scene that may shrink continually over time.
Virtual object: a movable object in a virtual scene. The movable object may be at least one of a virtual character, a virtual animal, or a virtual vehicle. Optionally, when the virtual scene is a three-dimensional virtual scene, the virtual object is a three-dimensional model created using skeletal animation techniques. Each virtual object has its own shape, volume, and orientation in the three-dimensional virtual scene and occupies a portion of the space in the scene.
A virtual scene is typically generated by an application in a computer device such as a terminal and rendered on the terminal's hardware (e.g., a screen). The terminal may be a mobile terminal such as a smartphone, a tablet computer, or an e-book reader; alternatively, the terminal may be a personal computer device such as a notebook computer or a desktop computer.
Virtual prop: a tool that a virtual object can use in the virtual environment, including virtual weapons that can injure other virtual objects (such as a pistol, a rifle, a sniper rifle, a dagger, a knife, a sword, or an axe), supply items such as ammunition, virtual attachments that add attributes when mounted on a designated virtual weapon (such as an extended magazine, a telescopic sight, or a silencer), and defensive items such as a shield, armor, or an armored vehicle.
First-person shooter game: a shooting game that a user plays from a first-person perspective; the picture of the virtual environment in the game is the picture observed from the perspective of the first virtual object. In the game, at least two virtual objects fight a single-round battle in the virtual environment. A virtual object survives in the virtual environment by avoiding attacks launched by other virtual objects and the dangers present in the virtual environment (such as a shrinking poison zone or a swamp); when a virtual object's life value in the virtual environment drops to zero, its life in the virtual environment ends, and the virtual objects that ultimately survive are the winners. Optionally, with the moment the first client joins the battle as the starting time and the moment the last client exits the battle as the ending time, each client may control one or more virtual objects in the virtual environment. Optionally, the competitive modes of the battle may include a solo mode, a two-player team mode, or a multi-player team mode; the battle mode is not limited in the embodiments of the present application.
Referring to fig. 1, a schematic diagram of an implementation environment provided by an embodiment of the present application is shown. The implementation environment may include: a first terminal 110, a server 120, and a second terminal 130.
The first terminal 110 has installed and runs an application 111 that supports a virtual environment, and the application 111 may be a multiplayer online battle program. When the first terminal 110 runs the application 111, a user interface of the application 111 is displayed on the screen of the first terminal 110. The application 111 may be any one of a military simulation program, a Multiplayer Online Battle Arena (MOBA) game, a battle-royale shooting game, and a simulation strategy game (SLG). In the present embodiment, the application 111 is illustrated as a First-Person Shooter (FPS) game. The first terminal 110 is a terminal used by a first user 112, and the first user 112 uses the first terminal 110 to control a first virtual object located in the virtual environment to perform activities; the first virtual object may be referred to as the master virtual object of the first user 112. The activities of the first virtual object include, but are not limited to, at least one of: adjusting body posture, crawling, walking, running, riding, flying, jumping, driving, picking up, shooting, attacking, throwing, and releasing skills. Illustratively, the first virtual object is a first virtual character, such as a simulated character or an animation character.
The second terminal 130 has installed and runs an application 131 that supports a virtual environment, and the application 131 may be a multiplayer online battle program. When the second terminal 130 runs the application 131, a user interface of the application 131 is displayed on the screen of the second terminal 130. The application may be any one of a military simulation program, a MOBA game, a battle-royale shooting game, and an SLG game; in this embodiment, the application 131 is illustrated as an FPS game. The second terminal 130 is a terminal used by a second user 132, and the second user 132 uses the second terminal 130 to control a second virtual object located in the virtual environment to perform activities; the second virtual object may be referred to as the master virtual object of the second user 132. Illustratively, the second virtual object is a second virtual character, such as a simulated character or an animation character.
Optionally, the first virtual object and the second virtual object are in the same virtual world. Optionally, the first virtual object and the second virtual object may belong to the same camp, the same team, the same organization, a friend relationship, or a temporary communication right. Alternatively, the first virtual object and the second virtual object may belong to different camps, different teams, different organizations, or have a hostile relationship.
Optionally, the applications installed on the first terminal 110 and the second terminal 130 are the same, or the applications installed on the two terminals are the same type of application on different operating system platforms (Android or iOS). The first terminal 110 may generally refer to one of a plurality of terminals, and the second terminal 130 may generally refer to another of the plurality of terminals; this embodiment is illustrated using only the first terminal 110 and the second terminal 130. The device types of the first terminal 110 and the second terminal 130 are the same or different, and include at least one of a smartphone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, a laptop computer, and a desktop computer.
Only two terminals are shown in fig. 1, but there are a plurality of other terminals that may access the server 120 in different embodiments. Optionally, one or more terminals are terminals corresponding to the developer, a development and editing platform for supporting the application program in the virtual environment is installed on the terminal, the developer can edit and update the application program on the terminal and transmit the updated application program installation package to the server 120 through a wired or wireless network, and the first terminal 110 and the second terminal 130 can download the application program installation package from the server 120 to update the application program.
The first terminal 110, the second terminal 130, and other terminals are connected to the server 120 through a wireless network or a wired network.
The server 120 includes at least one of a server, a server cluster composed of a plurality of servers, a cloud computing platform, and a virtualization center. The server 120 is used to provide background services for applications that support a three-dimensional virtual environment. Optionally, the server 120 undertakes primary computational work and the terminals undertake secondary computational work; alternatively, the server 120 undertakes the secondary computing work and the terminal undertakes the primary computing work; alternatively, the server 120 and the terminal perform cooperative computing by using a distributed computing architecture.
In one illustrative example, the server 120 includes a memory 121, a processor 122, a user account database 123, a battle service module 124, and a user-oriented Input/Output Interface (I/O Interface) 125. The processor 122 is configured to load instructions stored in the server 120 and to process data in the user account database 123 and the battle service module 124; the user account database 123 is configured to store data of user accounts used by the first terminal 110, the second terminal 130, and other terminals, such as the avatar of a user account, the nickname of a user account, the combat power index of a user account, and the service area where a user account is located; the battle service module 124 is configured to provide a plurality of battle rooms for users to fight in, such as 1V1 battles, 3V3 battles, and 5V5 battles; the user-facing I/O interface 125 is configured to establish communication with the first terminal 110 and/or the second terminal 130 through a wireless network or a wired network to exchange data.
The virtual scene may be a three-dimensional virtual scene, or the virtual scene may also be a two-dimensional virtual scene. Taking the example that the virtual scene is a three-dimensional virtual scene, please refer to fig. 2, which shows a schematic view of a display interface of the virtual scene according to an exemplary embodiment of the present application. As shown in fig. 2, the display interface of the virtual scene includes a scene screen 200, and the scene screen 200 includes a currently controlled virtual object 210, an environment screen 220 of the three-dimensional virtual scene, and a virtual object 240. The virtual object 240 may be a virtual object controlled by a user or a virtual object controlled by an application program corresponding to other terminals.
In fig. 2, the currently controlled virtual object 210 and the virtual object 240 are three-dimensional models in a three-dimensional virtual scene, and the environment picture of the three-dimensional virtual scene displayed in the scene picture 200 is an object observed from the perspective of the currently controlled virtual object 210, for example, as shown in fig. 2, the environment picture 220 of the three-dimensional virtual scene displayed from the perspective of the currently controlled virtual object 210 is the ground 224, the sky 225, the horizon 223, the hill 221, and the factory building 222.
The currently controlled virtual object 210 may release skills, use virtual props, move, and execute specified actions under the control of the user, and virtual objects in the virtual scene may show different three-dimensional models under the user's control. For example, when the screen of the terminal supports touch operation and the scene screen 200 of the virtual scene includes a virtual control, the currently controlled virtual object 210 executes the specified action in the virtual scene and shows the corresponding three-dimensional model when the user touches the virtual control.
By using the virtual object control method in the virtual scene, the terminal may control a virtual object equipped with a specified virtual prop to move toward a target virtual object and use the specified virtual prop. Fig. 3 shows a schematic diagram of a control flow of a virtual object in a virtual scene provided in an exemplary embodiment of the present application. The method may be executed by a computer device, where the computer device may be a terminal or a server, or the computer device may include the terminal and the server. As shown in fig. 3, the computer device may control the virtual object by performing the following steps.
Step 301, in response to the first virtual object being equipped with the first virtual prop and the distance between the target virtual object and the first virtual object being less than or equal to the first distance, locking the target virtual object.
In this embodiment of the application, the terminal detects the distance between the first virtual object and each second virtual object in real time, and when it detects that the distance between the first virtual object and any second virtual object is less than or equal to the first distance, it locks that second virtual object as the target virtual object.
In one possible implementation, in response to the first virtual object being equipped with the first virtual prop, the displayed virtual scene picture contains the first virtual object equipped with the first virtual prop, or contains a partial model of the first virtual object equipped with the first virtual prop, or the first virtual object equipped with the first virtual prop is positioned outside the virtual scene picture.
The virtual scene picture may further include a second virtual object, where the second virtual object is a virtual object in a different camp from the first virtual object.
In one possible implementation, the first virtual object being equipped with the first virtual prop means that the first virtual object holds the first virtual prop.
The first virtual prop may be a close-combat virtual weapon used for melee attacks, such as a dagger, a sword, a stick, or a pan.
In one possible implementation, the target virtual object is one of the second virtual objects whose distance from the first virtual object is less than or equal to the first distance.
In another possible implementation, a circular range with the first distance as its radius and the current position of the first virtual object as its center is taken as the first range. When it is monitored in real time that a second virtual object exists within the first range of the first virtual object, the target virtual object is determined from the second virtual objects within the first range, and the target virtual object is locked.
In one possible implementation, after the target virtual object is locked, a target locking icon is displayed in an overlapping manner at the position of the target virtual object on the interface.
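The locking logic of step 301 can be sketched as follows. This is a minimal illustration only; the threshold value, the 2D coordinate layout, and all names are assumptions, since the patent does not prescribe an implementation:

```python
import math

FIRST_DISTANCE = 10.0  # hypothetical locking radius (the "first distance")

def distance(a, b):
    """Euclidean distance between two (x, y) positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def lock_target(first_pos, second_objects):
    """Return the id of a second virtual object within FIRST_DISTANCE
    of the first virtual object, or None if no object is in range.

    second_objects maps an object id to its (x, y) position.
    """
    for obj_id, pos in second_objects.items():
        if distance(first_pos, pos) <= FIRST_DISTANCE:
            return obj_id  # locked as the target virtual object
    return None
```

In a real game loop this check would run every frame, as the embodiment describes real-time detection of the distance to each second virtual object.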
Step 302, in response to receiving the target operation, controlling the first virtual object to move to the target virtual object; the target operation is an operation using the first virtual prop.
In the embodiment of the application, when the terminal receives the target operation, the terminal controls the first virtual object to automatically move to the locked target virtual object.
In one possible implementation, when the first virtual object is equipped with the first virtual prop, a target control for using the first virtual prop is displayed superimposed on the interface, and a touch operation received through the target control serves as the target operation.
The speed or acceleration at which the terminal controls the first virtual object to move toward the target virtual object is preset, and the terminal may control the first virtual object to move toward the target virtual object at the preset speed or acceleration.
And 303, in response to the first virtual object moving to the position where the distance between the first virtual object and the target virtual object is smaller than or equal to the second distance, controlling the first virtual object to act on the target virtual object by using the first virtual prop.
In the embodiment of the application, when the first virtual object moves to a distance smaller than or equal to the second distance from the target virtual object, the first virtual object is controlled to act on the target virtual object by using the first virtual prop.
In a possible implementation, after the first virtual object is controlled to act on the target virtual object by using the first virtual prop, it is determined whether the target virtual object is hit by the first virtual prop, and whether the first attribute value of the target virtual object needs to be modified is determined according to whether the target virtual object is hit.
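Steps 302–303 together can be illustrated with the following Python sketch, which checks the second distance and deducts the first attribute value (here assumed to be a life value) only when the blow lands. The numeric values and names are hypothetical:

```python
SECOND_DISTANCE = 2.0  # hypothetical melee reach (the "second distance")

def try_attack(dist_to_target, target_hp, damage, hit):
    """Apply the first virtual prop to the target virtual object.

    If the first virtual object is within SECOND_DISTANCE and the
    attack hits, deduct the first attribute value (life value),
    clamped at zero; otherwise leave it unchanged.
    """
    if dist_to_target <= SECOND_DISTANCE and hit:
        return max(0, target_hp - damage)
    return target_hp
```

For example, `try_attack(1.5, 100, 30, True)` models a successful melee strike, while a miss or an out-of-range attempt leaves the target's life value as-is.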
To sum up, according to the scheme shown in this embodiment of the present application, when the first virtual object is equipped with the first virtual prop, a target virtual object within the first distance is automatically locked; when the target operation is received, the first virtual object is controlled to move automatically toward the target virtual object; and when the distance between the target virtual object and the first virtual object is less than or equal to the second distance, the first virtual prop is used on the target virtual object. This reduces the difficulty of operating a virtual object equipped with the first virtual prop, thereby reducing the user operation time needed for the first virtual object using the first virtual prop to attack the target virtual object, shortening the duration of a single virtual scene, and in turn saving the power and data traffic consumed by the terminal.
Fig. 4 is a schematic diagram illustrating a control flow of a virtual object in a virtual scene according to an exemplary embodiment of the present application. The method may be executed by a computer device, where the computer device may be a terminal or a server, or the computer device may include the terminal and the server. As shown in fig. 4, the computer device may control the virtual object by performing the following steps.
Step 401, in response to the first virtual object being equipped with the first virtual item, presenting a virtual scene picture.
Step 402, in response to the distance between the target virtual object and the first virtual object being less than or equal to a first distance, displaying a target locking icon superimposed at the position of the target virtual object; the target locking icon is used to indicate that the target virtual object is locked.
After the target virtual object is determined, the target locking icon is displayed superimposed at the current position of the target virtual object. The target locking icon is used to indicate the position of the target virtual object displayed on the terminal interface.
In a possible implementation, the terminal switches the first control to the target control; the first control is a control for controlling squatting, and the target control is a control for controlling rolling.
Step 403, in response to receiving the target operation, controlling the first virtual object to move to the target virtual object; the target operation is an operation using the first virtual prop.
And step 404, in response to the first virtual object moving to the position where the distance between the first virtual object and the target virtual object is smaller than or equal to the second distance, controlling the first virtual object to act on the target virtual object by using the first virtual prop.
To sum up, according to the scheme shown in this embodiment of the present application, when the first virtual object is equipped with the first virtual prop, a target virtual object within the first distance is automatically locked; when the target operation is received, the first virtual object is controlled to move automatically toward the target virtual object; and when the distance between the target virtual object and the first virtual object is less than or equal to the second distance, the first virtual prop is used on the target virtual object. This reduces the difficulty of operating a virtual object equipped with the first virtual prop, thereby reducing the user operation time needed for the first virtual object using the first virtual prop to attack the target virtual object, shortening the duration of a single virtual scene, and in turn saving the power and data traffic consumed by the terminal.
Fig. 5 is a schematic diagram illustrating a control flow of a virtual object in a virtual scene according to an exemplary embodiment of the present application. The method may be executed by a computer device, where the computer device may be a terminal or a server, or the computer device may include the terminal and the server. As shown in fig. 5, the computer device may control the virtual object by performing the following steps.
Step 501, in response to that the first virtual object is not equipped with the first virtual item, displaying a virtual scene picture at a first person perspective.
Step 502, in response to receiving the first operation, displaying the virtual scene picture at a third-person perspective; the first operation is an operation of controlling the first virtual object to equip the first virtual prop.
Step 503, in response to receiving the target operation and when the distance between the first virtual object and the target virtual object is less than or equal to the second distance, controlling the first virtual object to act on the target virtual object by using the first virtual prop; the target operation is an operation using the first virtual prop.
To sum up, according to the scheme shown in this embodiment of the present application, when the first virtual object is equipped with the first virtual prop, a target virtual object within the first distance is automatically locked; when the target operation is received, the first virtual object is controlled to move automatically toward the target virtual object; and when the distance between the target virtual object and the first virtual object is less than or equal to the second distance, the first virtual prop is used on the target virtual object. This reduces the difficulty of operating a virtual object equipped with the first virtual prop, thereby reducing the user operation time needed for the first virtual object using the first virtual prop to attack the target virtual object, shortening the duration of a single virtual scene, and in turn saving the power and data traffic consumed by the terminal.
Fig. 6 is a flowchart illustrating a method for controlling a flow of a virtual object in a virtual scene according to an exemplary embodiment of the present application. The method may be executed by a computer device, where the computer device may be a terminal or a server, or the computer device may include the terminal and the server. As shown in fig. 6, taking the computer device as a terminal as an example, the terminal may control a virtual object in a virtual scene by performing the following steps.
Step 601, displaying a virtual scene picture.
In this embodiment of the application, the terminal displays a virtual scene picture, where the virtual scene picture contains the first virtual object equipped with the first virtual prop, or contains a partial model of the first virtual object, or the first virtual object equipped with the first virtual prop is outside the virtual scene picture.
The first virtual prop may be a close-combat virtual weapon used for melee attacks, such as a dagger, a sword, a stick, or a pan.
In one possible implementation, in response to the first virtual object not being equipped with the first virtual prop, or in response to the first virtual object being equipped with the first virtual prop while the first virtual prop is in a throwable state, the virtual scene picture is presented at a first-person perspective; in response to the first virtual object being equipped with the first virtual prop while the first virtual prop is in a non-throwable state, the virtual scene picture is displayed at a third-person perspective.
The first virtual prop has a throwable state and a non-throwable state. The current state of the first virtual prop can be switched by a touch operation received on a state switching control.
For example, fig. 7 is a schematic view of an interface for switching the viewing angle according to an exemplary embodiment of the present application. As shown in fig. 7, when the first virtual object holds a virtual gun, the terminal displays the virtual scene screen 71 at a first-person perspective; when the first virtual object uses a specified skill or directly switches the held weapon, so that the virtual gun for ranged combat is switched to a virtual weapon for close combat, which is the first virtual prop, the terminal displays the virtual scene screen 72 at a third-person perspective.
When the first virtual object switches from the virtual weapon for ranged combat to the virtual weapon for close combat, the virtual scene picture displayed by the terminal switches from the virtual scene picture at the first-person perspective to the virtual scene picture at the third-person perspective.
Step 602, in response to the first virtual object being equipped with the first virtual item, a set of candidate objects is obtained.
In the embodiment of the application, the terminal acquires a candidate object set containing a target virtual object. The candidate object set is a set of second virtual objects having a distance from the position of the first virtual object at the current time that is less than or equal to the first distance.
In a possible implementation manner, the first virtual object takes the first distance as a radius, and a circular range taking a current position of the first virtual object as a center of a circle is taken as the first range.
In another possible implementation, the first range is the range within a specified angle in front of the first virtual object whose distance from the first virtual object is less than the first distance.
Alternatively, the first range may be the range, displayed in the virtual scene screen, within which the distance to the first virtual object is less than or equal to the first distance.
In a possible implementation, as the first virtual object moves, the second virtual objects in the virtual scene picture displayed on the terminal interface change in real time with the movement of the first virtual object, and each second virtual object whose distance from the first virtual object is less than the first distance and that lies within a specified angle in front of the first virtual object's current position is acquired as the candidate object set. The currently acquired candidate set may be cached and detected and updated in real time.
In another possible implementation, each second virtual object contained in the virtual scene picture displayed by the terminal is first acquired, then the distance between each second virtual object and the first virtual object is acquired, and when the acquired distance between a second virtual object and the first virtual object is less than the first distance, that second virtual object is added to the candidate object set.
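The candidate-set construction described above, combining the distance threshold with a forward angle as in the first implementation, might look like the following Python sketch. All names, the degree-based angle convention, and the 2D layout are illustrative assumptions:

```python
import math

def candidate_set(first_pos, facing_deg, seconds, first_distance, half_angle_deg):
    """Collect second virtual objects within first_distance of the first
    virtual object and inside the specified forward angle.

    seconds maps object ids to (x, y) positions; facing_deg is the
    first object's facing direction in degrees.
    """
    result = set()
    for obj_id, (x, y) in seconds.items():
        dx, dy = x - first_pos[0], y - first_pos[1]
        if math.hypot(dx, dy) > first_distance:
            continue  # outside the first range
        bearing = math.degrees(math.atan2(dy, dx))
        # smallest signed angle between facing direction and bearing
        diff = (bearing - facing_deg + 180) % 360 - 180
        if abs(diff) <= half_angle_deg:
            result.add(obj_id)
    return result
```

Recomputing this set every frame matches the real-time update behavior described above; caching it between frames is the optimization the embodiment mentions.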
For example, fig. 8 shows a schematic diagram of candidate set acquisition provided in an exemplary embodiment of the present application. As shown in fig. 8, the first virtual object 85 holding the first virtual item travels through the virtual scene screen, and the virtual scene screen displayed at the terminal at the present time includes the second virtual object 82, the second virtual object 83, and the second virtual object 84. Based on the first range 81 corresponding to the position where the first virtual object 85 is currently located, it may be determined that the second virtual object 82 is located within the first range 81, the second virtual object 83 is shown in the virtual scene screen but is located outside the first range 81, and the second virtual object 84 is also located within the first range 81. It can be obtained that the candidate set corresponding to the first virtual object 85 at the current time includes the second virtual object 82 and the second virtual object 84.
Step 603, determining a target virtual object from the candidate object set based on the target attribute information of the virtual object in the candidate object set.
In the embodiment of the application, the target attribute information of each virtual object in the candidate object set is obtained based on the candidate object set obtained at the current moment, and one virtual object is determined as the target virtual object from the candidate object set according to the target attribute information of each virtual object.
In one possible implementation, in response to the distance between the target virtual object and the first virtual object being less than or equal to the first distance, the target virtual object is locked based on the target attribute information. The way the target virtual object is determined within the candidate object set differs according to the kind of target attribute information, as follows.
1) In response to the target attribute information containing distance information, the second virtual object closest to the first virtual object in the candidate object set is taken as the target virtual object.
The distance information is used for indicating the distance between the corresponding virtual object and the first virtual object.
In a possible implementation, while the candidate object set is acquired, the distance between each virtual object in the candidate object set and the first virtual object is acquired, the virtual objects in the candidate object set are sorted in ascending order of distance, and the ordered candidate object set is updated in real time. The virtual object ranked first is determined as the target virtual object.
For example, as shown in fig. 8, the acquired candidate set includes the second virtual object 82 and the second virtual object 84. The position coordinates of the second virtual object 82, the second virtual object 84, and the first virtual object at the current time are acquired, the distances from the second virtual object 82 and the second virtual object 84 to the first virtual object 85 are calculated from these coordinates, the second virtual object 84 with the smaller distance is determined as the target virtual object, and the target lock icon is displayed superimposed at the position of the second virtual object 84.
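The nearest-distance selection rule can be sketched as follows (hypothetical names; `math.dist` requires Python 3.8+):

```python
import math

def nearest_target(first_pos, candidates):
    """Pick the candidate closest to the first virtual object.

    candidates maps object ids to (x, y) positions;
    returns None when the candidate set is empty.
    """
    if not candidates:
        return None
    return min(
        candidates,
        key=lambda obj_id: math.dist(first_pos, candidates[obj_id]),
    )
```

Applied to the fig. 8 example, the object with the smaller distance to the first virtual object would be returned and locked.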
2) In response to the target attribute information containing a first attribute value, the second virtual object with the largest or smallest first attribute value in the candidate object set is taken as the target virtual object.
In one possible implementation, the first attribute value is used to indicate a health condition of the virtual object or a combat power condition of the virtual object.
For example, the first attribute value may be a value indicating the health of the virtual object, such as a life value or an energy value; the first attribute value may also be a value indicating the combat power of the virtual object, such as the number of virtual objects eliminated, a weapon attack power value, or an armor defense value.
Taking the first attribute value as the life value as an example: when the acquired candidate object set includes virtual object A, virtual object B, and virtual object C, the life values of virtual objects A, B, and C at the current time are automatically detected, and when the second virtual object with the smallest first attribute value is taken as the target virtual object, the virtual object with the smallest life value among A, B, and C is taken as the target virtual object. In this case, the attack process of the first virtual object can be completed quickly, reducing the duration of the single virtual scene. Taking the first attribute value as the armor defense value as an example: when the acquired candidate set includes virtual object A, virtual object B, and virtual object C, the armor defense values of A, B, and C at the current time are automatically detected, and when the second virtual object with the largest first attribute value is taken as the target virtual object, the virtual object with the largest armor defense value among A, B, and C is selected as the target virtual object. In this case, the attack on the virtual object that is hardest to eliminate can be completed quickly, shortening the duration of the single virtual scene.
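The attribute-based selection above can be sketched as follows. The names are illustrative, and whether the maximum or minimum is preferred is a configuration choice, as the two examples describe:

```python
def pick_by_attribute(candidates, attribute, prefer_max):
    """Choose the target by a first attribute value, e.g. the lowest
    life value (for quick elimination) or the highest armor value.

    candidates maps object ids to dicts of attribute values.
    """
    if not candidates:
        return None
    chooser = max if prefer_max else min
    return chooser(candidates, key=lambda obj_id: candidates[obj_id][attribute])
```

For instance, `pick_by_attribute(objs, "hp", prefer_max=False)` mirrors the life-value example, while `pick_by_attribute(objs, "armor", prefer_max=True)` mirrors the armor-defense example.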
Step 604, in response to receiving the target operation, controlling the first virtual object to move to the target virtual object.
In the embodiment of the application, when the terminal receives the target operation, the terminal controls the first virtual object to move to the target virtual object determined at the current moment. The target operation is an operation using the first virtual prop.
The operation using the first virtual prop may be an operation performed by a user through touch operation on a designated control.
In one possible implementation, the first virtual object is controlled to move towards the target virtual object at a first speed in response to receiving the target operation, or the first virtual object is controlled to move towards the target virtual object at a first acceleration in response to receiving the target operation.
The first speed is greater than the moving speed of the first virtual object when the target operation is not received.
The first virtual object is controlled to move towards the target virtual object through the first speed or the first acceleration, so that the advantage of the first virtual object in the aspect of moving speed can be improved, the weakness of the first virtual prop in an attack range is made up to a certain extent, and the antagonism between the first virtual object using the first virtual prop and the second virtual object using other virtual props is improved.
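The per-frame movement toward the locked target at the first speed could be sketched as a simple kinematic step; the names and the time-step convention are assumptions:

```python
import math

def step_toward(pos, target, first_speed, dt):
    """Advance pos toward target by first_speed * dt, without overshooting.

    pos and target are (x, y) tuples; dt is the frame time in seconds.
    """
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    step = first_speed * dt
    if dist <= step or dist == 0:
        return target  # arrived (within one step of the target)
    return (pos[0] + dx / dist * step, pos[1] + dy / dist * step)
```

Calling this each frame until the remaining distance falls to the second distance models the sprint phase; an acceleration-based variant would grow `first_speed` over time instead of keeping it constant.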
During the process in which the first virtual object moves toward the target virtual object, the target virtual object and other second virtual objects can shoot at the first virtual object remotely, and the target virtual object can move at the moving speed used when the target operation is not received.
For example, during the first virtual object's sprint, the target virtual object or another second virtual object may eliminate the first virtual object with a ranged weapon, which helps balance the abilities of melee and ranged weapons.
For example, fig. 9 is a schematic diagram illustrating a first virtual object moving process according to an exemplary embodiment of the present application. As shown in fig. 9, in the first virtual scene 91, the first virtual object 911 holds the first virtual prop, a virtual sword, and the target virtual object 912 is locked. When the target operation, that is, a touch operation on the attack control 913, is received, the second virtual scene 92 is displayed and the terminal controls the first virtual object 921 to move toward the target virtual object 922 at the first speed. When the first virtual object 921 moves into the virtual sword's attack range, the third virtual scene 93 is displayed and the first virtual object 931 is controlled to execute the first action, that is, a hacking action, on the target virtual object 932.
Illustratively, when a target virtual object is locked and the user performs a touch operation on the attack control, the first virtual object operated by the user automatically sprints toward the target virtual object and starts playing the attack action, that is, the first action, during the sprint. When the sprint brings it in front of the target virtual object, the specified virtual prop strikes the target virtual object exactly, so that elimination of the target virtual object can be achieved. The sprint also behaves in accordance with physical laws.
In one possible implementation, in response to receiving the target operation, the first virtual object is controlled to perform a first action in moving to the target virtual object.
Wherein the first action is used for triggering deduction of a first attribute value of the third virtual object; the third virtual object is a virtual object outside the target virtual object and within the scope of the first action.
For example, when the first virtual prop held by the first virtual object is a virtual sword, the first action corresponding to the virtual sword is swinging and chopping at virtual objects within its action range. While the first virtual object moves toward the target virtual object, the surroundings of the movement path may be continuously detected; when a third virtual object is detected within the action range of the first virtual object's current position, the first virtual object is controlled to swing and chop at the third virtual object, reducing the first attribute value of the third virtual object, for example, reducing its life value.
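The path sweep described above can be sketched as a per-tick range check. This is a hedged illustration, assuming circular action ranges; `ACTION_RANGE` and `CHOP_DAMAGE` are invented values, not the patent's parameters.

```python
import math

ACTION_RANGE = 2.0   # radius of the first action around the current position
CHOP_DAMAGE = 30     # first-attribute deduction applied by one chop

def sweep_attack(position, others, hp):
    """Deduct life value from every third virtual object within action range."""
    for name, (x, y) in others.items():
        if math.hypot(x - position[0], y - position[1]) <= ACTION_RANGE:
            hp[name] -= CHOP_DAMAGE
    return hp

hp = {"D": 100, "E": 100}
others = {"D": (1.0, 1.0), "E": (8.0, 0.0)}   # D is near the path, E is far
hp = sweep_attack((0.0, 0.0), others, hp)
print(hp)  # D is chopped, E is untouched
```

Calling `sweep_attack` once per movement tick reproduces the continuous detection along the sprint path.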
In one possible implementation manner, in response to receiving a touch operation on the target control while the first virtual object is equipped with the first virtual prop and is moving, the first virtual object is controlled to execute a rolling action.
Wherein the target control may be a control for controlling rolling.
In one possible implementation manner, when the first virtual object has not received the target operation, the virtual control initially displayed at the position where the target control is superimposed on the virtual scene picture is a control for controlling squatting.
For example, when the terminal receives a touch operation on the control for controlling rolling while the first virtual object is automatically moving toward the target virtual object, the terminal controls the first virtual object to perform a rolling action in the current moving direction. By rolling during movement, remote shots from second virtual objects can be dodged, reducing the chance of being eliminated while moving and improving the success rate of reaching the target virtual object.
In a possible implementation manner, in response to receiving a touch operation on the target control while the first virtual object is equipped with the first virtual prop and is moving, the operation direction of the touch operation performed on a direction control is acquired, where the direction control is used for controlling the moving direction of the first virtual object; the roll type of the rolling action is then determined based on the operation direction; finally, the first virtual object is controlled to execute the rolling action based on the roll type.
When the virtual object uses a virtual prop other than the first virtual prop and a touch operation on the rolling control is received while running, the virtual object can be controlled to execute a sliding-tackle action. When the virtual object uses the first virtual prop and a touch operation on the squat control is received while running, the first virtual object can be controlled to execute a forward rolling action.
In one possible implementation, the type of roll includes at least one of a front roll, a rear roll, a left roll, a right roll, a left front roll, a right front roll, a left rear roll, and a right rear roll.
Wherein the switching between the actions is implemented using a blend-tree animation state machine.
For example, fig. 10 is a schematic diagram illustrating the correspondence between an operation direction and a roll type according to an exemplary embodiment of the present application. As shown in fig. 10, in response to receiving a touch operation performed on the direction control 1001 while the first virtual object is equipped with the first virtual item and moves, the operation direction of the touch operation performed on the direction control 1001 is acquired; the roll type of the rolling action is then determined based on the operation direction; finally, the first virtual object is controlled to execute the rolling action based on the roll type. Specifically, the first virtual object is controlled to perform a front roll when the operation direction is an upward slide, a left-front roll when the operation direction is an upper-left slide, a right-front roll when the operation direction is an upper-right slide, a left roll when the operation direction is a leftward slide, a right roll when the operation direction is a rightward slide, a back roll when the operation direction is a downward slide, a left-back roll when the operation direction is a lower-left slide, and a right-back roll when the operation direction is a lower-right slide.
For example, FIG. 11 illustrates a rolling action provided by an exemplary embodiment of the present application. As shown in fig. 11, while the first virtual object 1110 is equipped with the first virtual item and moves, a touch operation performed on the direction control 1140 is received and its operation direction is acquired as the left direction; when a touch operation performed on the roll control 1130 is then received, the first virtual object 1110 is controlled to execute a left roll action. While the first virtual object 1120 is equipped with the first virtual prop and moves, a touch operation performed on the direction control 1140 is received and its operation direction is acquired as the lower-right direction; when a touch operation performed on the roll control 1130 is then received, the first virtual object 1120 is controlled to execute a right-back roll action.
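The correspondence between operation direction and roll type in fig. 10 can be sketched by bucketing the slide direction into eight octants. The 45° octant boundaries are an assumption for illustration; the patent only lists the eight roll types.

```python
import math

# Eight roll types, ordered counter-clockwise starting from a rightward slide.
ROLLS = ["right", "right-front", "front", "left-front",
         "left", "left-back", "back", "right-back"]

def roll_type(dx, dy):
    """Map a slide direction (dx = right, dy = up) to one of eight roll types."""
    angle = math.degrees(math.atan2(dy, dx)) % 360
    return ROLLS[int((angle + 22.5) // 45) % 8]

print(roll_type(0, 1))    # upward slide     -> front roll
print(roll_type(-1, -1))  # lower-left slide -> left-back roll
```

In an engine, the returned type would then select the matching clip in the blend-tree animation state machine mentioned above.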
Step 605, in response to the first virtual object moving to a distance smaller than or equal to the second distance from the target virtual object, controlling the first virtual object to act on the target virtual object by using the first virtual prop.
In the embodiment of the application, when the first virtual object moves to a distance smaller than or equal to the second distance from the target virtual object, the terminal controls the first virtual object to act on the target virtual object by using the first virtual prop.
In a possible implementation manner, the first attribute value of the target virtual object that receives the attack of the first virtual prop is modified to 0, and the target virtual object is determined to be in an eliminated state.
In one possible implementation manner, in response to the distance between the target virtual object and the first virtual object being smaller than or equal to the second distance and no occlusion existing between the first virtual object and the target virtual object, the first virtual object is controlled to use the first virtual prop on the target virtual object.
In another possible implementation, in response to the target virtual object being within the second distance range of the first virtual object and no obstruction existing between the first virtual object and the target virtual object, the first virtual object is controlled to use the first virtual prop on the target virtual object.
For example, if during the sprint the target virtual object moves and ends up behind a bunker, the first action of the first virtual object will be blocked by the bunker, and the target virtual object cannot be eliminated.
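The attack condition of step 605, distance within the second distance and no occlusion on the straight line to the target, can be sketched as below. Modeling bunkers as circles and the specific helper names are assumptions made for this illustration.

```python
import math

def segment_hits_circle(p, q, center, radius):
    """True if the segment p->q passes through the circle (an occluding bunker)."""
    px, py = p; qx, qy = q; cx, cy = center
    dx, dy = qx - px, qy - py
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0:
        return math.hypot(px - cx, py - cy) <= radius
    # closest point on the segment to the circle center
    t = max(0.0, min(1.0, ((cx - px) * dx + (cy - py) * dy) / seg_len2))
    return math.hypot(px + t * dx - cx, py + t * dy - cy) <= radius

def can_use_prop(attacker, target, bunkers, second_distance):
    """Range check plus line-of-sight check against all bunkers."""
    if math.dist(attacker, target) > second_distance:
        return False
    return not any(segment_hits_circle(attacker, target, c, r)
                   for c, r in bunkers)

bunkers = [((5.0, 0.0), 1.0)]                                # bunker on the line
print(can_use_prop((0.0, 0.0), (10.0, 0.0), bunkers, 12.0))  # blocked
print(can_use_prop((0.0, 0.0), (10.0, 5.0), bunkers, 12.0))  # clear
```

An engine would typically replace the circle test with a physics raycast, but the structure of the check is the same.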
In another possible implementation, the first virtual prop is converted from a melee virtual weapon into a ranged virtual weapon through a specified operation.
The first virtual prop can be a melee virtual weapon in a throwable state, and throwing the first virtual prop expands its attack range. Throwing the first virtual prop conforms to the logic of throwing an object: the throwing animation of the first virtual object model drives the model's skeleton, and the skeleton in turn controls the behavior of the specified virtual prop during the throw.
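As a rough illustration of this object-throwing logic, a simple ballistic sketch shows how a throw extends the prop's reach well beyond melee range. The speed, angle, gravity and reach values are assumptions for illustration only.

```python
import math

G = 9.8  # gravitational acceleration, m/s^2

def throw_range(speed, angle_deg):
    """Horizontal distance covered by a prop thrown from ground level."""
    a = math.radians(angle_deg)
    return speed * speed * math.sin(2 * a) / G

melee_reach = 2.0  # assumed attack range of the sword when held
print(round(throw_range(14.0, 45.0), 2))  # 20.0 m, far beyond melee reach
```

The throw direction and distance would be driven by the sliding operation on the throwing control described below, with the animation-driven skeleton releasing the prop at the launch point.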
Illustratively, fig. 12 shows a schematic diagram of throwing the first virtual prop provided by an exemplary embodiment of the present application. As shown in fig. 12, a virtual scene screen 1210 displayed by the terminal includes a first virtual object 1211 equipped with the first virtual prop and a second virtual object 1212; the virtual scene screen 1210 is displayed from a third-person perspective, and the terminal can change the first virtual prop into a throwable state upon receiving a touch operation on the throwing control 1213. After the terminal receives the touch operation on the throwing control 1213, the terminal displays the virtual scene screen 1220, which is displayed from a first-person perspective, and the direction and distance of throwing the first virtual prop can be controlled by receiving a sliding operation on the throwing control 1221.
To sum up, in the scheme shown in the embodiments of the present application, when the first virtual object is equipped with the first virtual prop, a target virtual object within the first distance is automatically locked; when the target operation is received, the first virtual object is controlled to move automatically toward the target virtual object; and when the distance between the target virtual object and the first virtual object is less than or equal to the second distance, the first virtual prop is used on the target virtual object. This reduces the operation difficulty of a virtual object equipped with the first virtual prop, reduces the user operation time needed for the first virtual object using the first virtual prop to attack the target virtual object, shortens the duration of a single virtual scene, and thereby saves the power and data traffic consumed by the terminal.
Fig. 13 is a block diagram illustrating a virtual object control apparatus in a virtual scene according to an exemplary embodiment of the present application, where the apparatus may be disposed in the first terminal 110 or the second terminal 130 in the implementation environment shown in fig. 1 or another terminal in the implementation environment, and the apparatus includes:
a target locking module 1310 for locking a target virtual object in response to the first virtual object being equipped with a first virtual prop and a distance between the target virtual object and the first virtual object being less than or equal to a first distance;
an object moving module 1320, configured to control the first virtual object to move to the target virtual object in response to receiving a target operation; the target operation is an operation using the first virtual prop;
a prop using module 1330, configured to control the first virtual object to act on the target virtual object using the first virtual prop in response to the first virtual object moving to a distance from the target virtual object that is less than or equal to a second distance.
In one possible implementation, the targeting module 1310 includes:
the set acquisition submodule is used for acquiring a set of alternative objects; the candidate object set is a set formed by second virtual objects, wherein the distance between the candidate object set and the position of the first virtual object at the current moment is smaller than or equal to the first distance;
and the target determining submodule is used for determining the target virtual object from the candidate object set based on the target attribute information of the virtual object in the candidate object set.
In one possible implementation, in response to the target attribute information containing distance information, the distance information is used to indicate a distance between the corresponding virtual object and the first virtual object;
the target determination submodule includes:
a first target determining unit, configured to use the second virtual object closest to the first virtual object in the candidate object set as the target virtual object.
In one possible implementation, in response to the target attribute information including a first attribute value;
the target determination submodule includes:
a second target determining unit, configured to use the second virtual object with the largest or smallest corresponding first attribute value in the candidate object set as the target virtual object.
In one possible implementation, the apparatus further includes:
a first picture display module, configured to display a virtual scene picture from a first-person perspective in response to the first virtual object not being equipped with the first virtual item, or in response to the first virtual object being equipped with the first virtual item and the first virtual item being in a throwable state;
and a second picture display module, configured to display a virtual scene picture from a third-person perspective in response to the first virtual object being equipped with the first virtual prop and the first virtual prop being in a non-throwable state.
In one possible implementation, the prop using module 1330 includes:
and the prop using sub-module is used for controlling the first virtual object to act on the target virtual object by using the first virtual prop in response to the fact that the distance between the target virtual object and the first virtual object is smaller than or equal to the second distance and no occlusion exists between the first virtual object and the target virtual object.
In one possible implementation, the object moving module 1320 includes:
an object moving sub-module, configured to, in response to receiving the target operation, control the first virtual object to move towards the target virtual object at a first speed; the first speed is greater than the moving speed of the first virtual object when the target operation is not received.
In one possible implementation, the object moving module 1320 includes:
and the action execution sub-module is used for controlling the first virtual object to execute a first action in the process of moving to the target virtual object in response to the target operation.
In a possible implementation manner, the first action is used to trigger deduction of a first attribute value of a third virtual object; the third virtual object is a virtual object outside the target virtual object and within the scope of the first action.
In one possible implementation manner, in response to the first virtual object being in the process of moving, the apparatus further includes:
a rolling action execution module, configured to control the first virtual object to execute a rolling action in response to receiving a touch operation on a target control while the first virtual object is equipped with the first virtual prop and moves; the target control is a control for controlling rolling.
In one possible implementation, the tumbling action performing module includes:
the direction obtaining sub-module is used for responding to the fact that in the process that the first virtual object is provided with the first virtual prop and moves, receiving touch operation conducted on a target control, and obtaining the operation direction of the touch operation conducted on a direction control, wherein the direction control is used for controlling the moving direction of the first virtual object;
the type determining submodule is used for determining the rolling type of the rolling action based on the operation direction;
and the action control sub-module is used for controlling the first virtual object to execute the rolling action based on the rolling type.
To sum up, in the scheme shown in the embodiments of the present application, when the first virtual object is equipped with the first virtual prop, a target virtual object within the first distance is automatically locked; when the target operation is received, the first virtual object is controlled to move automatically toward the target virtual object; and when the distance between the target virtual object and the first virtual object is less than or equal to the second distance, the first virtual prop is used on the target virtual object. This reduces the operation difficulty of a virtual object equipped with the first virtual prop, reduces the user operation time needed for the first virtual object using the first virtual prop to attack the target virtual object, shortens the duration of a single virtual scene, and thereby saves the power and data traffic consumed by the terminal.
FIG. 14 is a schematic diagram illustrating a configuration of a computer device, according to an example embodiment. The computer apparatus 1400 includes a Central Processing Unit (CPU) 1401, a system Memory 1404 including a Random Access Memory (RAM) 1402 and a Read-Only Memory (ROM) 1403, and a system bus 1405 connecting the system Memory 1404 and the Central Processing Unit 1401. The computer device 1400 also includes a basic Input/Output system (I/O system) 1406 that facilitates transfer of information between devices within the computer device, and a mass storage device 1407 for storing an operating system 1413, application programs 1414, and other program modules 1415.
The basic input/output system 1406 includes a display 1408 for displaying information and an input device 1409, such as a mouse, keyboard, etc., for user input of information. Wherein the display 1408 and input device 1409 are both connected to the central processing unit 1401 via an input-output controller 1410 connected to the system bus 1405. The basic input/output system 1406 may also include an input/output controller 1410 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, input-output controller 1410 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 1407 is connected to the central processing unit 1401 through a mass storage controller (not shown) connected to the system bus 1405. The mass storage device 1407 and its associated computer device-readable media provide non-volatile storage for the computer device 1400. That is, the mass storage device 1407 may include a computer device readable medium (not shown) such as a hard disk or Compact disk-Only Memory (CD-ROM) drive.
Without loss of generality, the computer device readable media may comprise computer device storage media and communication media. Computer device storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer device readable instructions, data structures, program modules or other data. Computer device storage media includes RAM, ROM, Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), CD-ROM, Digital Versatile Disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that the computer device storage media is not limited to the foregoing. The system memory 1404 and mass storage device 1407 described above may collectively be referred to as memory.
According to various embodiments of the present disclosure, the computer device 1400 may also operate by being connected, through a network such as the Internet, to a remote computer device on the network. That is, the computer device 1400 may be connected to the network 1412 through the network interface unit 1411 coupled to the system bus 1405, or may be connected to other types of networks or remote computer device systems (not shown) using the network interface unit 1411.
The memory further includes one or more programs, which are stored in the memory, and the central processing unit 1401 implements all or part of the steps of the method illustrated in fig. 3, 4, 5, or 6 by executing the one or more programs.
FIG. 15 is a block diagram illustrating the structure of a computer device 1500 according to an example embodiment. The computer device 1500 may be a user terminal, such as a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a laptop computer, or a desktop computer. Computer device 1500 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and the like.
Generally, computer device 1500 includes: a processor 1501 and memory 1502.
Processor 1501 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 1501 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). Processor 1501 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also referred to as a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1501 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, processor 1501 may also include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
The memory 1502 may include one or more computer-readable storage media, which may be non-transitory. The memory 1502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1502 is used to store at least one instruction for execution by processor 1501 to implement all or a portion of the steps in the methods provided by the method embodiments herein.
In some embodiments, computer device 1500 may also optionally include: a peripheral interface 1503 and at least one peripheral. The processor 1501, memory 1502, and peripheral interface 1503 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1503 via buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of a radio frequency circuit 1504, a display 1505, a camera assembly 1506, an audio circuit 1507, a positioning assembly 1508, and a power supply 1509.
The peripheral interface 1503 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 1501 and the memory 1502. In some embodiments, the processor 1501, memory 1502, and peripheral interface 1503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1501, the memory 1502, and the peripheral interface 1503 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The Radio Frequency circuit 1504 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuitry 1504 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1504 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1504 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1504 can communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1504 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1505 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1505 is a touch display screen, the display screen 1505 also has the ability to capture touch signals on or over the surface of the display screen 1505. The touch signal may be input to the processor 1501 as a control signal for processing. In this case, the display screen 1505 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 1505 may be one, providing a front panel of the computer device 1500; in other embodiments, the display screens 1505 may be at least two, each disposed on a different surface of the computer device 1500 or in a folded design; in still other embodiments, the display 1505 may be a flexible display disposed on a curved surface or a folded surface of the computer device 1500. Even further, the display 1505 may be configured in a non-rectangular irregular pattern, i.e., a shaped screen. The Display 1505 can be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and other materials.
The camera assembly 1506 is used to capture images or video. Optionally, the camera assembly 1506 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1506 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1507 may include a microphone and speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1501 for processing or inputting the electric signals to the radio frequency circuit 1504 to realize voice communication. For stereo capture or noise reduction purposes, the microphones may be multiple and located at different locations on the computing device 1500. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1501 or the radio frequency circuit 1504 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 1507 may also include a headphone jack.
A Location component 1508 is used to locate the current geographic Location of the computer device 1500 for navigation or LBS (Location Based Service). The Positioning component 1508 may be a Positioning component based on the Global Positioning System (GPS) in the united states, the beidou System in china, the Global Navigation Satellite System (GLONASS) in russia, or the galileo System in europe.
The power supply 1509 is used to supply power to the various components in the computer device 1500. The power supply 1509 may be alternating current, direct current, disposable or rechargeable. When the power supply 1509 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the computer device 1500 also includes one or more sensors 1510. The one or more sensors 1510 include, but are not limited to: acceleration sensor 1511, gyro sensor 1512, pressure sensor 1513, fingerprint sensor 1514, optical sensor 1515, and proximity sensor 1516.
The acceleration sensor 1511 can detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the computer apparatus 1500. For example, the acceleration sensor 1511 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1501 may control the touch screen to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1511. The acceleration sensor 1511 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1512 may detect the body direction and rotation angle of the computer device 1500, and may cooperate with the acceleration sensor 1511 to capture the user's 3D motion of the computer device 1500. Based on the data collected by the gyro sensor 1512, the processor 1501 may implement functions such as motion sensing (for example, changing the UI in response to a tilting operation by the user), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 1513 may be disposed on the side bezel of the computer device 1500 and/or beneath the touch display screen. When the pressure sensor 1513 is disposed on the side bezel, it can detect the user's grip on the computer device 1500, and the processor 1501 can perform left/right-hand recognition or trigger shortcut operations according to the grip signal collected by the pressure sensor 1513. When the pressure sensor 1513 is disposed beneath the touch display screen, the processor 1501 controls the operable controls on the UI according to the pressure of the user's touch on the screen. The operable controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
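A minimal sketch of the left/right-hand recognition mentioned above (the function name, the 1.5x ratio threshold, and the palm-side heuristic are illustrative assumptions, not values from the disclosure):

```python
def infer_holding_hand(left_edge_pressure, right_edge_pressure):
    """Guess which hand grips the device from side-bezel pressure.

    A right-handed grip typically presses the palm against the left
    edge and the fingertips against the right edge, so sustained
    pressure that is clearly higher on one edge suggests the palm
    side. The 1.5x ratio is an arbitrary illustrative threshold."""
    if left_edge_pressure > right_edge_pressure * 1.5:
        return "right"  # palm on the left edge -> right-hand grip
    if right_edge_pressure > left_edge_pressure * 1.5:
        return "left"   # palm on the right edge -> left-hand grip
    return "unknown"
```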
The fingerprint sensor 1514 collects the user's fingerprint, and either the processor 1501 identifies the user from the fingerprint collected by the fingerprint sensor 1514, or the fingerprint sensor 1514 itself identifies the user from the collected fingerprint. Upon recognizing the user's identity as trusted, the processor 1501 authorizes the user to perform sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings. The fingerprint sensor 1514 may be disposed on the front, back, or side of the computer device 1500. When a physical key or vendor logo is provided on the computer device 1500, the fingerprint sensor 1514 may be integrated with the physical key or vendor logo.
The optical sensor 1515 collects the ambient light intensity. In one embodiment, the processor 1501 may control the display brightness of the touch display screen based on the ambient light intensity collected by the optical sensor 1515: when the ambient light intensity is high, the display brightness is increased; when it is low, the display brightness is reduced. In another embodiment, the processor 1501 may also dynamically adjust the shooting parameters of the camera assembly 1506 based on the ambient light intensity collected by the optical sensor 1515.
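One way to realize the brightness adjustment described above is a monotonic mapping from ambient illuminance to screen brightness. The sketch below is illustrative only: the log-scale curve, the 1–10,000 lux clamp, and the 0.1–1.0 brightness range are assumptions, not values from the disclosure.

```python
import math

def display_brightness(ambient_lux, lo=0.1, hi=1.0):
    """Map ambient light (lux) to a display brightness in [lo, hi].

    Human brightness perception is roughly logarithmic, so the lux
    reading is compressed with log10 before scaling: 1 lux maps to
    lo, 10,000 lux maps to hi, and values in between increase
    monotonically."""
    lux = max(1.0, min(float(ambient_lux), 10_000.0))
    frac = math.log10(lux) / 4.0  # 1 lux -> 0.0, 10,000 lux -> 1.0
    return lo + frac * (hi - lo)
```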
The proximity sensor 1516, also known as a distance sensor, is typically disposed on the front panel of the computer device 1500. The proximity sensor 1516 captures the distance between the user and the front of the computer device 1500. In one embodiment, when the proximity sensor 1516 detects that the distance between the user and the front of the computer device 1500 is gradually decreasing, the processor 1501 controls the touch display screen to switch from the screen-on state to the screen-off state; when the proximity sensor 1516 detects that the distance is gradually increasing, the processor 1501 controls the touch display screen to switch from the screen-off state to the screen-on state.
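The proximity-driven screen switching above amounts to a small state machine. A hedged sketch (the function name and the two-state model are illustrative; a real driver would also debounce readings and use absolute thresholds):

```python
def next_screen_state(state, prev_distance, distance):
    """Proximity-driven screen switching.

    Turn the screen off while the measured distance is shrinking
    (e.g. the device is being raised to the ear) and back on when the
    distance grows again. `state` is "on" or "off"; distances are in
    arbitrary units, only their trend matters here."""
    if state == "on" and distance < prev_distance:
        return "off"
    if state == "off" and distance > prev_distance:
        return "on"
    return state  # no trend change: keep the current state
```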
Those skilled in the art will appreciate that the architecture shown in FIG. 15 is not intended to be limiting of the computer device 1500, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as a memory including at least one instruction, at least one program, a code set, or an instruction set, executable by a processor to perform all or part of the steps of the methods shown in the embodiments corresponding to FIG. 3, FIG. 4, FIG. 5, or FIG. 6. For example, the non-transitory computer-readable storage medium may be a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, or the like.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the terminal reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the terminal executes the virtual object control method in the virtual scene provided in the various optional implementations of the above aspects.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (30)

1. A method for controlling a virtual object in a virtual scene, the method comprising:
in response to a first virtual object being equipped with a first virtual prop and a distance between a target virtual object and the first virtual object being less than or equal to a first distance, locking the target virtual object;
in response to receiving a target operation, controlling the first virtual object to move towards the target virtual object; the target operation is an operation using the first virtual prop;
in response to the first virtual object moving to a distance less than or equal to a second distance from the target virtual object, controlling the first virtual object to act on the target virtual object using the first virtual prop.
2. The method of claim 1, wherein locking the target virtual object in response to the first virtual object being equipped with the first virtual prop and the distance between the target virtual object and the first virtual object being less than or equal to a first distance comprises:
in response to the first virtual object being equipped with the first virtual prop, obtaining a candidate object set; the candidate object set being a set of second virtual objects whose distance from the position of the first virtual object at the current moment is less than or equal to the first distance;
determining the target virtual object from the set of candidates based on target attribute information of virtual objects in the set of candidates.
3. The method of claim 2, wherein the target attribute information includes distance information indicating a distance between the corresponding virtual object and the first virtual object;
the determining the target virtual object from the set of candidates based on the target attribute information of the virtual objects in the set of candidates comprises:
taking the second virtual object in the candidate object set that is closest to the first virtual object as the target virtual object.
4. The method of claim 2, wherein the target attribute information contains a first attribute value;
the determining the target virtual object from the set of candidates based on the target attribute information of the virtual objects in the set of candidates comprises:
taking the second virtual object with the largest or smallest corresponding first attribute value in the candidate object set as the target virtual object.
5. The method of claim 1, further comprising:
in response to the first virtual object not being equipped with the first virtual prop, or in response to the first virtual object being equipped with the first virtual prop and the first virtual prop being in a throwable state, presenting a virtual scene picture at a first-person perspective;
in response to the first virtual object being equipped with the first virtual prop and the first virtual prop being in a non-throwable state, presenting the virtual scene picture at a third-person perspective.
6. The method of claim 1, wherein said controlling the first virtual object to act on the target virtual object using the first virtual prop in response to the first virtual object moving to a distance from the target virtual object that is less than or equal to a second distance comprises:
in response to the distance between the target virtual object and the first virtual object being less than or equal to the second distance and no obstruction being present between the first virtual object and the target virtual object, controlling the first virtual object to act on the target virtual object using the first virtual prop.
7. The method of claim 1, wherein said controlling the first virtual object to move towards the target virtual object in response to receiving the target operation comprises:
in response to receiving the target operation, controlling the first virtual object to move towards the target virtual object at a first speed; the first speed being greater than the moving speed of the first virtual object when the target operation is not received.
8. The method of claim 7, wherein said controlling the first virtual object to move towards the target virtual object in response to receiving the target operation comprises:
in response to receiving the target operation, controlling the first virtual object to perform a first action in moving to the target virtual object.
9. The method of claim 8,
the first action is used to trigger deduction of a first attribute value of a third virtual object; the third virtual object being a virtual object other than the target virtual object that is within the scope of action of the first action.
10. The method of claim 1, wherein in response to the first virtual object being in the process of moving, the method further comprises:
in response to receiving, while the first virtual object is equipped with the first virtual prop and is moving, a touch operation on a target control, controlling the first virtual object to perform a roll action; the target control being a control for controlling rolling.
11. The method of claim 10, wherein the controlling the first virtual object to perform a roll action in response to receiving a touch operation on a target control while the first virtual object is equipped with the first virtual prop and moving comprises:
in response to receiving, while the first virtual object is equipped with the first virtual prop and is moving, a touch operation on the target control, obtaining an operation direction of a touch operation performed on a direction control, the direction control being a control for controlling the moving direction of the first virtual object;
determining a roll type of the roll action based on the operation direction;
controlling the first virtual object to perform the roll action based on the roll type.
12. A method for controlling a virtual object in a virtual scene, the method comprising:
in response to a first virtual object being equipped with a first virtual prop, presenting a virtual scene picture;
in response to the distance between a target virtual object and the first virtual object being less than or equal to a first distance, displaying a target-locking icon superimposed at the position of the target virtual object; the target-locking icon being used to indicate that the target virtual object is locked;
in response to receiving a target operation, controlling the first virtual object to move towards the target virtual object; the target operation is an operation using the first virtual prop;
in response to the first virtual object moving to a distance less than or equal to a second distance from the target virtual object, controlling the first virtual object to act on the target virtual object using the first virtual prop.
13. The method of claim 12, wherein, prior to the controlling the first virtual object to move towards the target virtual object in response to receiving the target operation, the method further comprises:
switching a first control to a target control; the first control being a control for controlling squatting; the target control being a control for controlling rolling.
14. A method for controlling a virtual object in a virtual scene, the method comprising:
in response to a first virtual object not being equipped with a first virtual prop, presenting a virtual scene picture at a first-person perspective;
in response to receiving a first operation, presenting the virtual scene picture at a third-person perspective; the first operation being used to control the first virtual object to equip the first virtual prop;
in response to receiving a target operation and the distance between the first virtual object and a target virtual object being less than or equal to a second distance, controlling the first virtual object to act on the target virtual object using the first virtual prop; the target operation being an operation using the first virtual prop.
15. The method of claim 14, wherein, before the controlling the first virtual object to act on the target virtual object using the first virtual prop in response to receiving a target operation and the distance between the first virtual object and the target virtual object being less than or equal to a second distance, the method further comprises:
locking the target virtual object in response to a distance between the target virtual object and the first virtual object being less than or equal to a first distance.
16. The method of claim 15, wherein locking the target virtual object in response to the distance between the target virtual object and the first virtual object being less than or equal to a first distance comprises:
acquiring a candidate object set; the candidate object set being a set of second virtual objects whose distance from the position of the first virtual object at the current moment is less than or equal to the first distance;
determining the target virtual object from the set of candidates based on target attribute information of virtual objects in the set of candidates.
17. The method of claim 16, wherein the target attribute information includes distance information indicating a distance between the corresponding virtual object and the first virtual object;
the determining the target virtual object from the set of candidates based on the target attribute information of the virtual objects in the set of candidates comprises:
taking the second virtual object in the candidate object set that is closest to the first virtual object as the target virtual object.
18. The method of claim 16, wherein the target attribute information contains a first attribute value;
the determining the target virtual object from the set of candidates based on the target attribute information of the virtual objects in the set of candidates comprises:
taking the second virtual object with the largest or smallest corresponding first attribute value in the candidate object set as the target virtual object.
19. The method of claim 14, wherein presenting the virtual scene screen at a third person perspective view in response to receiving the first operation comprises:
in response to the first virtual object being equipped with the first virtual prop and the first virtual prop being in a non-throwable state, presenting the virtual scene picture at a third-person perspective.
20. The method of claim 14, further comprising:
in response to the first virtual object being equipped with the first virtual prop and the first virtual prop being in a throwable state, presenting the virtual scene picture at a first-person perspective.
21. The method of claim 14, wherein, in response to receiving a target operation and a distance between the first virtual object and the target virtual object being less than or equal to a second distance, controlling the first virtual object to act on the target virtual object using the first virtual prop comprises:
in response to receiving the target operation, controlling the first virtual object to move towards the target virtual object;
in response to the first virtual object moving to a distance from the target virtual object that is less than or equal to the second distance, controlling the first virtual object to act on the target virtual object using the first virtual prop.
22. The method of claim 21, wherein said controlling the first virtual object to act on the target virtual object using the first virtual prop in response to the first virtual object moving to a distance from the target virtual object that is less than or equal to the second distance comprises:
in response to the distance between the target virtual object and the first virtual object being less than or equal to the second distance and no obstruction being present between the first virtual object and the target virtual object, controlling the first virtual object to act on the target virtual object using the first virtual prop.
23. The method of claim 21, wherein said controlling the first virtual object to move toward the target virtual object in response to receiving the target operation comprises:
in response to receiving the target operation, controlling the first virtual object to move towards the target virtual object at a first speed; the first speed being greater than the moving speed of the first virtual object when the target operation is not received.
24. The method of claim 23, wherein said controlling the first virtual object to move toward the target virtual object in response to receiving the target operation comprises:
in response to receiving the target operation, controlling the first virtual object to perform a first action in moving to the target virtual object.
25. The method of claim 24, wherein the first action is used to trigger deduction of a first attribute value of a third virtual object; the third virtual object being a virtual object other than the target virtual object that is within the scope of action of the first action.
26. The method of claim 21, wherein in response to the first virtual object being in motion, the method further comprises:
in response to receiving, while the first virtual object is equipped with the first virtual prop and is moving, a touch operation on a target control, controlling the first virtual object to perform a roll action; the target control being a control for controlling rolling.
27. The method of claim 26, wherein the controlling the first virtual object to perform a roll action in response to receiving a touch operation on a target control while the first virtual object is equipped with the first virtual prop and moving comprises:
in response to receiving, while the first virtual object is equipped with the first virtual prop and is moving, a touch operation on the target control, obtaining an operation direction of a touch operation performed on a direction control, the direction control being a control for controlling the moving direction of the first virtual object;
determining a roll type of the roll action based on the operation direction;
controlling the first virtual object to perform the roll action based on the roll type.
28. An apparatus for controlling a virtual object in a virtual scene, the apparatus comprising:
a target locking module, configured to lock a target virtual object in response to a first virtual object being equipped with a first virtual prop and a distance between the target virtual object and the first virtual object being less than or equal to a first distance;
an object moving module, configured to control the first virtual object to move towards the target virtual object in response to receiving a target operation; the target operation being an operation using the first virtual prop;
a prop using module, configured to control the first virtual object to act on the target virtual object using the first virtual prop in response to the first virtual object moving to a position where the distance between the first virtual object and the target virtual object is less than or equal to a second distance.
29. A computer device comprising a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the method for controlling a virtual object in a virtual scene according to any one of claims 1 to 27.
30. A computer-readable storage medium, in which at least one computer program is stored, the computer program being loaded and executed by a processor to implement the method for controlling virtual objects in a virtual scene as claimed in any one of claims 1 to 27.
CN202011306335.8A 2020-11-19 2020-11-19 Virtual object control method, device, equipment and storage medium in virtual scene Active CN112402969B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011306335.8A CN112402969B (en) 2020-11-19 2020-11-19 Virtual object control method, device, equipment and storage medium in virtual scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011306335.8A CN112402969B (en) 2020-11-19 2020-11-19 Virtual object control method, device, equipment and storage medium in virtual scene

Publications (2)

Publication Number Publication Date
CN112402969A true CN112402969A (en) 2021-02-26
CN112402969B CN112402969B (en) 2022-08-09

Family

ID=74773697

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011306335.8A Active CN112402969B (en) 2020-11-19 2020-11-19 Virtual object control method, device, equipment and storage medium in virtual scene

Country Status (1)

Country Link
CN (1) CN112402969B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070270215A1 (en) * 2006-05-08 2007-11-22 Shigeru Miyamoto Method and apparatus for enhanced virtual camera control within 3d video games or other computer graphics presentations providing intelligent automatic 3d-assist for third person viewpoints
CN110917619A (en) * 2019-11-18 2020-03-27 腾讯科技(深圳)有限公司 Interactive property control method, device, terminal and storage medium
CN111589130A (en) * 2020-04-24 2020-08-28 腾讯科技(深圳)有限公司 Virtual object control method, device, equipment and storage medium in virtual scene

Also Published As

Publication number Publication date
CN112402969B (en) 2022-08-09

Similar Documents

Publication Publication Date Title
CN110694261B (en) Method, terminal and storage medium for controlling virtual object to attack
CN108815851B (en) Interface display method, equipment and storage medium for shooting in virtual environment
CN110413171B (en) Method, device, equipment and medium for controlling virtual object to perform shortcut operation
JP7419382B2 (en) Method and apparatus and computer program for controlling a virtual object to mark a virtual item
CN110755841B (en) Method, device and equipment for switching props in virtual environment and readable storage medium
CN109529319B (en) Display method and device of interface control and storage medium
US11656755B2 (en) Method and apparatus for controlling virtual object to drop virtual item and medium
CN110585710B (en) Interactive property control method, device, terminal and storage medium
WO2021143259A1 (en) Virtual object control method and apparatus, device, and readable storage medium
CN113398571B (en) Virtual item switching method, device, terminal and storage medium
CN112169325B (en) Virtual prop control method and device, computer equipment and storage medium
CN111202975B (en) Method, device and equipment for controlling foresight in virtual scene and storage medium
CN111659117B (en) Virtual object display method and device, computer equipment and storage medium
CN110917619A (en) Interactive property control method, device, terminal and storage medium
CN111714893A (en) Method, device, terminal and storage medium for controlling virtual object to recover attribute value
CN111475029B (en) Operation method, device, equipment and storage medium of virtual prop
CN111744184A (en) Control display method in virtual scene, computer equipment and storage medium
WO2021143253A1 (en) Method and apparatus for operating virtual prop in virtual environment, device, and readable medium
CN111672102A (en) Virtual object control method, device, equipment and storage medium in virtual scene
CN113713382A (en) Virtual prop control method and device, computer equipment and storage medium
CN112354180A (en) Method, device and equipment for updating integral in virtual scene and storage medium
CN112316421A (en) Equipment method, device, terminal and storage medium of virtual prop
CN112138374A (en) Virtual object attribute value control method, computer device, and storage medium
CN113713383A (en) Throwing prop control method and device, computer equipment and storage medium
CN111921190B (en) Prop equipment method, device, terminal and storage medium for virtual object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40038710
Country of ref document: HK
GR01 Patent grant