CN112044073B - Using method, device, equipment and medium of virtual prop - Google Patents
- Publication number: CN112044073B
- Application number: CN202010948781.2A
- Authority
- CN
- China
- Prior art keywords
- virtual
- virtual character
- control
- main control
- master
- Legal status: Active
Classifications
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/55—Controlling game characters or game objects based on the game progress
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
- A63F13/80—Special adaptations for executing a specific game genre or game mode
- A63F13/837—Shooting of targets
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/80—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
- A63F2300/8076—Shooting
Abstract
The present application discloses a method, apparatus, device, and medium for using a virtual prop, and relates to the field of virtual environments. The method includes: displaying a virtual environment picture, a first use control, and a second use control, where the virtual environment picture includes a master virtual character, a first virtual prop is equipped on a first limb of the master virtual character, and a second virtual prop is equipped on a second limb of the master virtual character; controlling the master virtual character to use the first virtual prop in response to receiving a trigger operation on the first use control; and, while the first virtual prop is in use, controlling the master virtual character to use the second virtual prop in response to receiving a trigger operation on the second use control. The method can improve the human-computer interaction efficiency with which a user controls a virtual character to use two props.
Description
Technical Field
The embodiments of the present application relate to the field of virtual environments, and in particular to a method, apparatus, device, and medium for using a virtual prop.
Background
In applications based on a three-dimensional virtual environment, such as first-person shooter games, a user can control a virtual character in the virtual environment to use virtual props. For example, the user can control the virtual character to shoot with a firearm prop, or to hook a hook claw onto a high point of a building so that the hook claw pulls the virtual character up to that position.
In the related art, the virtual prop equipped on the virtual character by default is a firearm prop. When the user wants the virtual character to use the hook claw, the user must first tap the hook claw's equip control so that the virtual character unloads the firearm prop and equips the hook claw prop, and then tap the launch control to fire the hook claw at the target position; after the hook claw is fixed at the target position, it automatically pulls the virtual character to that position. When the movement ends, the virtual character automatically switches back to the firearm prop, and the user can again control the virtual character to shoot with it.
With the method in the related art, the operations by which the user controls the virtual character to use the firearm prop and the hook claw are too cumbersome: the virtual character must switch equipment between the two props before either can be used, so the human-computer interaction efficiency of using the props is low.
Disclosure of Invention
The embodiments of the present application provide a method, apparatus, device, and medium for using a virtual prop, which can improve the human-computer interaction efficiency with which a user controls a virtual character to use two props. The technical solution is as follows:
In one aspect, a method for using a virtual prop is provided, the method including:
displaying a virtual environment picture, a first use control, and a second use control, where the virtual environment picture includes a master virtual character, a first virtual prop is equipped on a first limb of the master virtual character, and a second virtual prop is equipped on a second limb of the master virtual character;
controlling the master virtual character to use the first virtual prop in response to receiving a trigger operation on the first use control; and
while the first virtual prop is in use, controlling the master virtual character to use the second virtual prop in response to receiving a trigger operation on the second use control.
In another aspect, an apparatus for using a virtual prop is provided, the apparatus including:
a display module, configured to display a virtual environment picture, a first use control, and a second use control, where the virtual environment picture includes a master virtual character, a first virtual prop is equipped on a first limb of the master virtual character, and a second virtual prop is equipped on a second limb of the master virtual character;
an interaction module, configured to receive a trigger operation on the first use control;
a control module, configured to control the master virtual character to use the first virtual prop in response to receiving the trigger operation on the first use control;
the interaction module being further configured to receive a trigger operation on the second use control; and
the control module being further configured to control the master virtual character to use the second virtual prop in response to receiving the trigger operation on the second use control while the first virtual prop is in use.
In another aspect, a computer device is provided, including a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set that is loaded and executed by the processor to implement the method for using a virtual prop described above.
In another aspect, a computer-readable storage medium is provided, storing at least one instruction, at least one program, a code set, or an instruction set that is loaded and executed by a processor to implement the method for using a virtual prop described above.
In another aspect, the embodiments of the present application provide a computer program product or computer program including computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the method for using a virtual prop provided in the above optional implementations.
The beneficial effects brought by the technical solutions provided in the embodiments of the present application include at least the following:
By equipping one virtual prop on each hand of the master virtual character, the master virtual character can carry two virtual props at the same time. When using one of them, the master virtual character no longer needs to go through the cumbersome steps of unloading or stowing one prop, equipping the other, and only then using it, which improves the human-computer interaction efficiency of prop operation. Moreover, while controlling the master virtual character to use one virtual prop, the user can also control it to use the other, so the two props can be used simultaneously and their usage efficiency is improved; with each hand controlling one prop, the behavior is also closer to the real-world situation of operating two different objects with two hands, making the master virtual character's use of the props more realistic. Because a separate use control is provided on the virtual environment picture for each of the two virtual props, the user can use a given prop simply by triggering the corresponding use control. Compared with providing only one prop use control, which requires switching the currently equipped prop before a different prop can be used, providing a use control for each virtual prop means the master virtual character does not need to switch props at all: both props are equipped directly, and the user can control the master virtual character to use the two virtual props at the same time simply by operating the different controls. This improves the human-computer interaction efficiency of prop use, saves the time otherwise needed to switch the master virtual character between the two props, and improves the efficiency with which the master virtual character uses the two virtual props.
Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of a terminal provided in an exemplary embodiment of the present application;
FIG. 2 is a block diagram of a computer system provided in an exemplary embodiment of the present application;
FIG. 3 is a flowchart of a method for using a virtual prop, provided in accordance with another exemplary embodiment of the present application;
FIG. 4 is a schematic view of a camera model corresponding to a perspective provided by another exemplary embodiment of the present application;
FIG. 5 is a schematic view of a user interface of a method of using a virtual prop provided in another exemplary embodiment of the present application;
FIG. 6 is a flowchart of a method for using a virtual prop, provided in accordance with another exemplary embodiment of the present application;
FIG. 7 is a schematic user interface diagram of a method of using a virtual prop provided in accordance with another exemplary embodiment of the present application;
FIG. 8 is a schematic user interface diagram of a method of using a virtual prop provided in accordance with another exemplary embodiment of the present application;
FIG. 9 is a flowchart of a method for using a virtual prop, provided in accordance with another exemplary embodiment of the present application;
FIG. 10 is a schematic view of a method of using a virtual prop provided in another exemplary embodiment of the present application;
FIG. 11 is a flowchart of a method for using a virtual prop, provided in accordance with another exemplary embodiment of the present application;
FIG. 12 is a schematic view of a method of using a virtual prop provided in another exemplary embodiment of the present application;
FIG. 13 is a schematic view of a method of using a virtual prop provided in accordance with another exemplary embodiment of the present application;
FIG. 14 is a schematic view of a method of using a virtual prop provided in another exemplary embodiment of the present application;
FIG. 15 is a schematic view of a method of using a virtual prop provided in accordance with another exemplary embodiment of the present application;
FIG. 16 is a method flow diagram of a method of using a virtual prop provided in accordance with another exemplary embodiment of the present application;
FIG. 17 is a schematic view of a method of using a virtual prop provided in accordance with another exemplary embodiment of the present application;
FIG. 18 is a flowchart of a method for using a virtual prop, provided in accordance with another exemplary embodiment of the present application;
FIG. 19 is a flowchart of a method for using a virtual prop, provided in accordance with another exemplary embodiment of the present application;
FIG. 20 is a schematic user interface diagram of a method of using a virtual prop provided in accordance with another exemplary embodiment of the present application;
FIG. 21 is a flowchart of a method for using a virtual prop, provided in accordance with another exemplary embodiment of the present application;
FIG. 22 is a schematic user interface diagram of a method of using a virtual prop provided in accordance with another exemplary embodiment of the present application;
FIG. 23 is a flowchart of a method for using a virtual prop, provided in accordance with another exemplary embodiment of the present application;
FIG. 24 is a flowchart of a method for using a virtual prop, provided in accordance with another exemplary embodiment of the present application;
FIG. 25 is a block diagram of an apparatus for using a virtual prop provided in accordance with another exemplary embodiment of the present application;
fig. 26 is a block diagram of a terminal provided in an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the embodiments of the present application are described in further detail below with reference to the accompanying drawings.
First, terms referred to in the embodiments of the present application are briefly described:
virtual environment: is a virtual environment that is displayed (or provided) when an application is run on the terminal. The virtual environment may be a simulated world of a real world, a semi-simulated semi-fictional world, or a pure fictional world. The virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment, which is not limited in this application. The following embodiments are illustrated with the virtual environment being a three-dimensional virtual environment.
Virtual character: a movable object in the virtual environment. The movable object may be a virtual person, a virtual animal, an animation character, and the like, such as the characters, animals, plants, oil drums, walls, and stones displayed in a three-dimensional virtual environment. Optionally, the virtual character is a three-dimensional volumetric model created based on animated skeleton technology. Each virtual character has its own shape and volume in the three-dimensional virtual environment and occupies part of the space in it. Illustratively, the virtual character has a life value; when the life value drops to zero, the virtual character can no longer move in the virtual world. The life value is the criterion for determining whether the virtual character can continue to act in the virtual world, and may also be called a signal value, a health bar, and the like.
First-person shooter (FPS) game: a shooting game that the user plays from a first-person perspective, in which the virtual environment picture is the virtual environment observed from the perspective of a first virtual character. In the game, at least two virtual characters fight a single match in the virtual environment; a virtual character survives by avoiding attacks launched by other virtual characters and/or dangers existing in the virtual environment (such as a poison circle, a swamp, or a bomb). When the life value of a virtual character in the virtual environment drops to zero, its life in the virtual environment ends, and the virtual characters that survive to the end are the winners. Optionally, a battle takes the moment the first client joins it as its start time and the moment the last client exits it as its end time, and each client may control one or more virtual characters in the virtual environment. Optionally, the competitive modes of the battle may include a solo mode, a two-player team mode, or a multi-player team mode; the battle mode is not limited in the embodiments of the present application.
UI (User Interface) control: any visual control or element that can be seen on the user interface of the application, such as a picture, an input box, a text box, a button, or a label. Some UI controls respond to user operations; for example, the user triggers the first use control to control the master virtual character to use the first virtual prop. The UI controls referred to in the embodiments of the present application include, but are not limited to: the first use control, the second use control, and the aiming control.
The method provided in the present application can be applied to an application that has a virtual environment and virtual characters. Illustratively, an application supporting a virtual environment is one in which the user can control the movement of a virtual character within the virtual environment. By way of example, the method provided herein may be applied to any one of a Virtual Reality (VR) application, an Augmented Reality (AR) application, a three-dimensional map program, a virtual reality game, an augmented reality game, a First-Person Shooter (FPS) game, a Third-Person Shooter (TPS) game, a Multiplayer Online Battle Arena (MOBA) game, and a Strategy game (SLG).
Illustratively, a game in the virtual environment consists of the maps of one or more game worlds. The virtual environment in the game simulates scenes of the real world, and the user can control a virtual character in the game to walk, run, jump, shoot, fight, drive, attack other virtual characters with virtual weapons, and so on in the virtual environment. The interactivity is strong, and multiple users can team up online for a competitive game.
In some embodiments, the application may be a shooting game, a racing game, a role-playing game, an adventure game, a sandbox game, a tactical competitive game, or the like. The client can support at least one of the Windows, Apple, Android, iOS, and LINUX operating systems, and clients on different operating systems can interconnect and interoperate. In some embodiments, the client is a program adapted to a mobile terminal with a touch screen.
In some embodiments, the client is an application developed based on a three-dimensional engine, for example the Unity engine.
The terminal in the present application may be a desktop computer, a laptop computer, a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, and so on. A client supporting a virtual environment, such as a client of an application supporting a three-dimensional virtual environment, is installed and run on the terminal. The application may be any one of a Battle Royale (BR) game, a virtual reality application, an augmented reality program, a three-dimensional map program, a third-person shooter game, a first-person shooter game, and a multiplayer online battle arena game. The application may be a stand-alone application, such as a stand-alone 3D game program, or a network online application.
Fig. 1 is a schematic structural diagram of a terminal according to an exemplary embodiment of the present application. As shown in fig. 1, the terminal includes a processor 101, a touch screen 102, and a memory 103.
The processor 101 may be at least one of a single-core processor, a multi-core processor, an embedded chip, and a processor having instruction execution capabilities.
The touch screen 102 includes a normal touch screen or a pressure sensitive touch screen. The normal touch screen can measure a pressing operation or a sliding operation applied to the touch screen 102; a pressure sensitive touch screen can measure the degree of pressure exerted on the touch screen 102.
The memory 103 stores executable programs for the processor 101. Illustratively, the memory 103 stores a virtual environment program A, an application program B, an application program C, a touch and pressure sensing module 18, and a kernel layer 19 of the operating system. The virtual environment program A is an application developed based on the three-dimensional virtual environment module 17. Optionally, the virtual environment program A includes, but is not limited to, at least one of a game program, a virtual reality program, a three-dimensional map program, and a three-dimensional presentation program developed with the three-dimensional virtual environment module (also called the virtual environment module) 17. For example, when the operating system of the terminal is the Android operating system, the virtual environment program A is developed using the Java programming language and the C# language; for another example, when the operating system of the terminal is the iOS operating system, the virtual environment program A is developed using the Objective-C programming language and the C# language.
The three-dimensional virtual environment module 17 is a module supporting multiple operating system platforms. Schematically, it can be used for program development in multiple fields, such as game development, Virtual Reality (VR), and three-dimensional maps. The specific type of the three-dimensional virtual environment module 17 is not limited in the embodiments of the present application; in the following embodiments, the three-dimensional virtual environment module 17 is described as a module developed with the Unity engine.
The touch (and pressure) sensing module 18 is a module that receives touch events (and pressure touch events) reported by the touch screen driver 191; optionally, the touch sensing module may lack the pressure sensing function and not receive pressure touch events. A touch event includes the type of the touch event and coordinate values; the types include, but are not limited to, a touch start event, a touch move event, and a touch down event. A pressure touch event includes a pressure value and coordinate values, the coordinate values indicating the touch position of the pressure touch operation on the display screen. Optionally, an abscissa axis is established in the horizontal direction of the display screen and an ordinate axis in the vertical direction, yielding a two-dimensional coordinate system.
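As an illustration only, the touch event and pressure touch event payloads described above could be represented roughly as in the following minimal C# sketch; the type and field names are assumptions of this description, not part of the embodiment:
```csharp
// Minimal sketch of the event payloads reported by the touch screen driver.
// Names are illustrative assumptions; only the fields named in the description are modeled.
public enum TouchEventType { TouchStart, TouchMove, TouchDown }

public struct TouchEvent
{
    public TouchEventType Type; // type of the touch event
    public float X;             // abscissa on the screen's horizontal axis
    public float Y;             // ordinate on the screen's vertical axis
}

public struct PressureTouchEvent
{
    public float Pressure;      // degree of pressure applied on the touch screen
    public float X;             // touch position of the pressure touch operation
    public float Y;
}
```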
Illustratively, the kernel layer 19 includes a touch screen driver 191 and other drivers 192. The touch screen driver 191 is a module for detecting a pressure touch event, and when the touch screen driver 191 detects the pressure touch event, the pressure touch event is transmitted to the pressure sensing module 18.
Other drivers 192 may be drivers associated with the processor 101, drivers associated with the memory 103, drivers associated with network components, drivers associated with sound components, and the like.
Those skilled in the art will appreciate that the foregoing is merely a general illustration of the structure of the terminal. A terminal may have more or fewer components in different embodiments. For example, the terminal may further include a gravitational acceleration sensor, a gyro sensor, a power supply, and the like.
Fig. 2 shows a block diagram of a computer system provided in an exemplary embodiment of the present application. The computer system 200 includes: terminal 210, server cluster 220.
The terminal 210 has a client 211 supporting a virtual environment installed and running on it; the client 211 may be an application supporting a virtual environment. When the terminal runs the client 211, the user interface of the client 211 is displayed on the screen of the terminal 210. The client may be any one of an FPS game, a TPS game, a MOBA game, a tactical competitive game, and an SLG game. In this embodiment, the client is described by taking an FPS game as an example. The terminal 210 is a terminal used by the first user 212, who uses the terminal 210 to control the master virtual character located in the virtual environment to perform activities; this master virtual character may be called the master virtual character of the first user 212. The activities of the master virtual character include, but are not limited to: adjusting body posture, crawling, walking, running, riding, flying, jumping, driving, picking up, shooting, attacking, and throwing. Illustratively, the master virtual character is a virtual figure, such as a simulated person character or an animation character.
The device types of the terminal 210 include: at least one of a smartphone, a tablet, an e-book reader, an MP3 player, an MP4 player, a laptop portable computer, and a desktop computer.
Only one terminal is shown in fig. 2, but a plurality of other terminals 240 may exist in different embodiments. In some embodiments, at least one other terminal 240 corresponds to a developer; a development and editing platform for the client of the virtual environment is installed on the other terminal 240, the developer can edit and update the client on it and transmit the updated client installation package to the server cluster 220 through a wired or wireless network, and the terminal 210 can download the client installation package from the server cluster 220 to update the client.
The terminal 210 and the other terminals 240 are connected to the server cluster 220 through a wireless network or a wired network.
The server cluster 220 includes at least one of a server, a plurality of servers, a cloud computing platform, and a virtualization center. Server cluster 220 is used to provide background services for clients that support a three-dimensional virtual environment. Optionally, the server cluster 220 undertakes primary computing work and the terminals undertake secondary computing work; or, the server cluster 220 undertakes the secondary computing work, and the terminal undertakes the primary computing work; or, the server cluster 220 and the terminal perform cooperative computing by using a distributed computing architecture.
Optionally, the terminal and the server are both computer devices.
In one illustrative example, the server cluster 220 includes servers 221 and 226, where the server 221 includes a processor 222, a user account database 223, a combat service module 224, and a user-oriented Input/Output Interface (I/O Interface) 225. The processor 222 is configured to load instructions stored in the server 221 and to process the data in the user account database 223 and the combat service module 224; the user account database 223 is used to store data of the user accounts used by the terminal 210 and the other terminals 240, such as the avatars of the user accounts, the nicknames of the user accounts, the combat power indexes of the user accounts, and the service areas where the user accounts are located; the combat service module 224 is used to provide multiple combat rooms for users to fight in; and the user-facing I/O interface 225 is used to establish communication and exchange data with the terminal 210 through a wireless network or a wired network.
With reference to the above descriptions of the virtual environment and of the implementation environment, the method for using a virtual prop provided in the embodiments of the present application is described below; the execution subject is illustrated as the client running on the terminal shown in fig. 1. The terminal runs an application program that supports a virtual environment.
Fig. 3 is a flowchart of a method for using a virtual prop according to an exemplary embodiment of the present application. The method is described with its execution subject being the client running on the terminal shown in fig. 1, the client being a client supporting a virtual environment; the method includes at least the following steps.
Step 301: display a virtual environment picture, a first use control, and a second use control, where the virtual environment picture includes a master virtual character, a first virtual prop is equipped on a first limb of the master virtual character, and a second virtual prop is equipped on a second limb of the master virtual character.
Illustratively, a first use control is for using a first virtual prop and a second use control is for using a second virtual prop.
Illustratively, after a battle starts, the client displays a user interface of the battle, which includes the virtual environment picture. For example, the UI controls may include a first use control for using the first virtual prop, a second use control for using the second virtual prop, and a third use control for using a third virtual prop. Illustratively, the user interface of the battle may further include: a team-up interface for forming a team with friends, a matching interface for matching the virtual character with other virtual characters, a loading interface for loading information of the current match, and the like.
Illustratively, after the match starts, the user controls the master virtual character to move in the virtual environment, and the client observes the virtual environment from the perspective of the master virtual character to obtain the virtual environment picture. The master virtual character in this embodiment is the virtual character controlled by the client.
The perspective refers to the observation angle at which the virtual environment is observed from the first-person or third-person perspective of the master virtual character.
Optionally, the description takes the perspective of the master virtual character as an example; in the embodiments of the present application, the perspective is the angle at which the master virtual character is observed through a camera model in the virtual environment.
Optionally, the camera model automatically follows the master virtual character in the virtual environment; that is, when the position of the master virtual character in the virtual environment changes, the camera model changes with it, and the camera model always stays within a preset distance range of the master virtual character. Optionally, during the automatic following, the relative positions of the camera model and the master virtual character do not change.
The camera model is a three-dimensional model located around the master virtual character in the virtual environment. When the first-person perspective is used, the camera model is located near or at the head of the master virtual character. When the third-person perspective is used, the camera model may be located behind the master virtual character and bound to it, or at any position at a preset distance from the master virtual character, and the master virtual character in the virtual environment can be observed from different angles through the camera model. Optionally, when the third-person perspective is the first person's over-the-shoulder perspective, the camera model is located behind the master virtual character (for example, behind the head and shoulders). Optionally, besides the first-person and third-person perspectives, other perspectives, such as a top-down perspective, may be used; when a top-down perspective is used, the camera model may be located above the head of the master virtual character, observing the virtual environment from the air. Optionally, the camera model is not actually displayed in the virtual environment, that is, it does not appear in the virtual environment picture displayed on the user interface.
Taking the case where the camera model is located at any position at a preset distance from the master virtual character as an example: optionally, one master virtual character corresponds to one camera model, and the camera model can rotate with the master virtual character as the rotation center, for example around any point of the master virtual character. During the rotation, the camera model not only rotates in angle but also shifts in position, while the distance between the camera model and the rotation center remains unchanged; that is, the camera model rotates on the surface of a sphere whose center is the rotation center. Here, any point of the master virtual character may be the head, the torso, or any point around the master virtual character, which is not limited in the embodiments of the present application. Optionally, when the camera model observes the master virtual character, the center of its viewing angle points in the direction from the point on the sphere where the camera model is located toward the sphere center.
Optionally, the camera model may also observe the master virtual character at preset angles in different directions of the master virtual character.
Referring to fig. 4, schematically, a point in the master virtual character 11 is determined as the rotation center 12, and the camera model rotates around the rotation center 12. Optionally, the camera model is configured with an initial position, which is a position above and behind the master virtual character (for example, behind the head). Illustratively, as shown in fig. 4, the initial position is position 13; when the camera model rotates to position 14 or position 15, the direction of its viewing angle changes with the rotation.
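As a non-limiting illustration of the camera model described above, the following Unity C# sketch keeps a camera on a sphere of fixed radius around a rotation center and aims its viewing angle at that center; the class name, field names, and default values are assumptions of this illustration:
```csharp
using UnityEngine;

// Sketch: orbit a camera on a sphere of constant radius around the rotation center
// of the master virtual character, with the view-angle center pointing at the center.
public class OrbitCameraSketch : MonoBehaviour
{
    public Transform rotationCenter; // a point on the master virtual character (assumed)
    public float radius = 4f;        // constant camera-to-center distance (assumed)
    public float yaw;                // horizontal angle, driven by the player's view input
    public float pitch = 20f;        // vertical angle

    void LateUpdate()
    {
        // Rotating changes both the angle and the position of the camera model,
        // while the distance to the rotation center stays unchanged.
        Quaternion rotation = Quaternion.Euler(pitch, yaw, 0f);
        transform.position = rotationCenter.position + rotation * (Vector3.back * radius);
        transform.LookAt(rotationCenter.position); // point the view angle at the sphere center
    }
}
```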
Optionally, the virtual environment displayed by the virtual environment screen includes: ladder, vertical ladder, climbing rock area, mountain, flat ground, river, lake, sea, desert, marsh, quicksand, sky, plant, building, vehicle.
Illustratively, the virtual environment picture includes the master virtual character. When the virtual environment picture is obtained from the first-person perspective of the master virtual character, it includes the hands of the master virtual character; when it is obtained from the third-person perspective, it includes the upper body or the whole body of the master virtual character.
Illustratively, the two virtual props are respectively equipped on two limbs of the master virtual character. The limbs of the master virtual character include the left upper limb, the right upper limb, the left lower limb, and the right lower limb. The first limb may be any one of the four limbs, the second limb may be any one of the four limbs, and the first limb is different from the second limb. For example, when the first virtual prop is equipped on the left upper limb of the master virtual character and the second virtual prop on the right upper limb, the first virtual prop may be equipped on any part of the left upper limb, such as the arm or the hand, and the second virtual prop may be equipped on any part of the right upper limb, such as the arm or the hand.
Illustratively, the first virtual prop and the second virtual prop are two different virtual props, in the sense that each of them occupies its own equipment slot of the master virtual character. Each of the first virtual prop and the second virtual prop can be equipped on a limb of the master virtual character and used on its own; for example, the master virtual character may be equipped with only the first virtual prop, or with only the second virtual prop. Illustratively, the second virtual prop is a virtual prop equipped by default after the master virtual character enters the match. The first virtual prop may also be a virtual prop equipped by default after the master virtual character enters the match, or a virtual prop that the user equips on the master virtual character after entering the match. For example, after entering the match, the master virtual character is equipped with a firearm prop (the second virtual prop), and it can pick up a traction prop (the first virtual prop) from the virtual environment and then equip it. Illustratively, the first virtual prop may also be called a secondary weapon, and the second virtual prop a primary weapon.
Illustratively, the first virtual prop and the second virtual prop are virtual props with different functions or different effects. For example, the first virtual prop is a non-firearm prop and the second virtual prop is a firearm prop; or the first virtual prop is a throwing prop and the second virtual prop is a launching prop; or the first virtual prop is a prop used for traction and the second virtual prop is a prop used for attack. For example, the first virtual prop may be any one of a hook claw, a dart, and a grenade, and the second virtual prop may be any one of a submachine gun, a rifle, and a pistol.
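Purely as an illustration of the two independent equipment slots described above, the following C# sketch gives one possible data layout; the type and member names are assumptions and not part of the claimed method:
```csharp
// Sketch: the master virtual character holds one equipment slot per limb, so equipping
// the first virtual prop never requires unloading or switching the second virtual prop.
public enum Limb { LeftUpper, RightUpper, LeftLower, RightLower }

public abstract class VirtualProp
{
    public abstract void Use();
}

public class MasterCharacterEquipment
{
    public Limb FirstLimb = Limb.LeftUpper;   // e.g. left hand carries the traction prop
    public Limb SecondLimb = Limb.RightUpper; // e.g. right hand carries the firearm prop
    public VirtualProp FirstProp;             // secondary weapon, may be picked up in the match
    public VirtualProp SecondProp;            // primary weapon, equipped by default

    public void EquipFirst(VirtualProp prop) { FirstProp = prop; }   // SecondProp is untouched
    public void EquipSecond(VirtualProp prop) { SecondProp = prop; } // FirstProp is untouched
}
```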
The first use control and the second use control are UI controls displayed over the virtual environment picture, and they can respond to trigger operations of the user. Illustratively, at least one first use control and at least one second use control are displayed on the virtual environment picture; when multiple use controls are provided, the user can trigger any one of them to achieve the corresponding control effect.
For example, as shown in fig. 5, a left hand and a right hand of the master virtual character are displayed on the virtual environment screen, wherein the left hand of the master virtual character is equipped with a first virtual prop (claw) 401, the right hand is equipped with a second virtual prop (gun) 402, and a first use control 403 and a second use control 404 are also displayed on the virtual environment screen.
Illustratively, the first use control is used to control the master virtual character to use the first virtual prop. For example, the user may control the master virtual character to use the first virtual prop by triggering the first use control, or through an operation input by another input device, such as a mouse click, a touch operation on a touch pad, a key press on a keyboard, a voice instruction, or a motion-recognition operation through a virtual reality or augmented reality apparatus.
Illustratively, the trigger operation may be any one of a click, a double click, a long press, a drag, and a slide. For example, in response to receiving a click operation on the first use control, the master virtual character is controlled to use the first virtual prop. Different trigger operations may trigger different use effects of the first virtual prop. Taking the first virtual prop as an example: when the trigger operation is a click, the master virtual character can be controlled to use the short-range traction function of the first virtual prop, in which the hook claw is ejected at a higher speed (it needs less time to reach the target position), the traction rope is shorter, and the master virtual character is pulled at a higher speed to a closer target position; when the trigger operation is a long press, the master virtual character can be controlled to use the long-range traction function of the first virtual prop, in which the hook claw is ejected at a lower speed (it needs more time to reach the target position), the traction rope is longer, and the master virtual character can be pulled to a farther target position. For another example, when the trigger operation is a click, the master virtual character can be controlled to use the traction function of the first virtual prop: the first virtual prop ejects the hook claw, and after the hook claw is fixed, the traction rope connected to it is automatically retracted to pull the master virtual character to the target position. When the trigger operation is a double click, the master virtual character can be controlled to use the swing function of the first virtual prop: the first virtual prop ejects the hook claw, and after the hook claw is fixed, the traction rope connected to it is not automatically retracted; instead, the master virtual character can swing back and forth with the position of the hook claw as the circle center and the length of the traction rope as the radius, for example swinging from one building to another. When the user double-clicks the first use control again, the first virtual prop is controlled to retract the traction rope and the hook claw.
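A minimal Unity C# sketch of mapping different trigger operations on the first use control to different prop functions is given below; the thresholds, names, and decision logic are assumptions of this illustration, and the double-click-to-swing mapping is only noted in a comment:
```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Sketch: distinguish a click (short-range traction) from a long press (long-range
// traction) on the first use control and forward the operation to the traction prop.
public class FirstUseControlSketch : MonoBehaviour, IPointerDownHandler, IPointerUpHandler
{
    public MonoBehaviour propBehaviour;   // assumed to implement ITractionProp
    const float LongPressSeconds = 0.5f;  // assumed long-press threshold

    ITractionProp prop;
    float downTime;

    void Awake() { prop = propBehaviour as ITractionProp; }

    public void OnPointerDown(PointerEventData eventData) { downTime = Time.unscaledTime; }

    public void OnPointerUp(PointerEventData eventData)
    {
        float held = Time.unscaledTime - downTime;
        if (held >= LongPressSeconds)
            prop.UseLongRangePull();   // slower claw, longer rope, farther target position
        else
            prop.UseShortRangePull();  // faster claw, shorter rope, closer target position
        // A double-click-to-swing mapping would additionally buffer the first click for a
        // short interval before committing to the short-range pull; omitted in this sketch.
    }
}

public interface ITractionProp
{
    void UseShortRangePull();
    void UseLongRangePull();
    void UseSwing();
}
```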
Illustratively, the use of the first virtual prop is a process that lasts for some time. For example, when the first virtual prop is a traction prop, its use includes the two processes of ejecting the hook claw and dragging the master virtual character to move. For another example, when the first virtual prop is a shield prop, its use includes the process of raising the shield; when the first virtual prop is a glider prop, its use includes the gliding process.
Illustratively, the master virtual character need not retract the second virtual prop when using the first virtual prop.
For example, during the use of the first virtual prop, the user may control the master virtual character to use the second virtual prop by triggering the second use control; that is, the master virtual character uses the first virtual prop and the second virtual prop simultaneously.
For example, the first use control and the second use control may also be the same UI control, with control over the first virtual prop and the second virtual prop achieved through different trigger manners. For example, both are the shooting control on the user interface: when the user clicks the shooting control, the master virtual character is controlled to use the second virtual prop, and when the user long-presses or double-clicks the shooting control, the master virtual character is controlled to use the first virtual prop.
In summary, in the method provided in this embodiment, the master virtual character is equipped with one virtual prop on each hand, so it can carry two virtual props at the same time. When using one of them, it no longer needs to go through the cumbersome steps of unloading or stowing one prop, equipping the other, and only then using it, which improves the human-computer interaction efficiency of prop operation. While controlling the master virtual character to use one virtual prop, the user can also control it to use the other, so the two props can be used simultaneously and their usage efficiency is improved; with each hand controlling one prop, the behavior is also closer to the real-world situation of operating two different objects with two hands, making the master virtual character's use of the props more realistic. Because a separate use control is provided on the virtual environment picture for each of the two virtual props, the user can use a given prop simply by triggering the corresponding use control. Compared with providing only one prop use control, which requires switching the currently equipped prop before a different prop can be used, providing a use control for each virtual prop means the master virtual character does not need to switch props at all: both props are equipped directly, and the user can control the master virtual character to use the two virtual props at the same time simply by operating the different controls. This improves the human-computer interaction efficiency of prop use, saves the time otherwise needed to switch between the two props, and improves the efficiency with which the master virtual character uses the two virtual props.
An exemplary embodiment in which the first virtual prop is a traction prop is given below.
Fig. 6 is a flowchart of a method for using a virtual prop according to an exemplary embodiment of the present application. Taking the execution subject of the method as the client running on the terminal shown in fig. 1, the client being a client supporting a virtual environment, and based on the exemplary embodiment shown in fig. 3, step 302 further includes steps 3021 and 3022, and step 303 further includes steps 3031 and 3032.
Step 3021: in response to receiving the trigger operation on the first use control, control the master virtual character to use the launching device to eject the ejection component in the aiming direction of the aiming control.
Illustratively, the first virtual prop is a traction prop that includes a launching device, an ejection component, and a traction component. The launching device is fixed on the first limb of the master virtual character, the traction component is connected to the launching device, and the ejection component is connected to the traction component. Illustratively, an aiming control is also displayed over the virtual environment picture.
Illustratively, the ejection component may be at least one of a hook claw, the end of a rope carrying a weight, and a fixing device. For example, when the ejection component is a hook claw, the claw automatically hooks onto or engages a fixable surface when it hits such a surface after being ejected. For another example, when the ejection component is the end of a rope carrying a weight, the weight, upon striking a columnar object, can automatically wrap around the column and lock, so that the rope is fixed on the columnar object. Illustratively, the traction component may be at least one of a traction rope and a wire.
For example, when the first virtual prop is used, the launching device ejects the ejection component, which flies out together with the traction component. When the ejection component strikes a fixable surface in the virtual environment, it is automatically fixed at the impact position (the target position); after the ejection component is fixed, the traction component is automatically retracted, pulling the launching device, and with it the master virtual character, toward the impact position. A fixable surface is a surface in the virtual environment on which the ejection component can be fixed; whether each surface in the virtual environment is a fixable surface can be determined by whether the three-dimensional virtual model to which the surface belongs is a tractable object. For example, the surface of a building is a fixable surface, while the surfaces of virtual characters, virtual vehicles, and the like are non-fixable surfaces. When the ejection component does not strike any surface, or strikes a non-fixable surface, the traction component is automatically retracted, pulling the ejection component back to the launching device, and the user can trigger the first use control again to eject the ejection component once more.
For example, when the aiming control is aimed at the target position, the user triggers the first use control to use the first virtual prop, and the launching device ejects the ejection component toward the target position. Illustratively, the position of the aiming control on the user interface is fixed; the user can change the virtual environment picture on the user interface by controlling the movement of the master virtual character and the rotation of its viewing angle, thereby aiming the aiming control at different positions in the virtual environment.
For example, as shown in fig. 7, an aiming control 405 is also displayed on the virtual environment picture. The user triggers the first use control 403 to control the launching device of the first virtual prop to eject the ejection component 406 along the aiming direction of the aiming control 405. Illustratively, the ejection component 406 flies out together with the traction component 407.
Illustratively, in response to receiving the trigger operation on the first use control, the client emits a first detection ray in the aiming direction of the aiming control with the position of the launching device as the starting point; and, in response to the first detection ray intersecting, within the maximum range, a three-dimensional virtual model in the virtual environment at the target position, the client controls the ejection component to be ejected along the aiming direction to the target position and fixed there.
Illustratively, the traction component has a maximum range, for example 10 m. When the user triggers the first use control, the client takes the position of the ejection component on the launching device as the starting point and emits, in the direction aimed at by the aiming control, a detection ray whose length equals the maximum range. If the detection ray collides with any surface in the virtual environment before reaching the maximum range, the client obtains the collision information of this collision, including the position of the collision point (the target position), information on the three-dimensional virtual model of the collided surface, and the like. Based on the three-dimensional virtual model information of the collided surface, the client determines whether the three-dimensional virtual model is an object on which the ejection component can be fixed. If it is, the ejection component is controlled to be ejected along the aiming direction to the target position and fixed there; if it is not, the ejection component is controlled to be ejected along the aiming direction to the target position and then fall off, and the traction component is controlled to pull the ejection component back to the launching device.
Illustratively, the detection ray is used to determine the target position hit by the ejection component and is not visible on the virtual environment picture; it is a detection of the target position performed by the client in the background before the ejection component is displayed. The logical operation of detecting the target position with the detection ray may be computed by the client or by the server. Illustratively, after the target position is obtained from the detection ray, the virtual environment picture in which the ejection component is ejected toward the target position along the aiming direction is displayed.
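A minimal Unity C# sketch of this background detection is given below, assuming fixable surfaces are marked with a tag; the names, the tag, and the 10 m range are assumptions of this illustration rather than the claimed implementation:
```csharp
using UnityEngine;

// Sketch: cast a first detection ray of length equal to the maximum range from the
// launching device along the aiming direction, and accept the hit point as the target
// position only if the hit three-dimensional virtual model can fix the ejection component.
public static class TractionTargeting
{
    public const float MaxRange = 10f; // example maximum range of the traction component

    public static bool TryFindTarget(Vector3 launcherPosition, Vector3 aimDirection,
                                     out Vector3 targetPosition)
    {
        targetPosition = default;
        if (Physics.Raycast(launcherPosition, aimDirection, out RaycastHit hit, MaxRange))
        {
            // Assumed convention: tractable models (e.g. buildings) carry a "Fixable" tag,
            // while virtual characters and vehicles do not.
            if (hit.collider.CompareTag("Fixable"))
            {
                targetPosition = hit.point; // collision point = target position
                return true;
            }
        }
        return false; // no surface hit, or a non-fixable surface: the claw will be pulled back
    }
}
```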
Step 3022: in response to the ejection component being ejected to the target position and fixed, control the traction component to pull the master virtual character to move to the target position.
Illustratively, after the ejection component is fixed at the target position, the traction component is automatically retracted to pull the master virtual character to move to the target position. For example, a recovery device for the traction component may be provided in the launching device, and after the ejection component is fixed at the target position, the recovery device automatically retracts the traction component.
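The traction process itself can be sketched as a per-frame movement toward the fixed target position, as in the following Unity C# fragment; the speed value and names are assumptions of this illustration:
```csharp
using UnityEngine;

// Sketch: once the ejection component is fixed, move the master virtual character
// toward the target position every frame until it arrives; no user movement input
// is needed during the traction process.
public class TractionPullSketch : MonoBehaviour
{
    public Vector3 targetPosition; // where the ejection component is fixed
    public float pullSpeed = 15f;  // assumed retraction speed of the traction component
    public bool pulling;

    void Update()
    {
        if (!pulling) return;
        transform.position = Vector3.MoveTowards(transform.position, targetPosition,
                                                 pullSpeed * Time.deltaTime);
        if (Vector3.Distance(transform.position, targetPosition) < 0.1f)
            pulling = false; // arrival ends the traction process
    }
}
```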
For example, during any use of the first virtual item, the user may control the master virtual character to use the second virtual item while using the first virtual item by triggering the second use control.
For example, when the second virtual item is a firearm item, as shown in fig. 7, when a trigger operation on the second use control 404 is received while the first virtual item 401 is ejecting the ejection component (after the ejection component is ejected by the launching device, before the ejection component is fixed at the target position), the ejection component flies out and the master virtual character is controlled to shoot by using the second virtual item 402.
For example, when the second virtual prop is a firearm prop, the user may further control the master virtual character to shoot while the master virtual character is dragged to move to the target position.
For example, as shown in fig. 8, after the ejection component 406 is fixed at the target position, while the traction component 407 pulls the master virtual character toward the target position, the client receives the user's trigger operation on the second use control and controls the master virtual character to shoot with the second virtual prop 402 during the movement.
In summary, in the method provided in this embodiment, the first virtual prop is a traction prop whose use involves a launching process and a traction process. When the user controls the master virtual character to use the traction prop, the second virtual prop can be used at the same time during either process, so that the master virtual character uses two props simultaneously. While the traction prop pulls the master virtual character through the air, the character can shoot with the firearm prop, which improves the controllability of the master virtual character during aerial traction. By combining the traction prop with the firearm prop, the user obtains a new shooting environment and a new shooting mode, which increases the intensity of a game match, shortens the match time, and reduces the server load.
According to the method provided by this embodiment, the master virtual character can shoot with the firearm prop while flying under the pull of the traction prop, which more realistically simulates a soldier shooting while moving through the air, enriches the control modes a shooting program offers for the master virtual character, improves the controllability of the character's movement, and improves the human-computer interaction efficiency of controlling the master virtual character.
The following provides an exemplary embodiment in which the master virtual character automatically avoids obstacles while being pulled by the traction component.
Fig. 9 is a flowchart of a method for using a virtual item according to an exemplary embodiment of the present application. The execution subject of the method is exemplified by a client running on the terminal shown in fig. 1, the client is a client supporting a virtual environment, and step 304 is further included after step 303 based on the exemplary embodiment shown in fig. 3.
Step 304, in response to an obstacle existing on the moving path of the master virtual character to the target position, controlling the master virtual character to avoid the obstacle and continue moving to the target position.
For example, while the master virtual character is being pulled by the traction component to the target position, its movement is controlled solely by the recovery of the traction component, so the user cannot control the movement. Therefore, when an obstacle lies on the moving path, the character model cannot pass through it, which may produce a bug (defect) in which the character is repeatedly pulled into the obstacle, a bug in which the character model clips through the obstacle model, or an interruption of the traction movement that causes the character to fall. To solve these problems, this embodiment provides a method that detects obstacles on the moving path and, once an obstacle is detected, automatically changes the moving direction or position of the master virtual character so that it avoids the obstacle.
Illustratively, the user cannot change the moving direction of the master virtual character during the process that the master virtual character is dragged to move, i.e., the method of step 304 is implemented by the client automatically controlling the master virtual character.
For example, an obstacle is an object that would collide with the three-dimensional virtual model of the master virtual character on its moving path in the current moving direction. By way of example, an obstacle is an object that cannot be moved in the virtual environment, such as a building, terrain, or a tree. Other virtual characters, virtual vehicles, virtual props, and the like may or may not be treated as obstacles; that is, the master virtual character may or may not avoid, for example, a virtual vehicle.
For example, as shown in fig. 10 (1), the ejection component is fixed at the target position 408, and the traction component 407 pulls the master virtual character 410 toward the target position 408. If there is an obstacle 409 on the left side of the traction component and the master virtual character 410 keeps moving in the current direction, the character's shoulder will collide with the obstacle as shown in fig. 10 (2). Therefore, as shown in fig. 10 (3), the master virtual character is controlled to shift to the right to avoid the obstacle 409, continues moving toward the target position 408, and finally reaches it.
For example, a method for controlling a master virtual character to automatically avoid an obstacle is provided, as shown in fig. 11, step 304 further includes steps 3041 to 3044.
Step 3041, with the position of the master virtual character as a starting point, periodically emitting a collision box along a target direction pointing to the target position, the collision box being used to detect a collision with the three-dimensional virtual model in the virtual environment.
For example, the collision box is periodically ejected in the target direction from an arbitrary point on the three-dimensional virtual model of the master virtual character, for example, from the center point of the three-dimensional virtual model of the master virtual character.
Illustratively, the collision box is used for collision detection and is not visible on the virtual environment picture. Exemplarily, the shape and size of the collision box are determined according to the shape and size of the three-dimensional virtual model of the master virtual character, so that the collision box can stand in for that model. The collision box is then periodically cast ahead of the master virtual character in its current moving direction to detect whether the character would collide with other three-dimensional virtual models in the virtual environment if it kept moving in that direction. If the collision box detects a collision, an obstacle exists in front of the master virtual character; if it does not, there is no obstacle in front.
Illustratively, the collision box is an invisible box, i.e., it is not visible to the user on the virtual environment picture. Illustratively, the size and shape of the collision box are set according to the size and shape of the three-dimensional virtual model of the master virtual character. For example, the collision box may be the same size and shape as that model, slightly smaller than it, or slightly larger so that it wraps the model. Illustratively, to simplify the calculation, the collision box is usually given a regular shape, for example, a cube, a cuboid, a sphere, a pyramid, or a cylinder.
For example, the collision box can detect collisions in the virtual environment. When a three-dimensional virtual model touches a surface of the collision box, the collision box generates collision information, which includes at least one of: information of the three-dimensional virtual model involved in the collision, the collision point, and the collision time. The information of the virtual model includes the type, size, material, position, and shape of the three-dimensional virtual model. From the collision information generated after the collision, the client can obtain the collision point and related information about the obstacle.
Illustratively, the ejection distance and the ejection period of the collision box may be configured arbitrarily; for example, the ejection distance may be 1 m and the ejection period 0.1 s. Illustratively, the speed at which the collision box is cast is much higher than the speed at which the master virtual character moves toward the target position, so that the collision box detects an obstacle before the character hits it.
For example, as shown in fig. 12, a spherical collision box 411 is determined according to the size of the three-dimensional virtual model of the master virtual character 410, and the collision box 411 is then periodically cast in the target direction in which the master virtual character is currently moving toward the target position, so that the collision box 411 detects obstacles ahead.
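As a sketch of the periodic collision-box detection in step 3041, the snippet below assumes a hypothetical swept-box query (`box_cast`) and illustrative field names on the collision information; it is not the actual engine interface.

```python
from dataclasses import dataclass

CAST_DISTANCE = 1.0  # e.g. the collision box is cast 1 m ahead of the character
CAST_PERIOD = 0.1    # e.g. a new collision box is cast every 0.1 s


@dataclass
class CollisionInfo:
    point: tuple      # collision point on the collision box
    model_type: str   # e.g. "building", "tree", "balloon"
    movable: bool     # whether the collided object can move (movable objects are not avoided)


def box_cast(origin, direction, distance, box_size):
    """Placeholder for the engine's swept-box query; returns CollisionInfo or None."""
    raise NotImplementedError


def normalize(v):
    n = sum(c * c for c in v) ** 0.5
    return tuple(c / n for c in v)


def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))


def detect_obstacle(character, target_position):
    # The box roughly matches the character model, so a hit on the box means the
    # character itself would collide if it kept moving in the target direction.
    direction = normalize(sub(target_position, character.position))
    return box_cast(character.position, direction, CAST_DISTANCE, character.model_size)
```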
Step 3042, in response to the collision of the collision box with the three-dimensional virtual model of the obstacle, acquiring a collision point at which the collision occurs.
Illustratively, when the collision box collides with the three-dimensional virtual model of the obstacle, the collision box generates collision information of the current collision, and the client acquires a collision point according to the collision information.
For example, as shown in fig. 12, after the collision box 411 is ejected, it collides with the three-dimensional virtual model of the obstacle, and the collision point 412 is located on the right side of the collision box.
Step 3043, determining an avoidance direction of the master virtual character according to the collision point.

For example, the client may determine the position of the collision point relative to the target direction according to the collision point and the target direction (the straight line on which the current moving path/traction component of the master virtual character lies). For example, the target direction here refers to the vector from the master virtual character to the target position, or the straight line on which that vector lies.
For example, when the collision point is located on the left side of the target direction, the avoiding direction of the master control virtual character is the right side; when the collision point is positioned on the right side of the target direction, the avoiding direction of the main control virtual role is the left side; when the collision point is positioned above the target direction, the avoiding direction of the main control virtual role is below; and when the collision point is positioned below the target direction, the avoiding direction of the main control virtual character is the upper direction.
For example, as shown in fig. 13 (1), the ejection component is fixed at the target position 408, and the traction component 407 pulls the master virtual character 410 toward the target position 408. If there is an obstacle 409 below the traction component and the master virtual character 410 keeps moving in the current direction, the character's leg will collide with the obstacle 409 as shown in fig. 13 (2). Therefore, as shown in fig. 13 (3), the master virtual character 410 is controlled to shift upward to avoid the obstacle 409, continues moving toward the target position 408, and finally reaches it.
For another example, the client may construct, from the collision point, a vector perpendicular to the target direction; that vector is the avoidance direction. For example, as shown in fig. 12, a vector 414 perpendicular to the target direction 413 is constructed from the collision point 412, and the vector 414 is the avoidance direction.
Step 3044, controlling, according to the avoidance direction, the master virtual character to avoid the obstacle and continue moving to the target position.

Illustratively, the client controls the master virtual character to move toward the target position while also moving in the avoidance direction. For example, the distance the master virtual character moves in the avoidance direction may be arbitrary: the client may determine the required avoidance distance according to the position of the collision point on the collision box, or may move the master virtual character a fixed distance in the avoidance direction.
Taking as an example that the client moves the master virtual character a fixed distance in the avoidance direction each time: when the collision box detects a collision, the client moves the master virtual character 0.1 m in the avoidance direction while it keeps moving toward the target position; the client then casts the collision box again toward the target position from the character's new position to check whether a collision would still occur in the current direction; if it would, the character is moved another 0.1 m in the new avoidance direction, and so on, until the collision box no longer detects a collision.
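The avoidance step can be sketched as follows, reusing `detect_obstacle` and the vector helpers from the previous sketch; the fixed dodge distance and all helper names are assumptions made for illustration.

```python
DODGE_STEP = 0.1  # e.g. the character is shifted 0.1 m sideways per detected collision


def dot(a, b):
    return sum(x * y for x, y in zip(a, b))


def scale(v, s):
    return tuple(x * s for x in v)


def add(a, b):
    return tuple(x + y for x, y in zip(a, b))


def avoidance_direction(collision_point, character_position, target_position):
    # Take the component of the character-to-collision-point vector that is
    # perpendicular to the traction direction, and dodge the opposite way.
    target_dir = normalize(sub(target_position, character_position))
    to_hit = sub(collision_point, character_position)
    lateral = sub(to_hit, scale(target_dir, dot(to_hit, target_dir)))
    if all(abs(x) < 1e-6 for x in lateral):
        lateral = (0.0, 1.0, 0.0)  # collision dead ahead: dodge upward by convention
    return normalize(scale(lateral, -1.0))


def pull_with_avoidance(character, target_position):
    while not character.reached(target_position):
        hit = detect_obstacle(character, target_position)
        if hit is not None and not hit.movable:
            dodge = avoidance_direction(hit.point, character.position, target_position)
            character.position = add(character.position, scale(dodge, DODGE_STEP))
        character.step_towards(target_position)  # regular traction movement for this frame
```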
For example, after the collision box detects a collision, the client may further obtain information of an obstacle, and when the obstacle is a movable object (e.g., a balloon, a leaf, or the like), the client does not control the main control virtual character to avoid the obstacle, but keeps the original moving direction to continue moving to the target position; when the obstacle is an immovable object (such as a building, a tree, and the like), the client controls the master virtual character to move in the avoiding direction to avoid the obstacle.
Illustratively, the obstacle-avoidance method provided by this embodiment can also be used to determine whether the master virtual character can pass through a narrow passage.
For example, as shown in fig. 14 (1), there is an obstacle 409 on each side of the target position 408. While being pulled by the traction component toward the target position 408, the master virtual character 410 continuously casts the collision box forward to detect whether there is an obstacle on the path ahead. As shown in fig. 14 (2), if the master virtual character 410 kept moving in the current direction, its shoulder would collide with the obstacle 409; after the collision box detects the collision in advance, the master virtual character 410 is moved in the avoidance direction determined from the collision point before it hits the obstacle, and as shown in fig. 14 (3), the master virtual character 410 shifts to the right so that it avoids the obstacle on the left. For example, if the master virtual character 410 shifts too far to the right, the collision box may detect a collision with the obstacle on the right, and the character is then moved to the left to avoid it; the position of the master virtual character is continuously corrected using the collision points detected by the collision box until it reaches the target position. For example, if the collision box generates collision points with the obstacles on both sides, the master virtual character cannot pass through this position; in that case, when the three-dimensional virtual model of the master virtual character collides with an obstacle, the traction component is controlled to stop pulling the master virtual character and the ejection component is retracted.
As shown in fig. 15 (1), an obstacle 409 exists on each of the upper and lower sides of the target position 408. While being pulled by the traction component toward the target position 408, the master virtual character 410 continuously casts the collision box forward to detect whether there is an obstacle on the path ahead. As shown in fig. 15 (2), if the master virtual character 410 kept moving in the current direction, its foot would collide with the obstacle 409 below; after the collision box detects the collision in advance, the master virtual character 410 is moved in the avoidance direction determined from the collision point before it hits the obstacle 409, and as shown in fig. 15 (3), the master virtual character 410 shifts upward so that it avoids the obstacle below. For example, if the master virtual character 410 shifts too far upward, the collision box may detect a collision with the obstacle above, and the character is then moved downward to avoid it; the position of the master virtual character is continuously corrected using the collision points detected by the collision box until it reaches the target position.
In summary, in the method provided in this embodiment, while the traction prop pulls the master virtual character, the character is controlled to actively avoid obstacles during the flight. This prevents the traction process from being interrupted by hitting an obstacle, increases the success rate of reaching the target position with the traction prop, simplifies the operations required of the user, and improves human-computer interaction efficiency.
According to the method provided by this embodiment, the collision box is periodically cast ahead of the master virtual character in the moving direction to detect obstacles on the moving path, so that a collision is predicted in advance. The position of the master virtual character is adjusted according to the collision point on the collision box, allowing the character to avoid the obstacle and keep moving toward the target position; collisions between the master virtual character and obstacles are avoided, and the success rate with which the traction prop pulls the character to the target position is improved.
The following provides an exemplary embodiment for limiting the view-angle rotation of the master virtual character during the traction process.
Fig. 16 is a flowchart of a method for using a virtual prop according to an exemplary embodiment of the present application. The execution subject of the method is exemplified by a client running on the terminal shown in fig. 1, the client is a client supporting a virtual environment, and based on the exemplary embodiment shown in fig. 3, step 303 further includes steps 305 to 307.
For example, when a person performs a traction movement with a grapple in reality, the body can hardly rotate during the movement and the rotation angle of the head is limited. Therefore, to simulate the person's field of view more realistically, the view-angle rotation of the master virtual character during the traction movement is limited. While the master virtual character is being pulled by the traction component toward the target position, the client continuously monitors whether the current view direction of the master virtual character has turned too far.
Illustratively, as shown in fig. 5, the user may adjust the viewing direction of the master virtual character by sliding within the area 415, thereby better observing the surrounding situation.
Step 305, acquiring a first view direction of the master virtual character during the movement to the target position.

Illustratively, the client acquires the first view direction of the master virtual character during the movement. The view direction refers to the direction in which the view angle of the master virtual character points. Exemplarily, the view direction refers to the component of the view angle of the master virtual character in the horizontal direction; that is, only horizontal rotation of the view angle is limited, while vertical rotation is not (the master virtual character can still look up and down). Illustratively, the view direction is the shooting direction of the camera model bound to the master virtual character (the component of that shooting direction in the horizontal direction).
The first visual angle direction is the visual angle direction of the main control virtual character at any time in the traction movement process.
Step 306, in response to a first steering angle between the first view direction and the moving direction being greater than a first angle threshold, timing a first duration for which the first steering angle is greater than the first angle threshold, where the moving direction is the direction from the position of the master virtual character to the target position.
The client calculates the first steering angle by which the first view direction deviates from the moving direction. For example, the direction in which the master virtual character is pulled by the traction component (the moving direction) is taken as the reference direction, and the client calculates whether the current view direction deviates too far from it. If the first steering angle is too large, the client starts timing how long the steering angle of the master virtual character remains too large. Illustratively, the client acquires the view direction of the master virtual character in real time and checks in real time whether the first steering angle between the current view direction and the moving direction exceeds the first angle threshold; if it remains above the threshold, the first duration keeps accumulating. If the user turns the view direction of the master virtual character back within the first angle threshold, the timing of the first duration stops.
For example, as shown in fig. 17, the master virtual character 410 is pulled by the traction component toward the target position 408. If the user does not change the view direction of the master virtual character, the view direction (first view direction) 418 is the same as the moving direction 417 toward the target position. If the user adjusts the view direction, a first steering angle 416 forms between the first view direction 418 and the moving direction 417; the client determines whether the first steering angle 416 is too large (greater than the first angle threshold), and if so, starts timing the first duration, which measures how long the first steering angle exceeds the first angle threshold.
For example, the first angle threshold may be arbitrarily configured as desired. For example, the first angle threshold may be 30 ° or 50 °.
For example, the moving direction need not be the actual instantaneous moving direction of the master virtual character; it may be the direction from the position of the master virtual character to the target position, that is, the direction of the straight line on which the traction component lies.
Step 307, in response to the first duration being greater than a first time threshold, controlling the traction component to stop pulling the master virtual character.
When the first steering angle of the master virtual character exceeds the first angle threshold for too long, the traction movement of the master virtual character is stopped: the ejection component automatically falls off, and the traction component is controlled to retract the ejection component.
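Steps 305 to 307 amount to a per-frame check that can be sketched as below; the threshold values and the `TurnLimiter` class are illustrative assumptions, and only the horizontal components of the directions are compared, as described above.

```python
import math

FIRST_ANGLE_THRESHOLD = 50.0  # degrees, e.g. 30 or 50
FIRST_TIME_THRESHOLD = 1.0    # seconds the steering angle may stay above the threshold


def horizontal_angle(a, b):
    # Compare only the horizontal (x, z) components: vertical look is not limited.
    ax, az, bx, bz = a[0], a[2], b[0], b[2]
    cos_angle = (ax * bx + az * bz) / (math.hypot(ax, az) * math.hypot(bx, bz))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))


class TurnLimiter:
    def __init__(self):
        self.over_limit_time = 0.0

    def update(self, view_direction, move_direction, dt):
        """Returns True when the traction should be stopped."""
        angle = horizontal_angle(view_direction, move_direction)
        if angle > FIRST_ANGLE_THRESHOLD:
            self.over_limit_time += dt   # keep timing the first duration
        else:
            self.over_limit_time = 0.0   # the user turned back: stop the timer
        return self.over_limit_time > FIRST_TIME_THRESHOLD
```

The stricter check at the moment the character's feet leave the ground (steps 308 to 310 below) can reuse the same structure with a smaller angle threshold or a smaller time threshold.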
For example, the user can also use the mode to control the main virtual character to stop the traction movement during the traction movement, so as to better control the main virtual character to move to the desired position. For example, the user may also actively terminate the traction movement by again triggering the first usage control of the first virtual prop during the traction movement.
Illustratively, the above determination concerns the view-angle turning of the master virtual character during the traction movement, i.e., steps 305 to 307 are performed after the character's feet leave the ground. In addition, whether to terminate the traction movement can also be determined according to the view-angle turning at the moment the master virtual character's feet leave the ground.
Fig. 18 is a flowchart of a method for using a virtual item according to an exemplary embodiment of the present application. Based on the exemplary embodiment shown in fig. 3, step 303 is followed by steps 308 to 310. Illustratively, steps 308 through 310 are performed before steps 305 through 307.
Step 308, acquiring a second view direction of the master virtual character, where the second view direction is the view direction of the master virtual character at the moment it starts to be pulled by the traction component.
Illustratively, the second view direction is the view direction of the master virtual character at the moment its feet leave the ground. Illustratively, if the master virtual character has already deflected too far from its initial view direction at that moment, the traction process can be terminated sooner.
For example, the determination of the view-angle steering angle at the moment of leaving the ground is stricter than the determination during the traction process, because the state of the master virtual character at that moment greatly affects the subsequent traction flight. For example, if the master virtual character has a tangential acceleration before leaving the ground, the path along which the traction component pulls it is affected, making it easier for the character to break away from the traction. Thus, the second angle threshold may be smaller than the first angle threshold, or the second time threshold may be smaller than the first time threshold.
Step 309, in response to a second steering angle between the second view direction and the moving direction being greater than a second angle threshold, timing a second duration for which the second steering angle is greater than the second angle threshold.

Illustratively, if the view-angle turning of the master virtual character at the moment its feet leave the ground is too large, the client starts timing the second duration. If the user turns the view direction back within the second angle threshold, the traction movement is not affected; if the master virtual character keeps its view direction beyond the second angle threshold for too long, the client terminates the traction movement of the master virtual character.
Step 310, in response to the second duration being greater than a second time threshold, controlling the traction component to stop pulling the master virtual character.
In summary, in the method provided in this embodiment, when the master virtual character turns its view too far during the traction process and holds that angle for a period of time, the character is controlled to stop the traction process. In real life, a person using a grapple or similar prop to perform a traction movement cannot rotate through a large angle; therefore, by setting a maximum view-rotation angle and a time threshold for holding it, the view rotation of the master virtual character while using the traction prop is limited, making the use of the traction prop closer to reality and improving the realism of the simulation.
According to the method provided by this embodiment, the view-angle turning of the master virtual character at the moment its feet leave the ground is limited more strictly, which more realistically simulates a person performing a traction movement with a grapple, increases the difficulty of starting a traction movement while turning the view through a large angle, and limits the range of view at the moment of leaving the ground, so that the user cannot freely observe the surroundings during the traction movement.
The following provides an exemplary embodiment in which the master virtual character automatically climbs over terrain after the traction movement ends.
Fig. 19 is a flowchart of a method for using a virtual item according to an exemplary embodiment of the present application. The execution subject of the method is exemplified by a client running on the terminal shown in fig. 1, the client is a client supporting a virtual environment, and step 311 is further included after step 303 based on the exemplary embodiment shown in fig. 3.
Step 311, in response to the master virtual character reaching the target position and traversable terrain existing above the target position, controlling the master virtual character to climb over the traversable terrain.
For example, after being pulled to the target position by the traction prop, the master virtual character is usually suspended in the air; after the ejection component is retracted, the character would fall freely, so the user could not effectively reach a high place with the traction prop. Therefore, this embodiment provides a method that, after the master virtual character reaches the target position with the traction prop, automatically detects whether there is terrain above the target position that the character can climb over, such as planar terrain including a platform, a roof, or a corridor, and, if so, automatically controls the master virtual character to climb over it, preventing the character from falling freely after reaching the target position.
By way of example, traversable terrain refers to terrain whose slope is less than a threshold, i.e., terrain that is relatively gentle. If such gentle traversable terrain exists above the target position, the master virtual character is controlled to climb over it so that it stands on the traversable terrain.
For example, as shown in fig. 20, if the target position is on the outer wall of a building, after the master virtual character 410 reaches the target position, the client detects whether there is traversable terrain above it; for example, if there is a traversable platform 419 above the target position, the client controls the master virtual character 410 to climb over onto the platform 419 automatically, so that the character reaches the platform 419 and does not fall freely for lack of a foothold after reaching the target position.
For example, the method for automatically turning over provided by this embodiment does not require a user to perform a specified operation, for example, does not require the user to throw the master virtual character onto a plane by using the moving inertia of the master virtual character, and the method provided by this embodiment automatically detects the terrain near the target position by the client, and automatically controls the master virtual character to perform a turning over action when there is a traversable terrain.
For example, this embodiment further provides a method for detecting a traversable terrain, and as shown in fig. 21, step 311 further includes steps 3111 to 3114.
Step 3111, in response to the master virtual character reaching the target position, emitting a second detection ray below the master virtual character.
For example, after the master virtual character reaches the target location, the client first detects whether the lower part of the master virtual character is suspended. The client transmits a second detection ray to the lower side of the main control virtual character, wherein the second detection ray is used for detecting collision.
Illustratively, the emission direction of the second detection ray is downward perpendicular to the horizontal plane.
Illustratively, the target surface is a surface of any three-dimensional virtual model located below the master virtual character, for example, the target surface may be a surface of the ground, a surface of a virtual vehicle, a surface of a building, and the like.
Illustratively, after the second detection ray collides with the target surface, the client acquires the length of the second detection ray, and the length of the second detection ray is the linear distance from the main control virtual character to the target surface.
Illustratively, the client judges whether the current position of the main control virtual role is too high according to the ray length, and further determines whether to control the main control virtual role to automatically cross.
Step 3113, in response to the length of the ray being greater than the first distance threshold, obtaining model information of a terrain model to which the target position belongs.
For example, when the ray length is too large, the client automatically obtains the model information of the terrain model to which the target position belongs. The model information includes the shape, size, coordinates of each plane on the model, height, and so on of the current terrain model. From the model information, the client can determine whether there is terrain above the target position suitable for the master virtual character to climb over, i.e., whether there is a plane whose inclination is smaller than the threshold, and can calculate the height from the target position to the traversable terrain.
Step 3114, in response to the model information including traversable terrain located above the target position and the vertical distance from the target position to the traversable terrain being less than a second distance threshold, controlling the master virtual character to climb over the traversable terrain.

When traversable terrain exists above the target position and the vertical distance from the target position to it is less than the distance threshold, the client controls the master virtual character to climb over the traversable terrain so that the character stands on the gentle surface.
For example, the second distance threshold may be set according to the height of the three-dimensional virtual model of the master virtual character, making it easy to determine whether the traversable terrain can be reached given the character's height.
For example, as shown in fig. 22, after the master virtual character 410 reaches the target position, the client first casts the second detection ray 420 downward from the character's position. If the second detection ray 420 detects the target plane (the ground), the ray length of the second detection ray is obtained; this length is the height from the master virtual character to the ground. If the ray length is too large, the model information of the terrain model to which the target position belongs is obtained, and it is determined whether there is traversable terrain above the target position. Exemplarily, there is traversable terrain (the platform 421) above the target position; when the vertical distance from the target position to the platform 421 is smaller than the height of the master virtual character, the master virtual character 410 is controlled to climb over onto the platform 421 and reach it.
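The climb-over decision of steps 3111 to 3114 can be sketched as below. The engine queries are passed in as functions because their real names are unknown, and the slope and distance thresholds are likewise illustrative.

```python
SLOPE_THRESHOLD = 30.0          # degrees: a surface flatter than this counts as traversable
FIRST_DISTANCE_THRESHOLD = 3.0  # metres: the character is considered suspended beyond this


def try_climb_over(character, target_position,
                   raycast_down, find_flat_surface_above):
    """raycast_down(pos) -> distance to the surface below, or None.
    find_flat_surface_above(pos, max_slope) -> ledge with a .height attribute, or None."""
    ground_distance = raycast_down(target_position)  # second detection ray, straight down
    if ground_distance is None or ground_distance <= FIRST_DISTANCE_THRESHOLD:
        return False                                  # close to the ground: no need to climb
    ledge = find_flat_surface_above(target_position, SLOPE_THRESHOLD)
    if ledge is None:
        return False                                  # no traversable terrain above
    if ledge.height - target_position[1] >= character.model_height:
        return False                                  # ledge is out of reach
    character.play_climb_animation(ledge)             # automatically climb over the ledge
    return True
```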
In summary, in the method provided in this embodiment, after the master virtual character reaches the target position, the client automatically detects whether traversable terrain, such as a platform, a roof, or a window sill, exists above the target position, and when it does, automatically controls the master virtual character to climb over it, preventing the character from falling because there is nowhere to stand at the target position.
By way of example, an exemplary embodiment is presented that enables simultaneous use of a primary weapon and a secondary weapon by the left and right hands in a shooting game.
Fig. 23 is a flowchart of a method for using a virtual item according to an exemplary embodiment of the present application. The execution subject of the method is exemplified by a client running on the terminal shown in fig. 1, the client being a client supporting a virtual environment. The method comprises the following steps.
In step 501, secondary weapons are derived from the primary weapons.
Illustratively, the secondary weapons are derived from the primary weapons based on the original weapon frame of the shooting game.
Illustratively, the client loads the models of the various secondary weapons, initializes the model data, configures the parameters of the secondary weapons, loads the secondary-weapon actions, and so on. For example, when the secondary weapon is a traction prop, the client loads the model of the traction prop, configures the maximum range of the traction prop, and loads the actions played when the traction prop is used.
Step 503, when the client actively releases the secondary weapon, the secondary weapon is mounted in the slot corresponding to the left hand and launched.
Illustratively, when the secondary weapon is a traction prop, the traction prop is mounted on the left hand of the master virtual character. When the user triggers the use control of the secondary weapon, the ejection component is controlled to be ejected from the launching device. Illustratively, the action of the master virtual character launching the ejection component and the animation of the ejection component being ejected from the launching device are loaded and played.
Step 504, during this process, the primary weapon may also request a shot from the server.
For example, while the secondary weapon is being fired, the user may continue to trigger the use control of the primary weapon, which sends a use request for the primary weapon to the server. After receiving the use request, the server computes the use logic of the primary weapon and returns the result to the client, which displays it.
Step 505, the effect of using weapons simultaneously with the left and right hands is achieved.
Fig. 24 is a flowchart of a method for using a virtual prop according to an exemplary embodiment of the present application. The execution subject of the method is exemplified by a client running on the terminal shown in fig. 1, the client being a client supporting a virtual environment. The method comprises the following steps.
In step 601, the client loads the left-hand weapon and its actions. Illustratively, the client stores the left-hand actions in the action control class of the primary weapon, which is primarily used to control the playback of right-hand weapon actions.
Step 602, when the left-hand weapon is released, the left-hand action is driven.
Illustratively, when the master virtual character uses the left-hand weapon, the corresponding left-hand weapon action in the action control class of the primary weapon is driven, and the release action of the left-hand weapon is played.
Step 603, when the right-hand weapon is released, the right-hand action is driven.
Illustratively, when the master virtual character uses the right-hand weapon, the corresponding right-hand weapon action in the action control class of the primary weapon is driven, and the release action of the right-hand weapon is played.
At step 604, the program layer controls the action control class of the primary weapon.
Illustratively, by controlling the action control class of the primary weapon, the client can separately drive the actions of the left-hand weapon and the right-hand weapon, so that both actions can be played.
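A minimal sketch of steps 601 to 604: the primary weapon's action control class also stores and drives the left-hand (secondary) weapon actions. The class and method names below are assumptions made for illustration and do not reflect the game's actual code.

```python
class PrimaryWeaponActionControl:
    """Single place through which the program layer plays both hands' weapon actions."""

    def __init__(self):
        self.right_hand_actions = {}  # original primary-weapon (right-hand) actions
        self.left_hand_actions = {}   # secondary-weapon (left-hand) actions loaded in step 601

    def load_left_hand_weapon(self, weapon_id, actions):
        self.left_hand_actions[weapon_id] = actions

    def release_left_hand(self, weapon_id):
        # Step 602: releasing the left-hand weapon drives the corresponding left-hand action.
        self._play(self.left_hand_actions[weapon_id]["release"])

    def release_right_hand(self, weapon_id):
        # Step 603: releasing the right-hand weapon drives the corresponding right-hand action.
        self._play(self.right_hand_actions[weapon_id]["release"])

    def _play(self, animation):
        # Step 604: the program layer controls playback only through this class, so the
        # left-hand and right-hand actions can be played independently and simultaneously.
        print(f"playing animation: {animation}")
```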
In summary, in the method provided in this embodiment, a secondary weapon is derived from the primary weapon, a secondary-weapon procedure is added to the original weapon framework, and the models, configurations, actions, and so on of the secondary weapon are configured there, thereby enabling the use of the secondary weapon.
The following are apparatus embodiments of the present application. For details not described in the apparatus embodiments, refer to the method embodiments above.
Fig. 25 is a block diagram of a device for using a virtual item provided in an exemplary embodiment of the present application. The device comprises:
the display module 701 is configured to display a virtual environment picture, a first use control and a second use control, where the virtual environment picture includes a master control virtual character, a first virtual item is installed on a first limb of the master control virtual character, and a second virtual item is installed on a second limb of the master control virtual character;
an interaction module 702, configured to receive a trigger operation on a first usage control;
a control module 703, configured to control the master virtual character to use the first virtual item in response to receiving a trigger operation on the first use control;
the interaction module 702 is further configured to receive a trigger operation on a second usage control;
the control module 703 is further configured to control the master virtual character to use the second virtual item in response to receiving a trigger operation on the second use control in the use process of the first virtual item.
In an optional embodiment, an aiming control is further displayed on the virtual environment picture; the first virtual prop comprises a launching device, an ejection component and a traction component, wherein the launching device is fixed on the first limb of the master virtual character, the traction component is connected with the launching device, and the ejection component is connected with the traction component;
the control module 703 is further configured to, in response to receiving a trigger operation on the first use control, control the master virtual character to use the launching device to launch the ejection component in the aiming direction of the aiming control;
the control module 703 is further configured to control the traction component to pull the master virtual character to move to the target position in response to the ejection component reaching the target position and being fixed.
In an optional embodiment, the apparatus further comprises:
a ray detection module 704, configured to cast a first detection ray along the aiming direction of the aiming control with the position of the launching device as a starting point, in response to receiving a trigger operation on the first use control;
the control module 703 is further configured to control the ejection component to be ejected to the target position along the aiming direction and fixed, in response to the first detection ray intersecting a three-dimensional virtual model in the virtual environment at the target position within the maximum range.
In an optional embodiment, the control module 703 is further configured to control the master virtual character to use the second virtual prop in response to receiving a trigger operation on the second use control while the ejection component is being ejected;
or,
the control module 703 is further configured to control the master control avatar to use the second virtual item in response to receiving a trigger operation on the second use control in a process that the traction component pulls the master control avatar to move.
In an optional embodiment, the control module 703 is further configured to, in response to that an obstacle exists on a moving path of the master virtual character to the target location, control the master virtual character to avoid the obstacle and continue to move to the target location.
In an optional embodiment, the apparatus further comprises:
a collision detection module 705, configured to periodically eject a collision box from the position of the master virtual character as a starting point along a target direction pointing to the target position, where the collision box is configured to detect a collision with a three-dimensional virtual model in a virtual environment;
an obtaining module 706, configured to obtain a collision point of a collision in response to the collision of the collision box with the three-dimensional virtual model of the obstacle;
a determining module 707, configured to determine, according to the collision point, an avoidance direction of the master virtual character;
the control module 703 is further configured to control the master virtual character to avoid the obstacle and continue to move to the target position according to the avoiding direction.
In an optional embodiment, the apparatus further comprises:
an obtaining module 706, configured to obtain a first view direction in a process that the master virtual character moves to the target position;
a timing module 708, configured to time a first duration that a first steering included angle between the first perspective direction and a moving direction is greater than a first angle threshold, where the moving direction is a direction from the position of the master virtual character to the target position;
the control module 703 is further configured to control the traction component to stop traction of the master virtual role in response to the first duration being greater than a first time threshold.
In an optional embodiment, the apparatus further comprises:
an obtaining module 706, configured to obtain a second view direction of the master virtual character, where the second view direction is the view direction of the master virtual character at the moment the master virtual character starts to be pulled by the traction component;
a timing module 708, configured to time a second duration that a second steering included angle between the second perspective direction and a moving direction is greater than a second angle threshold in response to the second steering included angle being greater than the second angle threshold, where the moving direction is a direction from the position of the master virtual character to the target position;
the control module 703 is further configured to control the traction component to stop traction of the master virtual role in response to that the second duration is greater than a second time threshold.
In an optional embodiment, the control module 703 is further configured to control the master virtual character to climb over the traversable terrain in response to the master virtual character reaching the target location and existence of traversable terrain above the target location.
In an optional embodiment, the apparatus further comprises:
a ray detection module 704, configured to send a second detection ray to a position below the master virtual character in response to the master virtual character reaching the target position;
an obtaining module 706, configured to, in response to the second detection ray detecting the target surface, obtain a ray length of the second detection ray;
the obtaining module 706 is further configured to obtain model information of a terrain model to which the target position belongs in response to the ray length being greater than a first distance threshold;
the control module 703 is further configured to control the master virtual character to cross the traversable terrain in response to the model information including the traversable terrain located above the target location and a vertical distance between the target location and the traversable terrain being less than a second distance threshold.
It should be noted that: the use device of the virtual item provided in the above embodiment is exemplified by only the division of the above functional modules, and in practical applications, the above function allocation can be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the above described functions. In addition, the use device of the virtual prop and the use method of the virtual prop provided by the above embodiments belong to the same concept, and the specific implementation process is described in detail in the method embodiments and will not be described herein again.
Fig. 26 is a block diagram illustrating a structure of a terminal 2000 according to an exemplary embodiment of the present application. The terminal 2000 may be: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. Terminal 2000 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and the like.
In general, terminal 2000 includes: a processor 2001, and a memory 2002.
The memory 2002 may include one or more computer-readable storage media, which may be non-transitory. The memory 2002 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in memory 2002 is used to store at least one instruction for execution by processor 2001 to implement a method of using virtual props provided by method embodiments herein.
In some embodiments, terminal 2000 may further optionally include: a peripheral interface 2003 and at least one peripheral. The processor 2001, memory 2002 and peripheral interface 2003 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 2003 through a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 2004, a display screen 2005, a camera assembly 2006, an audio circuit 2007 and a power supply 2008.
The peripheral interface 2003 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 2001 and the memory 2002. In some embodiments, the processor 2001, memory 2002 and peripheral interface 2003 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 2001, the memory 2002, and the peripheral interface 2003 may be implemented on separate chips or circuit boards, which are not limited in this embodiment.
The Radio Frequency circuit 2004 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 2004 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 2004 converts an electric signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electric signal. Optionally, the radio frequency circuit 2004 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 2004 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 2004 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 2005 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 2005 is a touch display screen, the display screen 2005 also has the ability to capture touch signals on or over the surface of the display screen 2005. The touch signal may be input to the processor 2001 as a control signal for processing. At this point, the display screen 2005 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 2005, disposed on the front panel of terminal 2000; in other embodiments, there may be at least two display screens 2005, respectively disposed on different surfaces of the terminal 2000 or in a folded design; in still other embodiments, the display screen 2005 may be a flexible display disposed on a curved surface or a folded surface of terminal 2000. The display screen 2005 may even be arranged as a non-rectangular irregular figure, i.e., a specially shaped screen. The display screen 2005 can be made of a material such as an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), and the like.
The audio circuitry 2007 may include a microphone and a speaker. The microphone is used for collecting sound waves of the user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 2001 for processing or to the radio frequency circuit 2004 so as to realize voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different positions of the terminal 2000. The microphone may also be an array microphone or an omnidirectional acquisition microphone. The speaker is used to convert electrical signals from the processor 2001 or the radio frequency circuit 2004 into sound waves. The speaker can be a traditional membrane speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 2007 may also include a headphone jack.
In some embodiments, terminal 2000 also includes one or more sensors 2009. The one or more sensors 2009 include, but are not limited to: acceleration sensor 2010, gyro sensor 2011, pressure sensor 2012, optical sensor 2013, and proximity sensor 2014.
The acceleration sensor 2010 may detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with the terminal 2000. For example, the acceleration sensor 2010 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 2001 may control the display screen 2005 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 2010. The acceleration sensor 2010 may also be used to collect motion data of a game or a user.
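By way of illustration only, the orientation decision described above can be sketched as follows; the function name and the axis convention (x along the short edge of the terminal, y along the long edge) are assumptions made for this sketch and are not part of this application.

```python
# Illustrative sketch: selecting a landscape or portrait UI from the gravity
# components reported by an acceleration sensor. Axis convention is assumed:
# x runs along the short edge of the terminal, y along the long edge.
def choose_orientation(gravity_x: float, gravity_y: float) -> str:
    # When the terminal is held upright, gravity acts mainly along the long (y)
    # axis; when it is turned on its side, gravity acts mainly along the short
    # (x) axis, so the larger component decides the view.
    return "portrait" if abs(gravity_y) >= abs(gravity_x) else "landscape"

print(choose_orientation(0.3, 9.7))   # upright  -> portrait
print(choose_orientation(9.6, 0.4))   # sideways -> landscape
```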
The gyro sensor 2011 may detect the body orientation and rotation angle of the terminal 2000, and may cooperate with the acceleration sensor 2010 to capture the user's 3D motion on the terminal 2000. Based on the data collected by the gyro sensor 2011, the processor 2001 may implement functions such as motion sensing (for example, changing the UI according to the user's tilting operation), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 2012 may be disposed on a side frame of the terminal 2000 and/or beneath the display screen 2005. When the pressure sensor 2012 is disposed on a side frame of the terminal 2000, it can detect the user's holding signal on the terminal 2000, and the processor 2001 performs left- or right-hand recognition or shortcut operations according to the holding signal collected by the pressure sensor 2012. When the pressure sensor 2012 is disposed beneath the display screen 2005, the processor 2001 controls operable controls on the UI according to the user's pressure operation on the display screen 2005. The operable controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
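As an illustrative sketch only, the holding-hand recognition could look like the following; the frame-to-hand mapping and the threshold are assumptions made for this sketch and are not part of this application.

```python
# Illustrative sketch: inferring the holding hand from pressure readings on the
# two side frames. The mapping of frames to hands and the threshold are assumed.
def detect_holding_hand(left_frame_pressure: float,
                        right_frame_pressure: float,
                        threshold: float = 0.5) -> str:
    if max(left_frame_pressure, right_frame_pressure) < threshold:
        return "unknown"            # no firm grip detected
    # Assumption for this sketch: the palm presses harder on the frame on the
    # same side as the holding hand.
    return "left" if left_frame_pressure > right_frame_pressure else "right"

print(detect_holding_hand(1.4, 0.2))  # -> left
```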
The optical sensor 2013 is used to collect the ambient light intensity. In one embodiment, the processor 2001 may control the display brightness of the display screen 2005 according to the ambient light intensity collected by the optical sensor 2013. Specifically, when the ambient light intensity is high, the display brightness of the display screen 2005 is increased; when the ambient light intensity is low, the display brightness of the display screen 2005 is decreased. In another embodiment, the processor 2001 may also dynamically adjust the shooting parameters of the camera assembly 2006 according to the ambient light intensity collected by the optical sensor 2013.
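A minimal sketch of such a brightness adjustment is given below; the light-to-brightness mapping and the numeric ranges are assumptions for illustration only.

```python
# Illustrative sketch: mapping ambient light intensity (lux) to a display
# brightness level, clamped to an assumed panel range of 10..255.
def brightness_from_ambient(lux: float,
                            min_level: int = 10,
                            max_level: int = 255,
                            max_lux: float = 10000.0) -> int:
    ratio = max(0.0, min(lux / max_lux, 1.0))   # 0 in darkness, 1 in bright daylight
    return int(min_level + ratio * (max_level - min_level))

print(brightness_from_ambient(50))     # dim room        -> low brightness
print(brightness_from_ambient(8000))   # bright outdoors -> near-maximum brightness
```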
The proximity sensor 2014, also known as a distance sensor, is typically disposed on the front panel of the terminal 2000. The proximity sensor 2014 is used to collect the distance between the user and the front face of the terminal 2000. In one embodiment, when the proximity sensor 2014 detects that the distance between the user and the front face of the terminal 2000 gradually decreases, the processor 2001 controls the display screen 2005 to switch from the screen-on state to the screen-off state; when the proximity sensor 2014 detects that the distance between the user and the front face of the terminal 2000 gradually increases, the processor 2001 controls the display screen 2005 to switch from the screen-off state to the screen-on state.
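A minimal sketch of this screen-state switching is shown below; the function name and the sample-based interface are assumptions for illustration only.

```python
# Illustrative sketch: deciding the screen-on state from two consecutive
# proximity-sensor distance samples (centimetres).
def next_screen_state(previous_distance_cm: float,
                      current_distance_cm: float,
                      screen_on: bool) -> bool:
    if screen_on and current_distance_cm < previous_distance_cm:
        return False   # terminal is approaching the face: switch the screen off
    if not screen_on and current_distance_cm > previous_distance_cm:
        return True    # terminal is moving away from the face: switch the screen on
    return screen_on   # otherwise keep the current state

print(next_screen_state(10.0, 2.0, screen_on=True))    # -> False (screen off)
print(next_screen_state(2.0, 12.0, screen_on=False))   # -> True  (screen on)
```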
Those skilled in the art will appreciate that the configuration shown in fig. 20 does not constitute a limitation of the terminal 2000, and the terminal may include more or fewer components than shown, combine some components, or adopt a different arrangement of components.
The present application further provides a computer device comprising a processor and a memory, wherein the memory stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the method for using the virtual prop provided in any of the above exemplary embodiments.
The present application further provides a computer-readable storage medium having at least one instruction, at least one program, a code set, or an instruction set stored therein, which is loaded and executed by a processor to implement the method for using the virtual prop provided in any of the above exemplary embodiments.
The present application also provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the computer device executes the method for using the virtual prop provided in the above-mentioned optional implementation manner.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.
Claims (8)
1. A method for using a virtual prop, the method comprising:
displaying a virtual environment picture, a first use control and a second use control, wherein the virtual environment picture comprises a master virtual character, a first virtual prop is arranged on a first limb of the master virtual character, and a second virtual prop is arranged on a second limb of the master virtual character; an aiming control is also displayed on the virtual environment picture; the first virtual prop is a traction prop and comprises a launching device, a launching component and a traction component, wherein the launching device is fixed on the first limb of the master virtual character, the traction component is connected with the launching device, and the launching component is connected with the traction component;
in response to receiving a trigger operation on the first use control, controlling the master virtual character to use the launching device and launch the launching component in the aiming direction of the aiming control;
in response to the launching component being launched to a target position and fixed, controlling the traction component to pull the master virtual character to move to the target position;
in response to the existence of an obstacle on a moving path of the master virtual character to the target position, periodically emitting a collision box in a target direction pointing to the target position, starting from the position of the master virtual character, the collision box being used for detecting a collision with a three-dimensional virtual model in a virtual environment;
in response to the collision of the collision box with the three-dimensional virtual model of the obstacle, acquiring a collision point of the collision;
taking the collision point as a starting point, making a vector perpendicular to the target direction, and determining the vector as the avoiding direction of the master virtual character;
controlling the master virtual character to avoid the obstacle and continue moving to the target position according to the avoiding direction;
determining the distance by which the master virtual character needs to dodge according to the position of the collision point on the collision box, or controlling the master virtual character to move a fixed distance along the avoiding direction;
controlling the master virtual character to use the second virtual prop in response to receiving a trigger operation on the second use control during the use of the first virtual prop;
after controlling the traction component to pull the master virtual character to move to the target position in response to the launching component being launched to the target position and fixed, the method further comprises:
acquiring a first view direction of the master virtual character in the process of moving to the target position;
in response to a first steering angle between the first view direction and a moving direction being greater than a first angle threshold, timing a first duration for which the first steering angle is greater than the first angle threshold, wherein the moving direction is the direction from the position of the master virtual character to the target position;
in response to the first duration being greater than a first time threshold, controlling the traction component to stop traction of the master virtual character; and,
acquiring a second view direction of the master virtual character, wherein the second view direction is the view direction at the moment when the master virtual character starts to be pulled by the traction component;
in response to a second steering angle between the second view direction and the moving direction being greater than a second angle threshold, timing a second duration for which the second steering angle is greater than the second angle threshold;
and in response to the second duration being greater than a second time threshold, controlling the traction component to stop traction of the master virtual character, wherein the second angle threshold is less than the first angle threshold or the second time threshold is less than the first time threshold.
2. The method of claim 1, wherein the controlling the master virtual character to use the launching device and launch the launching component in the aiming direction of the aiming control in response to receiving the trigger operation on the first use control comprises:
in response to receiving the trigger operation on the first use control, emitting a first detection ray along the aiming direction of the aiming control with the position of the launching device as a starting point;
and controlling the launching component to launch to the target position along the aiming direction and be fixed, in response to the first detection ray intersecting a three-dimensional virtual model in the virtual environment at the target position within a maximum range.
3. The method according to claim 1 or 2, wherein the controlling the master virtual character to use the second virtual prop in response to receiving a trigger operation on the second use control during the use of the first virtual prop comprises:
controlling the master virtual character to use the second virtual prop in response to receiving the trigger operation on the second use control in the process of the launching component being launched;
or,
controlling the master virtual character to use the second virtual prop in response to receiving the trigger operation on the second use control in the process of the traction component pulling the master virtual character to move.
4. The method according to claim 1 or 2, characterized in that the method further comprises:
and controlling the master virtual character to cross the traversable terrain in response to the master virtual character reaching the target position and the traversable terrain existing above the target position.
5. The method of claim 4, wherein the controlling the master virtual character to cross the traversable terrain in response to the master virtual character reaching the target position and traversable terrain existing above the target position comprises:
in response to the master virtual character reaching the target position, emitting a second detection ray below the master virtual character;
in response to the second detection ray detecting a target surface, acquiring the ray length of the second detection ray;
in response to the ray length being greater than a first distance threshold, acquiring model information of the terrain model to which the target position belongs;
and in response to the model information including the traversable terrain located above the target position and the vertical distance between the target position and the traversable terrain being less than a second distance threshold, controlling the master virtual character to cross the traversable terrain.
6. An apparatus for using a virtual prop, the apparatus comprising:
the display module is used for displaying a virtual environment picture, a first use control and a second use control, wherein the virtual environment picture comprises a master virtual character, a first virtual prop is arranged on a first limb of the master virtual character, and a second virtual prop is arranged on a second limb of the master virtual character; an aiming control is also displayed on the virtual environment picture; the first virtual prop comprises a launching device, a launching component and a traction component, wherein the launching device is fixed on the first limb of the master virtual character, the traction component is connected with the launching device, and the launching component is connected with the traction component;
the interaction module is used for receiving the trigger operation on the first use control;
the control module is used for controlling, in response to receiving the trigger operation on the first use control, the master virtual character to use the launching device and launch the launching component in the aiming direction of the aiming control;
the control module is further used for controlling, in response to the launching component being launched to a target position and fixed, the traction component to pull the master virtual character to move to the target position;
the control module is further configured to periodically emit, in response to an obstacle existing on the moving path of the master virtual character to the target position, a collision box in a target direction pointing to the target position with the position of the master virtual character as a starting point, where the collision box is configured to detect a collision with a three-dimensional virtual model in the virtual environment;
the control module is further used for acquiring, in response to the collision box colliding with the three-dimensional virtual model of the obstacle, the collision point of the collision;
the control module is further configured to take the collision point as a starting point, make a vector perpendicular to the target direction, and determine the vector as the avoiding direction of the master virtual character;
the control module is further used for controlling the master virtual character to avoid the obstacle and continue moving to the target position according to the avoiding direction;
the interaction module is further used for receiving a trigger operation on the second use control;
the control module is further configured to control the master virtual character to use the second virtual item in response to receiving a trigger operation on the second use control in a use process of the first virtual item;
the device further comprises:
the acquisition module is used for acquiring a first view direction of the master virtual character in the process of moving to the target position;
a timing module, configured to time, in response to a first steering angle between the first view direction and a moving direction being greater than a first angle threshold, a first duration for which the first steering angle is greater than the first angle threshold, where the moving direction is the direction from the position of the master virtual character to the target position;
the control module is further configured to control the traction component to stop traction of the master virtual character in response to the first duration being greater than a first time threshold;
the acquisition module is further configured to acquire a second view direction of the master virtual character, where the second view direction is the view direction at the moment when the master virtual character starts to be pulled by the traction component;
the timing module is further configured to time, in response to a second steering angle between the second view direction and the moving direction being greater than a second angle threshold, a second duration for which the second steering angle is greater than the second angle threshold;
the control module is further configured to control the traction component to stop traction of the master virtual character in response to the second duration being greater than a second time threshold, where the second angle threshold is smaller than the first angle threshold or the second time threshold is smaller than the first time threshold.
7. A computer device, comprising: a processor and a memory, wherein at least one program is stored in the memory, and the at least one program is loaded and executed by the processor to implement the method for using a virtual prop according to any one of claims 1 to 5.
8. A computer-readable storage medium, wherein at least one program is stored in the storage medium, and the at least one program is loaded and executed by a processor to implement the method for using the virtual item as claimed in any one of claims 1 to 5.
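The obstacle-avoidance steps recited in claim 1 above may be illustrated by the following engine-agnostic sketch; the vector helpers, the collision query callback, the choice of the world up axis for the perpendicular vector, and the numeric constants are assumptions made for this sketch only and do not describe the claimed implementation.

```python
# Illustrative sketch of the claim-1 obstacle avoidance: during traction, a
# collision box is emitted toward the target position; if it hits an obstacle's
# 3D model, the character dodges along a vector perpendicular to the target
# direction before continuing toward the target.
import math
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float
    def sub(self, o): return Vec3(self.x - o.x, self.y - o.y, self.z - o.z)
    def add(self, o): return Vec3(self.x + o.x, self.y + o.y, self.z + o.z)
    def scale(self, s): return Vec3(self.x * s, self.y * s, self.z * s)
    def cross(self, o): return Vec3(self.y * o.z - self.z * o.y,
                                    self.z * o.x - self.x * o.z,
                                    self.x * o.y - self.y * o.x)
    def normalized(self):
        length = math.sqrt(self.x ** 2 + self.y ** 2 + self.z ** 2)
        return self.scale(1.0 / length) if length > 1e-6 else Vec3(0.0, 0.0, 0.0)

WORLD_UP = Vec3(0.0, 1.0, 0.0)   # assumed up axis for building the perpendicular
DODGE_DISTANCE = 1.5             # assumed fixed dodge distance per update
MOVE_STEP = 1.0                  # assumed traction step per update

def traction_tick(character_pos, target_pos, query_collision_point):
    """One traction update. `query_collision_point(start, direction)` stands in
    for the periodically emitted collision box: it returns the collision point
    with an obstacle's 3D model, or None if the path is clear."""
    target_dir = target_pos.sub(character_pos).normalized()
    hit_point = query_collision_point(character_pos, target_dir)
    if hit_point is not None:
        # A vector perpendicular to the target direction (claim 1 anchors this
        # vector at the collision point); the character then moves a fixed
        # distance along it to dodge the obstacle.
        avoid_dir = target_dir.cross(WORLD_UP).normalized()
        return character_pos.add(avoid_dir.scale(DODGE_DISTANCE))
    return character_pos.add(target_dir.scale(MOVE_STEP))

# Usage with a stand-in collision query that reports an obstacle once:
hits = iter([Vec3(2.0, 0.0, 0.0), None])
pos = traction_tick(Vec3(0, 0, 0), Vec3(10, 0, 0), lambda s, d: next(hits))
pos = traction_tick(pos, Vec3(10, 0, 0), lambda s, d: next(hits))
print(pos)
```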
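The view-direction condition of claim 1 for releasing traction can likewise be sketched as follows; the threshold values, the per-frame interface, and the use of degrees are assumptions for illustration only.

```python
# Illustrative sketch of the claim-1 traction-stop condition: traction stops
# when the steering angle between the view direction and the moving direction
# stays above an angle threshold for longer than a time threshold.
def make_traction_stop_checker(angle_threshold_deg: float, time_threshold_s: float):
    state = {"accumulated": 0.0}
    def update(steering_angle_deg: float, dt_s: float) -> bool:
        """Call once per frame; returns True when traction should stop."""
        if steering_angle_deg > angle_threshold_deg:
            state["accumulated"] += dt_s        # keep timing while the angle stays large
        else:
            state["accumulated"] = 0.0          # angle back below threshold: reset timer
        return state["accumulated"] > time_threshold_s
    return update

# Per claim 1, a second checker with a smaller angle threshold or smaller time
# threshold can watch the second view direction in parallel.
check_first = make_traction_stop_checker(angle_threshold_deg=90.0, time_threshold_s=1.0)
for _ in range(40):                              # 40 frames at ~33 ms with a large angle
    stop = check_first(120.0, 0.033)
print(stop)                                      # -> True once the duration exceeds 1 s
```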
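Similarly, the traversable-terrain check of claim 5 can be sketched as follows; the threshold values and the terrain-model lookup structure are assumptions for illustration only.

```python
# Illustrative sketch of the claim-5 check: after the master virtual character
# reaches the target position, a second detection ray is cast downward; if the
# ray is longer than a first distance threshold and the terrain model reports
# traversable terrain close enough above the target position, the character is
# allowed to cross it.
FIRST_DISTANCE_THRESHOLD = 0.5    # assumed: minimum downward ray length
SECOND_DISTANCE_THRESHOLD = 2.0   # assumed: maximum vertical gap to terrain above

def should_cross_terrain(downward_ray_length, terrain_model_info) -> bool:
    """`terrain_model_info` is a stand-in dict for the terrain model's metadata,
    with keys 'has_traversable_above' and 'vertical_gap'."""
    if downward_ray_length is None:                      # the ray detected no surface
        return False
    if downward_ray_length <= FIRST_DISTANCE_THRESHOLD:  # character is on the ground
        return False
    return (terrain_model_info.get("has_traversable_above", False)
            and terrain_model_info.get("vertical_gap", float("inf"))
                < SECOND_DISTANCE_THRESHOLD)

print(should_cross_terrain(1.2, {"has_traversable_above": True, "vertical_gap": 1.0}))  # True
print(should_cross_terrain(0.2, {"has_traversable_above": True, "vertical_gap": 1.0}))  # False
```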
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010948781.2A CN112044073B (en) | 2020-09-10 | 2020-09-10 | Using method, device, equipment and medium of virtual prop |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010948781.2A CN112044073B (en) | 2020-09-10 | 2020-09-10 | Using method, device, equipment and medium of virtual prop |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112044073A CN112044073A (en) | 2020-12-08 |
CN112044073B true CN112044073B (en) | 2022-09-20 |
Family
ID=73611101
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010948781.2A Active CN112044073B (en) | 2020-09-10 | 2020-09-10 | Using method, device, equipment and medium of virtual prop |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112044073B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116672712A (en) * | 2020-12-29 | 2023-09-01 | 苏州幻塔网络科技有限公司 | Prop control method and device, electronic equipment and storage medium |
CN113680053B (en) * | 2021-08-30 | 2023-06-13 | 腾讯科技(深圳)有限公司 | Virtual object control method, device, terminal and storage medium |
CN118022327A (en) * | 2022-11-07 | 2024-05-14 | 腾讯科技(深圳)有限公司 | Control method, device and equipment for virtual props and storage medium |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9319842B2 (en) * | 2011-06-27 | 2016-04-19 | At&T Intellectual Property I, L.P. | Mobile device configured point and shoot type weapon |
US9704350B1 (en) * | 2013-03-14 | 2017-07-11 | Harmonix Music Systems, Inc. | Musical combat game |
CN111068324B (en) * | 2019-11-26 | 2022-05-17 | 腾讯科技(深圳)有限公司 | Virtual object control method, device, equipment and storage medium |
CN111097167B (en) * | 2019-11-28 | 2022-05-20 | 腾讯科技(深圳)有限公司 | Movement control method, server, electronic device, and storage medium |
CN111054065A (en) * | 2019-12-06 | 2020-04-24 | 腾讯科技(深圳)有限公司 | Control method and device of virtual prop, storage medium and electronic device |
CN111111186B (en) * | 2019-12-26 | 2023-08-18 | 珠海金山数字网络科技有限公司 | Virtual character boarding method and device |
CN111265856A (en) * | 2020-01-17 | 2020-06-12 | 腾讯科技(深圳)有限公司 | Control method and device of virtual operation object, storage medium and electronic device |
CN111589132A (en) * | 2020-04-26 | 2020-08-28 | 腾讯科技(深圳)有限公司 | Virtual item display method, computer equipment and storage medium |
2020-09-10 CN CN202010948781.2A patent/CN112044073B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN112044073A (en) | 2020-12-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110694261B (en) | Method, terminal and storage medium for controlling virtual object to attack | |
CN110448891B (en) | Method, device and storage medium for controlling virtual object to operate remote virtual prop | |
CN110917619B (en) | Interactive property control method, device, terminal and storage medium | |
CN112044073B (en) | Using method, device, equipment and medium of virtual prop | |
CN110613938B (en) | Method, terminal and storage medium for controlling virtual object to use virtual prop | |
CN111589150B (en) | Control method and device of virtual prop, electronic equipment and storage medium | |
CN110538459A (en) | Method, apparatus, device and medium for throwing virtual explosives in virtual environment | |
CN110465098B (en) | Method, device, equipment and medium for controlling virtual object to use virtual prop | |
CN112221141B (en) | Method and device for controlling virtual object to use virtual prop | |
CN111589149B (en) | Using method, device, equipment and storage medium of virtual prop | |
CN111714893A (en) | Method, device, terminal and storage medium for controlling virtual object to recover attribute value | |
CN111659119B (en) | Virtual object control method, device, equipment and storage medium | |
CN110507990B (en) | Interaction method, device, terminal and storage medium based on virtual aircraft | |
CN112316421B (en) | Equipment method, device, terminal and storage medium of virtual item | |
CN110876849B (en) | Virtual vehicle control method, device, equipment and storage medium | |
CN111389005B (en) | Virtual object control method, device, equipment and storage medium | |
CN112402964B (en) | Using method, device, equipment and storage medium of virtual prop | |
CN113041622A (en) | Virtual throwing object throwing method in virtual environment, terminal and storage medium | |
CN111659116A (en) | Virtual vehicle control method, device, equipment and medium | |
CN112138384A (en) | Using method, device, terminal and storage medium of virtual throwing prop | |
CN111760284A (en) | Virtual item control method, device, equipment and storage medium | |
CN111298441A (en) | Using method, device, equipment and storage medium of virtual prop | |
CN114130031A (en) | Using method, device, equipment, medium and program product of virtual prop | |
CN111249726B (en) | Operation method, device, equipment and readable medium of virtual prop in virtual environment | |
CN111389000A (en) | Using method, device, equipment and medium of virtual prop |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |