WO2021036577A1 - Method for controlling a virtual object and related apparatus

Method for controlling a virtual object and related apparatus

Info

Publication number
WO2021036577A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
target area
virtual object
virtual environment
area
Prior art date
Application number
PCT/CN2020/103006
Other languages
English (en)
French (fr)
Inventor
刘智洪
Original Assignee
腾讯科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Priority to JP2021555308A (published as JP2022525172A)
Priority to SG11202109543UA
Priority to KR1020217029447A (published as KR102619439B1)
Publication of WO2021036577A1
Priority to US17/459,037 (published as US20210387087A1)

Classifications

    • A63F 13/40: Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F 13/2145: Input arrangements for locating contacts on a surface which is also a display device, e.g. touch screens
    • A63F 13/219: Input arrangements for aiming at specific areas on the display, e.g. light-guns
    • A63F 13/42: Processing input control signals by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F 13/426: Mapping of input signals involving on-screen location information, e.g. screen coordinates of an area at which the player is aiming with a light gun
    • A63F 13/5258: Changing parameters of virtual cameras by dynamically adapting the position of the virtual camera to keep a game object or game character in its viewing frustum, e.g. for tracking a character or a ball
    • A63F 13/533: Additional visual information provided to the game scene for prompting the player, e.g. by displaying a game menu
    • A63F 13/5378: Additional visual information using indicators, e.g. showing the condition of a game character on screen, for displaying an additional top view, e.g. radar screens or maps
    • A63F 13/55: Controlling game characters or game objects based on the game progress
    • A63F 13/822: Strategy games; Role-playing games
    • A63F 13/837: Shooting of targets
    • G06F 3/04815: Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G06F 3/0488: Interaction techniques based on graphical user interfaces using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • A63F 2300/1075: Input arrangements for converting player-generated signals into game device control signals, adapted to detect the point of contact of the player on a surface using a touch screen
    • A63F 2300/66: Methods for processing data by generating or executing the game program for rendering three-dimensional images
    • A63F 2300/8076: Shooting

Definitions

  • This application relates to the field of computers, in particular to the control of virtual objects.
  • users can manipulate virtual objects in the virtual environment to perform activities such as walking, running, climbing, shooting, and fighting.
  • When the user wants to control the virtual object to perform different activities, the corresponding user interface (UI) control on the user interface needs to be triggered.
  • There are UI controls corresponding to the various activities performed by virtual objects on the user interface, and the controls corresponding to different activities are located in different positions of the user interface.
  • For example, the direction button is located on the left side of the user interface, and the running button is located on the right side of the user interface.
  • The embodiments of the present application provide a method and related apparatus for controlling a virtual object, which can solve the problem in the related art that, when controlling a virtual object to perform various activities, the UI control corresponding to each activity needs to be triggered, so the user cannot quickly control the virtual object to perform the various activities.
  • the technical solution is as follows:
  • a method of controlling a virtual object including:
  • displaying a user interface, the user interface including a virtual environment screen and an interactive panel area, the virtual environment screen being a screen for observing the virtual environment from the perspective of a virtual object;
  • receiving a shortcut operation on a target area on the user interface, the target area including an area that belongs to the virtual environment screen and does not belong to the interactive panel area; and
  • controlling the virtual object to perform a corresponding activity in the virtual environment according to the shortcut operation.
  • an apparatus for controlling a virtual object including:
  • a display module for displaying a user interface including a virtual environment screen and an interactive panel area, the virtual environment screen is a screen for observing the virtual environment from the perspective of a virtual object;
  • a receiving module configured to receive a shortcut operation on a target area on the user interface, the target area including an area belonging to the virtual environment screen and not belonging to the interactive panel area;
  • the control module is configured to control the virtual object to perform corresponding activities in the virtual environment according to the shortcut operation.
  • a computer device, comprising a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the method for controlling a virtual object as described in the above aspect.
  • a storage medium is provided, the storage medium is used to store a computer program, and the computer program is used to execute the method for controlling a virtual object described in the above aspect.
  • a computer program product including instructions, which when run on a computer, causes the computer to execute the method for controlling a virtual object described in the above aspect.
  • the user can control the virtual object to perform the corresponding activity by performing shortcut operations in the target area, without the user triggering the UI control corresponding to the activity and without the user memorizing the function and location of the UI control, thereby controlling the virtual object to perform corresponding activities in the virtual environment according to shortcut operations.
  • Fig. 1 is a schematic diagram of an interface for controlling a virtual object to open a sight in related technologies according to an exemplary embodiment of the present application;
  • Fig. 2 is a schematic diagram of an interface for controlling a virtual object to open a sight provided by an exemplary embodiment of the present application
  • Fig. 3 is a block diagram of an implementation environment provided by an exemplary embodiment of the present application.
  • Fig. 4 is a flowchart of a method for controlling a virtual object provided by an exemplary embodiment of the present application
  • Fig. 5 is a schematic diagram of a camera model corresponding to the perspective of a virtual object provided by an exemplary embodiment of the present application
  • Fig. 6 is a schematic diagram of an interface for establishing a rectangular coordinate system on a user interface provided by an exemplary embodiment of the present application;
  • Fig. 7 is a schematic diagram of an interface for controlling a virtual object to get up provided by an exemplary embodiment of the present application
  • FIG. 8 is a schematic diagram of an interface for controlling virtual objects to start virtual props in a related technology provided by an exemplary embodiment of the present application;
  • Fig. 9 is a schematic diagram of an interface for controlling a virtual object to start a virtual prop according to an exemplary embodiment of the present application.
  • Fig. 10 is a schematic diagram of an interface for controlling virtual objects to throw virtual props according to an exemplary embodiment of the present application
  • Fig. 11 is a flowchart of a method for controlling a virtual object to open a sight provided by an exemplary embodiment of the present application
  • Fig. 12 is a flowchart of a method for controlling a virtual object to close a sight provided by an exemplary embodiment of the present application
  • FIG. 13 is a flowchart of a method for controlling a virtual object to fire according to an exemplary embodiment of the present application
  • FIG. 14 is a flowchart of a method for controlling a virtual object to get up and throw virtual props according to an exemplary embodiment of the present application
  • Fig. 15 is a block diagram of a device for controlling virtual objects provided by an exemplary embodiment of the present application.
  • FIG. 16 is a schematic diagram of an apparatus structure of a computer device provided by an exemplary embodiment of the present application.
  • Virtual environment refers to the virtual environment displayed (or provided) when the application is running on the terminal.
  • the virtual environment may be a simulation environment of the real world, a semi-simulation and semi-fictional environment, or a purely fictitious environment.
  • the virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment, which is not limited in this application.
  • In the following embodiments, a three-dimensional virtual environment is taken as an example.
  • Virtual object refers to a movable object in the virtual environment.
  • the movable objects may be virtual characters, virtual animals, cartoon characters, etc., such as: characters, animals, plants, oil barrels, walls, stones, etc. displayed in a three-dimensional virtual environment.
  • The virtual object is a three-dimensional model created based on skeletal animation technology.
  • Each virtual object has its own shape and volume in the three-dimensional virtual environment, and occupies a part of the space in the three-dimensional virtual environment.
  • Virtual props refer to the props that a virtual object can use in a virtual environment.
  • Virtual props include: virtual weapons used by the virtual objects and accessories corresponding to the virtual weapons, virtual food, virtual medicines, clothes, accessories, etc.
  • The following description takes virtual weapons as an example of the virtual props.
  • Virtual weapons include: pistols, rifles, sniper rifles and other general firearms, as well as bows and arrows, crossbows, spears, daggers, swords, knives, axes, bombs, missiles, etc.
  • First-person shooting (FPS) game refers to a shooting game that users play from a first-person perspective.
  • the screen of the virtual environment in the game is a screen that observes the virtual environment from the perspective of the first virtual object.
  • In the game, at least two virtual objects compete in a single-match battle mode in the virtual environment.
  • The virtual objects avoid attacks initiated by other virtual objects and dangers in the virtual environment (such as gas circles and swamps) in order to survive in the virtual environment.
  • When the life value of a virtual object in the virtual environment is zero, the virtual object's life in the virtual environment ends, and the virtual object that finally survives in the virtual environment is the winner.
  • The battle starts at the moment the first client joins the battle and ends at the moment the last client exits the battle.
  • Each client can control one or more virtual objects in the virtual environment.
  • the competition mode of the battle may include a single-player battle mode, a two-player team battle mode, or a multi-player team battle mode. The embodiment of the present application does not limit the battle mode.
  • Trigger control refers to a user interface (UI) control, that is, any visual control or element that can be seen on the user interface of the application, such as pictures, input boxes, text boxes, buttons, and labels. Some UI controls respond to user operations; for example, if the user triggers the attack control corresponding to a pistol, the virtual object is controlled to use the pistol to attack.
  • the "equipment, carrying or assembling" virtual props in this application refer to the virtual props owned by the virtual object.
  • the virtual object has a backpack, there is a backpack compartment in the backpack, the virtual prop is stored in the virtual object's backpack, or the virtual object is in use Virtual props.
  • The methods provided in this application can be applied to virtual reality applications, three-dimensional map programs, military simulation programs, first-person shooting (FPS) games, multiplayer online battle arena (MOBA) games, and the like.
  • The following embodiments are described using a game application as an example.
  • Games based on virtual environments are often composed of one or more maps of the game world.
  • The virtual environment in the game simulates scenes of the real world. Users can manipulate virtual objects in the game to perform actions in the virtual environment such as walking, running, jumping, shooting, fighting, driving, throwing, standing up, and lying prone; the game is highly interactive, and multiple users can team up online for competitive matches.
  • UI controls are set on the user interface, and the user controls the virtual object to perform different activities in the virtual environment by triggering different UI controls. For example, when the user triggers the UI control corresponding to the movement function (the direction buttons), the virtual object is controlled to move in the virtual environment; when the virtual object is using a pistol and the user touches the UI control corresponding to the fire function (the fire button, that is, attacking with a virtual weapon), the virtual object is controlled to use the pistol to attack in the virtual environment.
  • the UI controls are distributed in various areas on the user interface, and the user needs to memorize the functions of each UI control and the area in which it is located, so that the virtual objects can be quickly controlled to perform corresponding activities.
  • FIG. 1 shows a schematic diagram of an interface for controlling a virtual object to open a sight in a related technology provided by an exemplary embodiment of the present application.
  • The aiming control 101, the attack controls 103, and the movement control 104 are displayed on the ready-to-open-scope interface 10.
  • The aiming control 101 is located on the right side of the ready-to-open-scope interface 10, in the area below the minimap corresponding to the virtual environment.
  • the number of aiming controls 101 on the interface 10 is one. In some embodiments, the number of aiming controls 101 is two or more. This application does not limit the number and positions of aiming controls 101.
  • the user can customize the position and number of the aiming control 101.
  • For example, the user sets the number of aiming controls 101 to three, all located in the right area of the ready-to-open-scope interface 10.
  • The user controls the virtual object to open the scope 105 by triggering the aiming control 101.
  • The attack controls 103 are located in the left area and the right area of the ready-to-open-scope interface 10, respectively, and the attack control 103 in the left area is located at the upper left of the movement control 104.
  • the number of attack controls 103 on the interface 10 is two.
  • the number of attack controls 103 may also be one or three or more.
  • the user can customize the position and number of the attack controls 103.
  • For example, the number of attack controls 103 set by the user is three, located respectively in the left area, right area, and middle area of the ready-to-open-scope interface 10.
  • The movement control 104 is located in the left area of the ready-to-open-scope interface 10, at the lower right of the attack control 103.
  • the user can control the movement of the virtual object in the virtual environment by triggering the movement control 104.
  • After the scope is opened, the user interface is as shown in FIG. 1(b), and the close control 102 is also displayed on the opened-scope interface 11.
  • the close control 102 is located where the original aiming control 101 is located, that is, the display style of the aiming control 101 is switched to the display style of the close control 102.
  • the user controls the virtual object to open the scope 105 corresponding to the firearm used.
  • the firearm changes from being displayed in the left area of the user interface to being displayed in the center of the user interface.
  • The scope 105 is displayed in the center of the opened-scope interface 11, in an aiming state. If the user triggers the close control 102, the scope 105 is closed, and the opened-scope interface 11 changes back to the ready-to-open-scope interface 10 shown in FIG. 1(a).
  • In the related art, whenever the virtual object is controlled to perform an action, the corresponding UI control needs to be triggered.
  • For example, to control the virtual object to open the scope, the user needs to trigger the aiming control 101, and to control the virtual object to stop this action, the user also needs to trigger the corresponding close control 102.
  • The steps are cumbersome, and the user needs to remember the function and location of each UI control; the user may forget which UI control corresponds to an action or where that control is located, which delays the virtual object in performing the corresponding action.
  • FIG. 2 shows a schematic diagram of an interface for controlling a virtual object provided by an exemplary embodiment of the present application.
  • the virtual prop used by the virtual object is a sniper rifle
  • the accessory corresponding to the virtual prop is the sight 113.
  • In the related art, the aiming control 101 and the close control 102 are displayed on the user interface, whereas the scope-opening interface 110 in FIG. 2 does not display any control for opening the scope 113.
  • the user can control the virtual object to quickly open the sight 113 through a shortcut operation.
  • the shortcut operation is a double-click operation.
  • A target area 131 is set on the user interface; the target area 131 includes an area that belongs to the virtual environment screen and does not belong to the interactive panel area 112.
  • the target area 131 is the right area in the user interface, and the right area is an area that does not belong to the interactive panel area 112.
  • the user interface displays the opening scope interface 110, and the scope 113 of the virtual prop (sniper rifle) 111 used by the virtual object is located in the center of the user interface, in an aiming state.
  • The scope 113 has been opened, which has the same effect as the user opening the scope 105 by clicking the aiming control 101 as shown in FIG. 1(b).
  • the user can use a shortcut operation to close the scope 113.
  • the shortcut operation for closing the sight 113 is a double-click operation.
  • The target area 131 for closing the scope 113 and the target area 131 for opening the scope 113 may be the same area or different areas.
  • When the user performs the shortcut operation in the target area, the virtual object is controlled to close the scope 113.
  • the virtual prop (sniper rifle) used by the virtual object is changed to the state before the scope 113 is opened.
  • the user can set the range of the target area.
  • the user sets the center of the user interface as the center and a circular area with a radius of 10 units as the target area.
  • the shortcut operation may be at least one of a single-click operation, a double-click operation, a sliding operation, a drag operation, a long-press operation, a double-click and a long-press operation, and a two-finger sliding operation.
  • The user can also configure the controls corresponding to the activities performed by the virtual object.
  • For example, the control corresponding to an activity is removed from the user interface, or the user sets the position, number, and other attributes of the control corresponding to an activity. It can be understood that the user controls the virtual object to perform corresponding activities by performing different shortcut operations in the target area 131.
  • Fig. 3 shows a structural block diagram of a computer system provided by an exemplary embodiment of the present application.
  • the computer system 100 includes: a first terminal 120, a server 140, and a second terminal 160.
  • the first terminal 120 installs and runs an application program supporting the virtual environment.
  • the application can be any of virtual reality applications, three-dimensional map programs, military simulation programs, FPS games, MOBA games, and multiplayer gun battle survival games.
  • the first terminal 120 is a terminal used by the first user.
  • The first user uses the first terminal 120 to control a virtual object located in the virtual environment to perform activities, including but not limited to at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, shooting, throwing, and using virtual props.
  • the virtual object is a virtual character, such as a simulated character object or an animation character object.
  • the first terminal 120 is connected to the server 140 through a wireless network or a wired network.
  • the server 140 includes at least one of a server, multiple servers, a cloud computing platform, and a virtualization center.
  • the server 140 includes a processor 144 and a memory 142, and the memory 142 further includes a display module 1421, a control module 1422, and a receiving module 1423.
  • the server 140 is used to provide background services for applications supporting the three-dimensional virtual environment.
  • the server 140 is responsible for the main calculation work, and the first terminal 120 and the second terminal 160 are responsible for the secondary calculation work; or, the server 140 is responsible for the secondary calculation work, and the first terminal 120 and the second terminal 160 are responsible for the main calculation work;
  • the server 140, the first terminal 120, and the second terminal 160 adopt a distributed computing architecture to perform collaborative computing.
  • the second terminal 160 installs and runs an application program supporting the virtual environment.
  • the application can be any of virtual reality applications, three-dimensional map programs, military simulation programs, FPS games, MOBA games, and multiplayer gun battle survival games.
  • the second terminal 160 is a terminal used by the second user.
  • The second user uses the second terminal 160 to control a virtual object located in the virtual environment to perform activities, including but not limited to at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, shooting, throwing, and using virtual props.
  • the virtual object is a virtual character, such as a simulated character object or an animation character object.
  • the virtual object controlled by the first user and the virtual object controlled by the second user are in the same virtual environment.
  • the virtual object controlled by the first user and the virtual object controlled by the second user may belong to the same team, the same organization, have a friend relationship, or have temporary communication permissions.
  • the applications installed on the first terminal 120 and the second terminal 160 are the same, or the applications installed on the two terminals are the same type of application on different control system platforms.
  • the first terminal 120 may generally refer to one of multiple terminals
  • the second terminal 160 may generally refer to one of multiple terminals. This embodiment only uses the first terminal 120 and the second terminal 160 as examples.
  • the device types of the first terminal 120 and the second terminal 160 are the same or different.
  • The device types include at least one of a smartphone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, a laptop computer, and a desktop computer.
  • The following embodiments take a smartphone as an example of the terminal.
  • the number of the above-mentioned terminals may be more or less. For example, there may be only one terminal, or there may be dozens or hundreds of terminals, or a larger number.
  • the embodiments of the present application do not limit the number of terminals and device types.
  • FIG. 4 shows a flowchart of a method for controlling a virtual object provided by an exemplary embodiment of the present application.
  • The method can be applied to the first terminal 120 or the second terminal 160 in the computer system shown in FIG. 3, or to other terminals in the computer system.
  • the method includes the following steps:
  • Step 401 Display a user interface.
  • the user interface includes a virtual environment screen and an interactive panel area.
  • the virtual environment screen is a screen for observing the virtual environment from the perspective of a virtual object.
  • the angle of view refers to the viewing angle when the virtual object is observed in the virtual environment from the first person perspective or the third person perspective.
  • the angle of view is the angle when the virtual object is observed through the camera model in the virtual environment.
  • The camera model automatically follows the virtual object in the virtual environment; that is, when the position of the virtual object in the virtual environment changes, the position of the camera model changes with it, and the camera model always stays within a preset distance range of the virtual object in the virtual environment.
  • the relative position of the camera model and the virtual object does not change.
  • the camera model refers to the three-dimensional model located around the virtual object in the virtual environment.
  • When the first-person perspective is adopted, the camera model is located near the head of the virtual object or at the head of the virtual object.
  • When the third-person perspective is adopted, the camera model can be located behind the virtual object and bound to the virtual object, or can be located at any position a preset distance from the virtual object.
  • Through the camera model, the virtual object located in the virtual environment can be observed from different angles. Optionally, when the third-person perspective is the first-person over-the-shoulder perspective, the camera model is located behind the virtual object (for example, at the head and shoulders of the virtual character).
  • In addition to the first-person perspective and the third-person perspective, other perspectives are also possible, such as a top-view perspective; when the top-view perspective is adopted, the camera model can be located above the head of the virtual object, and the top-view perspective observes the virtual environment from an aerial angle.
  • the camera model is not actually displayed in the virtual environment, that is, the camera model is not displayed in the virtual environment displayed on the user interface.
  • the camera model is located at any position at a preset distance from the virtual object as an example.
  • A virtual object corresponds to one camera model, and the camera model can rotate with the virtual object as the rotation center, for example, with any point of the virtual object as the rotation center.
  • the camera model not only rotates in angle, but also shifts in displacement.
  • the distance between the camera model and the center of rotation remains unchanged. That is, the camera model is rotated on the surface of the sphere with the center of rotation as the center of the sphere, where any point of the virtual object can be the head, torso, or any point around the virtual object.
  • When the camera model observes the virtual object, the center of the camera model's angle of view points in the direction from the point where the camera model is located on the spherical surface toward the center of the sphere.
  • the camera model can also observe the virtual object at a preset angle in different directions of the virtual object.
  • a point is determined in the virtual object 11 as the rotation center 12, and the camera model rotates around the rotation center 12.
  • Optionally, the camera model is configured with an initial position, which is a position above and behind the virtual object (for example, a position behind the head).
  • the initial position is position 13, and when the camera model rotates to position 14 or position 15, the viewing angle direction of the camera model changes with the rotation of the camera model.
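  • The spherical camera behavior described above can be sketched in code. The following is a minimal illustration only, not part of the disclosure; the class name CameraModel, the yaw/pitch/radius parameters, and the coordinate convention are assumptions.

```python
import math

class CameraModel:
    """Minimal sketch of a camera orbiting a rotation center on a sphere of fixed radius."""

    def __init__(self, radius=3.0):
        self.radius = radius   # preset distance between camera and rotation center
        self.yaw = 0.0         # horizontal rotation angle, in radians
        self.pitch = 0.3       # vertical rotation angle, in radians

    def position(self, center):
        """Return the camera position on the sphere around `center` (a point of the virtual object)."""
        cx, cy, cz = center
        x = cx + self.radius * math.cos(self.pitch) * math.sin(self.yaw)
        y = cy + self.radius * math.sin(self.pitch)
        z = cz + self.radius * math.cos(self.pitch) * math.cos(self.yaw)
        return (x, y, z)

    def view_direction(self, center):
        """The view direction points from the camera position toward the center of the sphere."""
        px, py, pz = self.position(center)
        cx, cy, cz = center
        length = math.dist((px, py, pz), (cx, cy, cz))
        return ((cx - px) / length, (cy - py) / length, (cz - pz) / length)

# The camera follows the object: recomputing position(center) each frame keeps the
# camera within the preset distance (radius) of the virtual object.
rotation_center = (10.0, 1.6, 5.0)   # e.g. a point near the virtual object's head
camera = CameraModel()
print(camera.position(rotation_center), camera.view_direction(rotation_center))
```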
  • the virtual environment displayed on the virtual environment screen includes at least one element of mountains, flatlands, rivers, lakes, oceans, deserts, sky, plants, buildings, and vehicles.
  • the user interface includes an interactive panel area 112, as shown in FIG. 2.
  • the interactive panel area 112 is provided with UI controls, message sending controls, voice controls, expression controls, setting controls, etc., which control virtual objects to perform activities.
  • The above controls are used for the user to control the virtual object to perform corresponding activities in the virtual environment, to send messages to teammates in the same team (including text messages, voice messages, and emoji messages), to set the motion attributes of the virtual object in the virtual environment (for example, running speed), to set attributes of virtual weapons (for example, the sensitivity, attack range, and lethality of a firearm), or to display the position of the virtual object controlled by the user in the virtual environment (schematically, a thumbnail map of the virtual environment is displayed on the user interface).
  • the user can instantly know the current state of the virtual object through the interactive panel area 112, and can control the virtual object to perform corresponding activities through the interactive panel area 112 at any time.
  • the shape of the interactive panel area is rectangular or circular, or the shape of the interactive panel area corresponds to the shape of the UI controls on the user interface.
  • the shape of the interactive panel area is not limited in this application.
  • Step 402 Receive a shortcut operation on a target area on the user interface, where the target area includes an area that belongs to the virtual environment screen and does not belong to the interactive panel area.
  • the shortcut operation includes at least one of a double-click operation, a double-click and long-press operation, a two-finger horizontal sliding operation, and a two-finger vertical sliding operation.
  • the target area includes the following forms:
  • Schematically, the area corresponding to the virtual environment screen is referred to as the A area, and the interactive panel area is referred to as the B area. The target area is the area corresponding to the A area.
  • the target area includes an area that belongs to the virtual environment screen and does not belong to the interactive panel area.
  • The user can perform shortcut operations in the area that does not include the UI controls and the minimap, for example, in the area between the movement control and the attack control, which belongs to the virtual environment screen and does not belong to the interactive panel area.
  • the target area is the area corresponding to the A area and the B area.
  • the target area includes the area corresponding to the virtual environment picture and the interactive panel area.
  • the user can perform shortcut operations on the virtual environment screen and the corresponding area of the interactive panel area at the same time.
  • In this case, the user can also perform shortcut operations in the area where the UI controls or the minimap are located; that is, the user can perform shortcut operations anywhere on the user interface 110.
  • the target area is a part of the A area and the area corresponding to the B area.
  • the target area includes the area corresponding to the part of the virtual environment screen and the area corresponding to the interactive panel area.
  • the user can perform shortcut operations in the right area and the interactive panel area of the virtual environment screen, that is, the user can perform shortcut operations in the right area of the user interface 110 and the left area of the user interface 110.
  • the area corresponding to the part of the virtual environment picture may be any one of the left area, the right area, the upper area, and the lower area of the virtual environment picture.
  • the target area is a part of the B area and the area corresponding to the A area.
  • the target area is the area corresponding to the virtual environment screen and the area corresponding to part of the interactive panel area.
  • For example, the user can perform shortcut operations in the area corresponding to the virtual environment screen and the right interactive panel area; that is, the user can perform shortcut operations anywhere on the user interface 110 except the area corresponding to the left interactive panel area.
  • part of the interactive panel area may be any one side of the left interactive panel area, the right interactive panel area, the upper interactive panel area, and the lower interactive panel area.
  • the target area is an area corresponding to a partial area of the A area and a partial area of the B area.
  • the target area is the area corresponding to the part of the virtual environment screen and the area corresponding to the part of the interactive panel area.
  • the user can perform shortcut operations on the left virtual environment screen and the area corresponding to the left interactive panel area. That is, the user can perform shortcut operations in the left area of the user interface 110.
  • Alternatively, the user can perform shortcut operations in the areas corresponding to the right virtual environment screen and the left interactive panel area, that is, in the area excluding the area corresponding to the right interactive panel area and the area corresponding to the left virtual environment screen.
  • Schematically, the user can perform shortcut operations on the movement control and the attack controls, as well as in the area corresponding to the right virtual environment screen.
  • the interactive panel area may not be displayed on the user interface, that is, the interactive panel area may be hidden.
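  • As a rough illustration of the target-area forms above, the following sketch tests whether a touch point belongs to the virtual environment screen but not to the interactive panel area (the first form). The rectangle layout, coordinates, and names are hypothetical assumptions, not the claimed implementation.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

# Hypothetical layout: the virtual environment screen fills the whole user interface,
# and the interactive panel area is the union of several control regions.
environment_screen = Rect(0, 0, 100, 50)
interactive_panels = [
    Rect(5, 30, 20, 15),    # e.g. movement control region
    Rect(80, 30, 15, 15),   # e.g. attack / aiming control region
    Rect(75, 0, 25, 10),    # e.g. minimap region
]

def in_target_area(px: float, py: float) -> bool:
    """Target area: belongs to the virtual environment screen and not to any panel region."""
    if not environment_screen.contains(px, py):
        return False
    return not any(panel.contains(px, py) for panel in interactive_panels)

print(in_target_area(50, 25))  # True: open area between the controls
print(in_target_area(10, 35))  # False: inside the movement control region
```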
  • Fig. 6 shows an interface schematic diagram of a target area on a user interface provided by an exemplary embodiment of the present application.
  • A rectangular coordinate system is established with the center of the user interface 130 as the origin, and the target area 131 is set in the right area of the user interface 130, spanning the first quadrant and the fourth quadrant of the rectangular coordinate system; the shape of the target area 131 is an ellipse.
  • the user performs a double-click operation in the target area 131 to control the virtual object to open the accessory corresponding to the virtual item.
  • The target area 131 may also be any area on the user interface 130 that does not belong to the interactive panel area.
  • For example, the target area 131 is the upper area of the user interface 130, that is, the area extending from the left edge of the user interface 130 to the UI controls on the right; or the target area 131 is the area corresponding to the location of the virtual prop used by the virtual object.
  • the target area 131 is an area corresponding to a virtual weapon (sniper rifle) used by the virtual object.
  • The user performs different shortcut operations in the target area 131 to control the virtual object to perform different activities. For example, the user double-clicks in the target area 131 to control the virtual object to open the accessory corresponding to the virtual prop (for example, the scope corresponding to the sniper rifle used by the virtual object).
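  • For the elliptical target area 131 of FIG. 6, a point-in-ellipse test along these lines could decide whether a double-click falls inside it; the ellipse center and semi-axis values are assumed for illustration only.

```python
def in_elliptical_target_area(px, py,
                              center=(25.0, 0.0),   # ellipse center in the right half (x > 0)
                              semi_axes=(15.0, 12.0)):
    """Return True if (px, py), in UI coordinates with the origin at the screen center,
    lies inside the elliptical target area 131."""
    cx, cy = center
    a, b = semi_axes
    return ((px - cx) / a) ** 2 + ((py - cy) / b) ** 2 <= 1.0

# A double-click landing at (30, 5) is inside the ellipse, so the shortcut applies.
print(in_elliptical_target_area(30, 5))    # True
print(in_elliptical_target_area(-20, 5))   # False: left half of the interface
```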
  • Step 403 Control the virtual object to perform corresponding activities in the virtual environment according to the shortcut operation.
  • Schematically, when the body posture of the virtual object meets the first condition, the virtual object is controlled to adjust its body posture in the virtual environment according to the shortcut operation.
  • the first condition includes that the body posture of the virtual object is a squat state
  • the shortcut operation includes a two-finger vertical sliding operation.
  • Virtual objects can perform actions such as running, jumping, climbing, getting up, and crawling in the virtual environment.
  • Schematically, the body posture of the virtual object in the virtual environment can be lying prone, squatting, standing, sitting, lying, kneeling, and so on.
  • FIG. 7 shows a schematic diagram of an interface for controlling the virtual object to get up provided by an exemplary embodiment of the present application.
  • the user interface 150 displays the upward perspective of the virtual object, that is, the screen corresponding to the sky.
  • The user's two fingers slide upward on the user interface 150 at the same time (as shown by the arrow in FIG. 7); the virtual environment screen in the user interface 150 changes as the user slides, and the proportion of the screen occupied by the sky in the user interface 150 decreases.
  • the virtual object's perspective is head-up.
  • the user interface 150 includes the ground and scenery of the virtual environment. The user controls the virtual object to get up in the virtual environment by two-finger vertical sliding operation.
  • In other embodiments, the user controls the virtual object to jump or climb in the virtual environment through a two-finger vertical sliding operation, or controls the virtual object to get up in the virtual environment through a double-click operation or other shortcut operations when the body posture of the virtual object meets the first condition.
  • This is not limited in this application.
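  • One plausible way to recognize the two-finger vertical sliding operation from raw touch tracks is sketched below; the thresholds and the track format are illustrative assumptions.

```python
def is_two_finger_vertical_slide(touch_tracks, min_distance=8.0, max_horizontal_drift=3.0):
    """touch_tracks: list of (start_xy, end_xy) tuples, one per finger.
    Returns True if exactly two fingers moved mostly vertically by at least min_distance."""
    if len(touch_tracks) != 2:
        return False
    for (sx, sy), (ex, ey) in touch_tracks:
        dx, dy = ex - sx, ey - sy
        if abs(dy) < min_distance or abs(dx) > max_horizontal_drift:
            return False
    return True

# Both fingers slide upward together -> control the prone or squatting object to get up.
tracks = [((40, 40), (41, 20)), ((60, 40), (60, 18))]
if is_two_finger_vertical_slide(tracks):
    print("shortcut: two-finger vertical slide -> get up")
```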
  • Schematically, when the use state of the virtual prop satisfies the second condition, the virtual object is controlled to open the accessory corresponding to the virtual prop in the virtual environment according to the shortcut operation.
  • the second condition includes that the virtual item is in an automatically activated state, and the shortcut operation also includes a double-click operation.
  • The automatic activation state means that the virtual prop can be activated automatically without a trigger operation.
  • the submachine gun will automatically attack or fire when the scope 113 is opened.
  • The virtual weapon 111 used by the virtual object controlled by the user is a submachine gun.
  • When the virtual prop is in the automatic activation state, the user performs a double-click operation in the target area to control the virtual object to open the scope 113 corresponding to the submachine gun in the virtual environment.
  • the submachine gun is displayed in the center of the user interface 110, and the scope 113 is also displayed in the center of the user interface 110.
  • the accessory corresponding to the virtual item may also be a magazine.
  • the virtual weapon is a firearm weapon as an example.
  • The user double-clicks in the target area to control the virtual object to install a magazine on the firearm-type virtual weapon in the virtual environment.
  • In other embodiments, the user can control the virtual object to install the accessory of the virtual prop through a double-click operation, or control the virtual object to open the accessory corresponding to the virtual prop in the virtual environment through a double-click-and-long-press operation or other shortcut operations, which is not limited in this application.
  • Schematically, when the use state of the virtual prop satisfies the third condition, the virtual object is controlled according to the shortcut operation to activate the virtual prop in the virtual environment.
  • the third condition includes that the virtual item is manually activated, and the shortcut operation also includes a double-click and long-press operation.
  • the manual start state means that the virtual item can only be started after the user's trigger operation.
  • the user needs to trigger the attack control to control the virtual object to use the virtual weapon to attack.
  • Fig. 8 shows a schematic diagram of an interface for controlling a virtual object to attack in a related technology provided by an exemplary embodiment of the present application.
  • Two attack controls 103 are displayed in the attack interface 12, which are located in the left area and the right area of the user interface, respectively.
  • the virtual prop used by the user to control the virtual object is a submachine gun, and the user needs to trigger at least one of the two attack controls 103 to control the virtual object to attack (that is, the submachine gun fires).
  • FIG. 9 shows a schematic diagram of an interface for controlling a virtual object to activate a virtual prop according to an exemplary embodiment of the present application.
  • the target area 131 and the attack control 114 are displayed in the attack interface 170.
  • the user can control the virtual object to start the virtual item by double-clicking and long-pressing in the target area 131.
  • In FIG. 9, the virtual prop is a sniper rifle, and the target area 131 is an elliptical area on the right side of the attack interface 170, as an example.
  • When the user performs a double-click-and-long-press operation in the target area 131, the virtual object is controlled to use the sniper rifle to attack (that is, to fire).
  • the user can also control the virtual object to use a sniper rifle to attack by triggering the attack control 114.
  • The user can set the number of attack controls 114 (for example, two attack controls 114 on the user interface), or the application corresponding to the game sets the number of attack controls 114 by default, or the background server intelligently sets the number of attack controls 114 according to the user's usage habits and history records, which is not limited in this application.
  • the user can also adjust the position of the attack control 114 on the user interface, and adjust the position of the attack control 114 on the user interface in real time according to the actual game situation, so as to avoid the position of the attack control 114 from causing visual disturbance to the user.
  • the user can control the virtual object to run continuously through a double-click and long-press operation, or the user can control the virtual object to start virtual items in the virtual environment through a double-click operation or other shortcut operations, which is not limited in this application.
  • Schematically, when the use state of the virtual prop satisfies the fourth condition, the virtual object is controlled to throw the virtual prop in the virtual environment according to the shortcut operation.
  • the fourth condition includes that the virtual object has virtual props, and the shortcut operation also includes a two-finger horizontal sliding operation.
  • the fact that the virtual object possesses a virtual item means that the virtual object is equipped with the virtual item, and the virtual item is located in the backpack compartment of the virtual object, or is being used by the virtual object.
  • The following takes a bomb as an example of the virtual prop.
  • FIG. 10 shows a schematic diagram of an interface for controlling a virtual object to throw a virtual prop according to an exemplary embodiment of the present application, and a bomb 115 owned by the virtual object is displayed in the throwing interface 170.
  • The user's two fingers slide to the right on the target area at the same time, and when the user stops sliding, the virtual object is controlled to throw the bomb it owns.
  • the user can also control the virtual object to throw the bomb by triggering the weapon control 115.
  • the user controls the virtual object to get up and throw a bomb through a two-finger horizontal sliding operation.
  • In other embodiments, the user controls the virtual object to pick up virtual props in the virtual environment or remove a virtual prop assembled on the virtual object through a two-finger vertical sliding operation, or controls the virtual object to throw virtual props in the virtual environment through a two-finger horizontal sliding operation or other shortcut operations, which is not limited in this application.
  • In summary, the user can control the virtual object to perform corresponding activities by performing shortcut operations in the target area, without requiring the user to trigger the UI control corresponding to the activity and without requiring the user to memorize the function and position of each UI control; the virtual object can thus be controlled to perform corresponding activities in the virtual environment according to the shortcut operation.
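  • Bringing the four conditions together, a control routine could map a recognized shortcut operation and the current state of the virtual object to an activity roughly as follows; the operation names, state fields, and rules are assumptions used only to illustrate the dispatch logic, not the claimed implementation.

```python
def handle_shortcut(operation, obj):
    """operation: one of 'double_click', 'double_click_long_press',
    'two_finger_vertical_slide', 'two_finger_horizontal_slide'.
    obj: dict describing the virtual object and the state of its virtual prop."""
    if operation == "two_finger_vertical_slide" and obj["posture"] in ("prone", "squat"):
        return "adjust_posture:get_up"          # first condition: body posture
    if operation == "double_click" and obj["prop_mode"] == "auto":
        return "open_accessory:scope"           # second condition: auto activation state
    if operation == "double_click_long_press" and obj["prop_mode"] == "manual":
        return "activate_prop:fire"             # third condition: manual activation state
    if operation == "two_finger_horizontal_slide" and obj["owns_throwable"]:
        return "throw_prop"                     # fourth condition: owns a throwable prop
    return "ignore"

state = {"posture": "squat", "prop_mode": "auto", "owns_throwable": True}
print(handle_shortcut("double_click", state))   # -> open_accessory:scope
```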
  • Fig. 11 shows a flowchart of a method for controlling a virtual object to open a sight provided by an exemplary embodiment of the present application. This method can be applied to the first terminal 120 or the second terminal 160 in the computer system as shown in FIG. 3 or other terminals in the computer system. The method includes the following steps:
  • Step 1101 select the automatic fire state, and receive the double-click operation.
  • taking a sniper rifle as an example of the virtual prop, the user selects the sniper rifle to be in the automatic firing state, or the user sets the sniper rifle to the automatic firing state; alternatively, the sniper rifle is already in the automatic firing state when the virtual object equips it, so no setting is needed.
  • Step 1102 Determine whether it is a double-click operation.
  • illustratively, the shortcut operation for opening the scope is described by taking a double-click operation as an example.
  • after the user performs the operation, the application corresponding to the game determines whether the operation is a double-click operation.
  • the application obtains the time of the user's first click operation and the time of the second click operation, and when the time interval between the first click operation and the second click operation is less than the time interval threshold, the operation is determined to be a double-click operation.
  • the time interval threshold is 500 milliseconds. When the time interval between the first click operation and the second click operation is less than 500 milliseconds, it is determined that the received operation is a double-click operation.
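  • To make the timing rule concrete, the following is a minimal sketch of double-click detection as described above; the class and function names are hypothetical and not part of the original disclosure, and the 500-millisecond threshold is only the illustrative value used in this example. The same sketch also covers the case where the interval exceeds the threshold, so the later tap simply starts a new click event.
```kotlin
// Minimal sketch of double-click detection (hypothetical names, not from the original disclosure).
// A tap is recorded with its timestamp; two taps form a double-click operation when the
// interval between them is below the threshold, otherwise the later tap starts a new click event.
class DoubleTapDetector(private val intervalThresholdMs: Long = 500) {
    private var lastTapTimeMs: Long? = null

    /** Returns true when this tap completes a double-click operation. */
    fun onTap(tapTimeMs: Long): Boolean {
        val previous = lastTapTimeMs
        return if (previous != null && tapTimeMs - previous < intervalThresholdMs) {
            lastTapTimeMs = null            // consume both taps
            true
        } else {
            lastTapTimeMs = tapTimeMs       // treat this tap as the first click event
            false
        }
    }
}

fun main() {
    val detector = DoubleTapDetector()
    println(detector.onTap(0))      // false: first click
    println(detector.onTap(300))    // true:  300 ms < 500 ms threshold
    println(detector.onTap(1000))   // false: starts a new click event
    println(detector.onTap(2100))   // false: 1100 ms interval, counting restarts
}
```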
  • Step 1103 Determine whether the double-click operation is in the rotation area.
  • in some embodiments, the target area is also referred to as the rotation area; the name of the target area is not limited in this application. Optionally, if the area in which the double-click operation is received lies within the rotation area, the double-click operation is determined to be in the rotation area.
  • the user interface is a rectangular area with a length of 100 units and a width of 50 units.
  • in an example, the rotation area is the rectangular area whose length coordinate is greater than 50 unit lengths and less than 100 unit lengths and whose width spans the full 50 unit lengths, that is, the area on the right side of a boundary line through the center of the user interface. As shown in Figure 6, a rectangular coordinate system is established with the center of the user interface as the origin.
  • the area corresponding to the first and fourth quadrants of the rectangular coordinate system is the rotation area; since the target area 131 lies within the area corresponding to the first and fourth quadrants, the user can perform shortcut operations in the target area 131 to control the virtual object.
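  • The rotation-area check can be sketched as follows, assuming the illustrative 100 x 50 unit interface with the coordinate origin at the center of the screen, so that the right half (quadrants I and IV) is the rotation area; the names in this sketch are hypothetical.
```kotlin
// Hypothetical sketch: checks whether a touch point lies in the rotation area,
// assuming a 100 x 50 unit interface with the origin at the center of the screen,
// so the rotation area is the right half (quadrants I and IV).
data class UiPoint(val x: Float, val y: Float)

fun isInRotationArea(p: UiPoint, uiLength: Float = 100f, uiWidth: Float = 50f): Boolean {
    val insideInterface = p.x >= -uiLength / 2 && p.x <= uiLength / 2 &&
                          p.y >= -uiWidth / 2 && p.y <= uiWidth / 2
    return insideInterface && p.x > 0f   // right of the vertical boundary through the center
}

fun main() {
    println(isInRotationArea(UiPoint(20f, -10f)))  // true: right half
    println(isInRotationArea(UiPoint(-5f, 10f)))   // false: left half
}
```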
  • Step 1104 Perform the scope-opening operation.
  • taking the sniper rifle as an example, when the sniper rifle is in the automatic firing state and the user's double-click operation is received on the target area, the application corresponding to the game controls the virtual object to open the scope of the sniper rifle.
  • Fig. 12 shows a flowchart of a method for controlling a virtual object to close a sight provided by an exemplary embodiment of the present application. This method can be applied to the first terminal 120 or the second terminal 160 in the computer system as shown in FIG. 3 or other terminals in the computer system. The method includes the following steps:
  • Step 1201 Receive the scope-closing operation.
  • illustratively, taking a sniper rifle as an example of the virtual prop, when the user needs to close the scope of the sniper rifle, a shortcut operation is performed in the rotation area on the user interface, and the application closes the scope of the sniper rifle according to the user's shortcut operation.
  • Step 1202 Determine whether it is a double-click operation.
  • in an example, the time interval threshold is 900 milliseconds and the time interval between the user's first click operation and second click operation is 500 milliseconds, which is less than the threshold, so the two click operations are determined to be a double-click operation. Optionally, if the time interval between the first click operation and the second click operation is 1 second, the application records this pair as the first click event, and the user needs to perform two click operations again; the time interval between the two click operations of the second click event is then recalculated.
  • Step 1203 Determine whether the double-click operation is in the rotation area.
  • illustratively, the user interface is 100 unit lengths long and 50 unit lengths wide.
  • in an example, the rotation area is the rectangular area whose length coordinate lies between 20 unit lengths and 30 unit lengths and whose width is 45 unit lengths; if the user performs a double-click operation within this area, the application determines that the double-click operation is in the rotation area.
  • Step 1204 Perform the scope-closing operation.
  • in an example, when the scope of the sniper rifle is open and the user performs a double-click operation in the rotation area, the virtual object is controlled to close the scope of the sniper rifle.
  • Fig. 13 shows a flowchart of a method for controlling a virtual object to fire according to an exemplary embodiment of the present application. This method can be applied to the first terminal 120 or the second terminal 160 in the computer system as shown in FIG. 3 or other terminals in the computer system. The method includes the following steps:
  • Step 1301 select manual firing.
  • the user can set the use mode of the virtual prop to the manual activation mode, or the virtual prop is already in the manual activation mode when the user selects it (the default setting of the virtual prop). The manual activation mode means that the virtual object can be controlled to activate the virtual prop only after the user triggers the corresponding UI control or performs the corresponding operation.
  • illustratively, taking a submachine gun as an example of the virtual prop, the user selects the firing mode of the submachine gun as manual firing, and when the user triggers the firing or attack control, the virtual object is controlled to attack with the submachine gun (that is, the submachine gun fires bullets).
  • Step 1302 Determine whether it is a double-click operation.
  • in an example, the time interval between the user's first click operation and second click operation is 300 milliseconds, which is less than the 500-millisecond time interval threshold, so the two click operations are determined to be a double-click operation.
  • Step 1303 Determine whether the double-click operation is in the rotation area.
  • illustratively, the user interface is 100 unit lengths long and 50 unit lengths wide.
  • in an example, the rotation area is the rectangular area whose length coordinate is greater than 50 unit lengths and less than 100 unit lengths and whose width spans the full 50 unit lengths, that is, the area on the right side of a boundary line through the center of the user interface. The user performs a double-click operation in the rotation area.
  • Step 1304 Determine whether the long press operation is received.
  • after the double-click operation on the target area is received, the user also needs to perform a pressing operation in the target area (that is, the rotation area); when the duration of the pressing operation is greater than the duration threshold, it is determined that a long-press operation is received on the target area.
  • the duration threshold is 200 milliseconds
  • the duration of the user's pressing operation is 300 milliseconds, which is greater than the duration threshold, and it is determined that the pressing operation performed by the user is a long-press operation.
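  • The press-duration check can be sketched as follows; the 200-millisecond threshold is the illustrative value above, and the function name is hypothetical rather than taken from the original disclosure.
```kotlin
// Hypothetical sketch: a pressing operation counts as a long press when its duration
// exceeds the duration threshold (200 ms in the example above).
fun isLongPress(pressDownMs: Long, pressUpMs: Long, durationThresholdMs: Long = 200): Boolean =
    (pressUpMs - pressDownMs) > durationThresholdMs

fun main() {
    println(isLongPress(pressDownMs = 0, pressUpMs = 300))   // true: 300 ms > 200 ms
    println(isLongPress(pressDownMs = 0, pressUpMs = 150))   // false
}
```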
  • Step 1305, perform a firing operation.
  • after the user performs a double-click operation and a long-press operation in the target area, the virtual object is controlled to perform the firing operation according to the double-click-and-long-press operation.
  • in an example, the user performs a double-click-and-long-press operation in the target area to control the virtual object to fire the submachine gun.
  • Step 1306 Determine whether the long-press operation has stopped.
  • illustratively, taking the submachine gun as an example, after the user controls the virtual object to fire the submachine gun, the firing time is adjusted according to the duration of the user's long-press operation; for example, in the firing state, if the user's long-press operation lasts 3 seconds, the submachine gun fires for 3 seconds.
  • Step 1307 Perform a ceasefire operation.
  • when the double-click-and-long-press operation on the target area stops, the virtual object is controlled to deactivate the virtual prop.
  • in some embodiments, the double-click-and-long-press operation is also referred to as a double-click and long-press operation; the name of the shortcut operation is not limited in this application. It can be understood that the double-click-and-long-press operation performed by the user in the target area consists of a double-click operation followed by a long-press operation; after the virtual prop is activated, the duration of the long-press operation is the duration for which the virtual prop remains on, and when the long-press operation stops, the virtual prop is deactivated.
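  • As a sketch of this hold-to-fire behavior, the firing window can be modeled as starting when the double-click-and-long-press gesture is recognized and ending when the press is released, so that the hold duration equals the firing duration; all names here are hypothetical and do not reproduce any specific engine API.
```kotlin
// Hypothetical sketch of hold-to-fire: the virtual prop stays active exactly as long as
// the long press that follows the double-click, so the press duration is the firing duration.
class HoldToFireController {
    private var fireStartMs: Long? = null

    fun onDoubleClickAndPressStart(nowMs: Long) {
        fireStartMs = nowMs                 // start firing when the gesture is recognized
    }

    /** Returns the firing duration in milliseconds when the press is released. */
    fun onPressRelease(nowMs: Long): Long {
        val start = fireStartMs ?: return 0
        fireStartMs = null                  // cease fire
        return nowMs - start
    }
}

fun main() {
    val fire = HoldToFireController()
    fire.onDoubleClickAndPressStart(nowMs = 1_000)
    println(fire.onPressRelease(nowMs = 4_000))  // 3000 ms: the submachine gun fires for 3 seconds
}
```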
  • FIG. 14 shows a flowchart of a method for controlling a virtual object by a two-finger sliding operation provided by an exemplary embodiment of the present application. This method can be applied to the first terminal 120 or the second terminal 160 in the computer system as shown in FIG. 3 or other terminals in the computer system. The method includes the following steps:
  • Step 1401 Receive a two-finger swipe operation.
  • the user performs a two-finger sliding operation on the user interface.
  • the two-finger sliding operation includes a two-finger horizontal sliding operation and a two-finger vertical sliding operation.
  • Step 1402 Determine whether two fingers are on the user interface at the same time.
  • the application determines whether the two touch points corresponding to the user's two fingers are on the user interface at the same time. Optionally, if the two touch points are not on the user interface at the same time, the operation can be determined to be a double-click operation.
  • Step 1403 Determine whether the two fingers are respectively located in the left area and the right area of the user interface.
  • the target area includes a first target area and a second target area
  • the application determines whether the two contact points corresponding to the two fingers are located in the first target area and the second target area, respectively.
  • the contact point corresponding to the user's left hand finger is in the first target area (that is, the left area of the user interface), and the contact point corresponding to the right hand finger is in the second target area (that is, the right area of the user interface).
  • Step 1404 Determine the sliding displacement of the two fingers.
  • the application determines the sliding displacement of the two contact points corresponding to the two fingers on the user interface.
  • optionally, the sliding displacement of the two contact points is a horizontal sliding displacement or a vertical sliding displacement. Horizontal sliding displacement refers to sliding in a direction parallel to the length direction of the user interface, and vertical sliding displacement refers to sliding in a direction parallel to the width direction of the user interface.
  • Step 1404a Determine whether the abscissa displacement of the two-finger sliding reaches the abscissa displacement threshold.
  • a two-finger horizontal sliding operation is taken as an example for description.
  • when the user's two fingers touch the target area on the user interface, the coordinates of the first starting position of the first contact point in the first target area and the coordinates of the second starting position of the second contact point in the second target area are acquired; when the first contact point and the second contact point stop sliding, the coordinates of the first end position of the first contact point in the first target area and the coordinates of the second end position of the second contact point in the second target area are acquired; when the abscissa displacement of the first contact point is greater than the abscissa displacement threshold and the abscissa displacement of the second contact point is greater than the abscissa displacement threshold, it is determined that a two-finger horizontal sliding operation is received on the target area.
  • in an example, the abscissa displacement threshold is two unit lengths, the first starting position coordinates of the first contact point are (-1, 1), the second starting position coordinates are (1, 1), the first end position coordinates of the first contact point are (-4, 1), and the second end position coordinates of the second contact point are (4, 1). The abscissa displacement of the first contact point and the abscissa displacement of the second contact point are both three unit lengths, which is greater than the abscissa displacement threshold (two unit lengths), while the ordinates of the first contact point and the second contact point do not change during the sliding; the application therefore determines that the two-finger sliding operation is a two-finger horizontal sliding operation (a combined sketch of steps 1404a and 1404b is given after step 1404b below).
  • Step 1404b Determine whether the ordinate displacement of the two-finger sliding reaches the ordinate displacement threshold.
  • a two-finger vertical sliding operation is taken as an example for description.
  • when the user's two fingers touch the target area on the user interface, the coordinates of the first starting position of the first contact point in the first target area and the coordinates of the second starting position of the second contact point in the second target area are acquired; when the first contact point and the second contact point stop sliding, the coordinates of the first end position of the first contact point in the first target area and the coordinates of the second end position of the second contact point in the second target area are acquired; when the ordinate displacement of the first contact point is greater than the ordinate displacement threshold and the ordinate displacement of the second contact point is greater than the ordinate displacement threshold, it is determined that a two-finger vertical sliding operation is received on the target area.
  • in an example, the ordinate displacement threshold is two unit lengths, the first starting position coordinates of the first contact point are (-1, 1), the second starting position coordinates are (1, 1), the first end position coordinates of the first contact point are (-1, -3), and the second end position coordinates of the second contact point are (1, -3). The ordinate displacement of the first contact point and the ordinate displacement of the second contact point are both four unit lengths, which is greater than the ordinate displacement threshold (two unit lengths), while the abscissas of the first contact point and the second contact point do not change during the sliding; the application therefore determines that the two-finger sliding operation is a two-finger vertical sliding operation.
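  • Steps 1404a and 1404b can be combined into one classification routine. The sketch below uses the coordinates and the two-unit-length thresholds from the examples above; all names are hypothetical and not part of the original disclosure.
```kotlin
// Hypothetical sketch combining steps 1404a and 1404b: classify a two-finger slide as
// horizontal or vertical from the start/end coordinates of the two contact points.
import kotlin.math.abs

data class Contact(val startX: Float, val startY: Float, val endX: Float, val endY: Float)

enum class TwoFingerSlide { HORIZONTAL, VERTICAL, NONE }

fun classifyTwoFingerSlide(
    first: Contact,
    second: Contact,
    xThreshold: Float = 2f,   // abscissa displacement threshold (two unit lengths)
    yThreshold: Float = 2f    // ordinate displacement threshold (two unit lengths)
): TwoFingerSlide {
    val bothX = abs(first.endX - first.startX) > xThreshold && abs(second.endX - second.startX) > xThreshold
    val bothY = abs(first.endY - first.startY) > yThreshold && abs(second.endY - second.startY) > yThreshold
    return when {
        bothX && !bothY -> TwoFingerSlide.HORIZONTAL   // e.g. throw the bomb
        bothY && !bothX -> TwoFingerSlide.VERTICAL     // e.g. get up from the squatting state
        else -> TwoFingerSlide.NONE                    // displacement below threshold: cancel
    }
}

fun main() {
    // Horizontal example from step 1404a: (-1,1)->(-4,1) and (1,1)->(4,1).
    println(classifyTwoFingerSlide(Contact(-1f, 1f, -4f, 1f), Contact(1f, 1f, 4f, 1f)))
    // Vertical example from step 1404b: (-1,1)->(-1,-3) and (1,1)->(1,-3).
    println(classifyTwoFingerSlide(Contact(-1f, 1f, -1f, -3f), Contact(1f, 1f, 1f, -3f)))
}
```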
  • Step 1405a control the virtual object to throw a bomb.
  • the application program controls the virtual object to throw bombs according to the user's two-finger horizontal sliding operation.
  • Step 1405b control the virtual object to cancel bomb throwing.
  • when the abscissa displacement of either the first contact point or the second contact point is less than the abscissa displacement threshold, the application controls the virtual object to cancel throwing the bomb.
  • optionally, when the application determines that the shortcut operation performed by the user is a two-finger sliding operation but the virtual object does not own the virtual prop (such as a bomb), the virtual object is controlled to cancel the throwing operation.
  • Step 1406 Determine whether the virtual object is in a squat state.
  • the body posture of the virtual object in the virtual environment is a squatting state.
  • Step 1407a control the virtual object to get up.
  • illustratively, when the body posture of the virtual object in the virtual environment is the squatting state, the two-finger vertical sliding operation controls the virtual object to get up in the virtual environment.
  • Step 1407b The virtual object remains in its original state.
  • illustratively, when the body posture of the virtual object in the virtual environment is not the squatting state, for example, when the body posture of the virtual object is the standing state, the virtual object remains standing after the user performs a two-finger vertical sliding operation.
  • optionally, the double-click operation can also control the virtual object to install an accessory corresponding to a virtual prop; the double-click-and-long-press operation can also control the virtual object to perform continuous actions such as running and jumping; the two-finger horizontal sliding operation can also control the virtual object to perform actions such as picking up virtual props, pushing open windows, and opening doors; and the two-finger vertical sliding operation can control the virtual object to perform actions such as squatting, lying down, and rolling. A minimal mapping sketch of these gesture-to-action bindings follows.
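  • The bindings above amount to a mapping from shortcut gestures to activities of the virtual object. The dispatcher below is only a sketch with hypothetical names, and the bindings are taken from the examples in this embodiment; actual bindings may differ per embodiment.
```kotlin
// Hypothetical dispatcher sketch: route a recognized shortcut gesture on the target area
// to an activity of the virtual object, following the example bindings in this embodiment.
enum class ShortcutGesture { DOUBLE_CLICK, DOUBLE_CLICK_LONG_PRESS, TWO_FINGER_HORIZONTAL, TWO_FINGER_VERTICAL }

interface VirtualObject {
    fun toggleScope()       // open/close the scope of an automatically firing weapon
    fun fireWhileHeld()     // fire a manually activated weapon for the hold duration
    fun throwProp()         // throw an owned prop such as a bomb
    fun standUpIfSquatting()
}

fun dispatch(gesture: ShortcutGesture, target: VirtualObject) = when (gesture) {
    ShortcutGesture.DOUBLE_CLICK -> target.toggleScope()
    ShortcutGesture.DOUBLE_CLICK_LONG_PRESS -> target.fireWhileHeld()
    ShortcutGesture.TWO_FINGER_HORIZONTAL -> target.throwProp()
    ShortcutGesture.TWO_FINGER_VERTICAL -> target.standUpIfSquatting()
}

fun main() {
    val logger = object : VirtualObject {
        override fun toggleScope() = println("toggle scope")
        override fun fireWhileHeld() = println("fire while held")
        override fun throwProp() = println("throw prop")
        override fun standUpIfSquatting() = println("stand up")
    }
    dispatch(ShortcutGesture.TWO_FINGER_VERTICAL, logger)  // prints "stand up"
}
```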
  • the foregoing embodiment describes the foregoing method based on an application scenario of a game, and the foregoing method is exemplarily described in an application scenario of military simulation below.
  • Simulation technology is a model technology that uses software and hardware to simulate the real environment through experiments to reflect the behavior or process of the system.
  • the military simulation program is a program specially constructed for military applications by using simulation technology to perform quantitative analysis on combat elements such as sea, land, and air forces, weapon and equipment performance, and combat operations, so as to accurately simulate the battlefield environment, present the battlefield situation, and assist combat-system evaluation and decision-making.
  • soldiers set up a virtual battlefield on the terminal where the military simulation program is located, and compete in a team.
  • Soldiers control virtual objects in the battlefield virtual environment to perform at least one operation of walking, running, climbing, driving, shooting, throwing, reconnaissance, and close combat in the battlefield virtual environment.
  • the virtual environment of the battlefield includes at least one natural form of flat land, mountains, plateaus, basins, deserts, rivers, lakes, oceans, vegetation, and location forms such as buildings, vehicles, ruins, and training grounds.
  • Virtual objects include: virtual characters, virtual animals, cartoon characters, etc. Each virtual object has its own shape and volume in the three-dimensional virtual environment, and occupies a part of the space in the three-dimensional virtual environment.
  • the virtual object a controlled by the soldier A performs corresponding activities in the virtual environment.
  • the target area includes the first target area and the second target area.
  • the fingers of soldier A's two hands are in the first target area and the second target area, respectively.
  • illustratively, the fingers of soldier A's left hand are in the first target area, and the fingers of his right hand are in the second target area.
  • the left-hand and right-hand fingers of soldier A slide upwards in their respective target areas at the same time (as shown by the arrows in Figure 7) to control the virtual object a to get up in the virtual environment.
  • optionally, when the body posture of the virtual object a is the standing state and soldier A performs a two-finger vertical sliding operation in the target area (with the sliding direction opposite to the direction shown in FIG. 7), the virtual object a is controlled to squat in the virtual environment.
  • the virtual prop used by the virtual object a is a sniper rifle, and the sniper rifle is in an automatically activated state.
  • Soldier A performs two click operations on the target area 131 on the user interface of the military simulation program (as shown in Figure 2).
  • if the military simulation program determines that the two click operations constitute a double-click operation and the double-click operation is in the target area 131, soldier A can open the scope corresponding to the sniper rifle through the double-click operation.
  • soldier A can close the scope corresponding to the sniper rifle by double-clicking in the target area.
  • the virtual prop used by the virtual object a is a submachine gun, and the submachine gun is in the manually activated state.
  • Soldier A performs a double-click and long-press operation on the target area 131 on the user interface of the military simulation program (as shown in Figure 6).
  • the military simulation program determines whether the two click operations of soldier A constitute a double-click operation. If they constitute a double-click operation, the military simulation program continues to determine whether the pressing operation of soldier A is a long-press operation; when the duration of soldier A's pressing operation is greater than the duration threshold, it is determined that the shortcut operation on the target area 131 is a double-click-and-long-press operation.
  • the military simulation program controls the virtual object a to start the submachine gun (that is, use the submachine gun to fire, as shown in Figure 9).
  • when soldier A stops the long-press operation, the military simulation program controls the virtual object a to turn off the firing function of the submachine gun, and the duration of the long-press operation is the firing duration of the submachine gun.
  • the virtual prop used by virtual object a is a bomb, and the bomb is owned by virtual object a.
  • Soldier A performs a two-finger horizontal sliding operation on the target area on the user interface of the military simulation program.
  • the target area includes the first target area and the second target area.
  • the fingers of soldier A's two hands are in the first target area and the second target area, respectively.
  • illustratively, soldier A's left-hand finger is in the first target area and his right-hand finger is in the second target area. The left-hand and right-hand fingers of soldier A slide to the right in their respective target areas at the same time (as shown by the arrow in FIG. 10) to control the virtual object a to throw the bomb.
  • optionally, when the virtual object a is in a building in the virtual environment, soldier A can also control the virtual object a to open doors and windows through a two-finger horizontal sliding operation; or, when there are virtual props in the virtual environment, soldier A can also control the virtual object a to pick up the virtual props in the virtual environment through a two-finger horizontal sliding operation.
  • applying the above-mentioned method of controlling virtual objects to a military simulation program can improve combat efficiency and help enhance the degree of cooperation between soldiers.
  • Fig. 15 shows a schematic structural diagram of an apparatus for controlling virtual objects provided by an exemplary embodiment of the present application.
  • the device can be implemented as all or a part of the terminal through software, hardware or a combination of the two.
  • the device includes: a display module 1510, a receiving module 1520, a control module 1530, and an acquisition module 1540.
  • the display module 1510 and the receiving module 1520 are optional modules.
  • the display module 1510 is used to display a user interface.
  • the user interface includes a virtual environment screen and an interactive panel area.
  • the virtual environment screen is a screen for observing the virtual environment from the perspective of a virtual object;
  • the receiving module 1520 is configured to receive shortcut operations on a target area on the user interface, where the target area includes an area that belongs to the virtual environment screen and does not belong to the interactive panel area;
  • the control module 1530 is used to control the virtual object to perform corresponding activities in the virtual environment according to the shortcut operation.
  • the control module 1530 is further configured to control the virtual object to adjust its body posture in the virtual environment according to the shortcut operation when the body posture of the virtual object meets the first condition.
  • the first condition includes that the body posture of the virtual object is the squatting state, and the shortcut operation includes a two-finger vertical sliding operation; the receiving module 1520 is further configured to receive the two-finger vertical sliding operation on the target area; the control module 1530 is further configured to control the virtual object to switch from the squatting state to the standing-up state in the virtual environment according to the two-finger vertical sliding operation when the body posture of the virtual object is the squatting state.
  • the target area includes a first target area and a second target area; the acquisition module 1540 is configured to acquire the coordinates of the first starting position of the first contact point in the first target area and the coordinates of the second starting position of the second contact point in the second target area; the acquisition module 1540 is further configured to acquire, when the first contact point and the second contact point stop sliding, the coordinates of the first end position of the first contact point in the first target area and the coordinates of the second end position of the second contact point in the second target area; the receiving module 1520 is further configured to determine that a two-finger vertical sliding operation is received on the target area when the ordinate displacement of the first contact point is greater than the ordinate displacement threshold and the ordinate displacement of the second contact point is greater than the ordinate displacement threshold.
  • the control module 1530 is further configured to control the virtual object to open the accessory corresponding to the virtual prop in the virtual environment according to the shortcut operation when the use state of the virtual prop meets the second condition; or, the control module 1530 is further configured to control the virtual object to activate the virtual prop in the virtual environment according to the shortcut operation when the use state of the virtual prop meets the third condition; or, the control module 1530 is further configured to control the virtual object to throw the virtual prop in the virtual environment according to the shortcut operation when the use state of the virtual prop meets the fourth condition.
  • the second condition includes that the virtual prop is in the automatically activated state, and the shortcut operation further includes a double-click operation; the receiving module 1520 is further configured to receive a double-click operation on the target area; the control module 1530 is further configured to control the virtual object to open the sight corresponding to the first virtual prop in the virtual environment according to the double-click operation when the first virtual prop is in the automatically activated state.
  • the acquisition module 1540 is further configured to acquire the time of the first click operation and the time of the second click operation on the target area; the receiving module 1520 is further configured to determine that a double-click operation is received on the target area when the time interval between the first click operation and the second click operation is less than the time interval threshold.
  • the third condition includes that the virtual prop is in the manually activated state, and the shortcut operation further includes a double-click-and-long-press operation; the receiving module 1520 is further configured to receive a double-click-and-long-press operation on the target area; the control module 1530 is further configured to control the virtual object to activate the second virtual prop in the virtual environment according to the double-click-and-long-press operation when the second virtual prop is in the manually activated state.
  • the receiving module 1520 is further configured to receive a pressing operation on the target area after receiving a double-click operation on the target area, and to determine that a double-click-and-long-press operation is received on the target area when the duration of the pressing operation is greater than the duration threshold.
  • the control module 1530 is further configured to control the virtual object to deactivate the virtual prop when the double-click-and-long-press operation on the target area stops.
  • the fourth condition includes that the virtual object owns the virtual prop, and the shortcut operation further includes a two-finger horizontal sliding operation; the receiving module 1520 is further configured to receive a two-finger horizontal sliding operation on the target area; the control module 1530 is further configured to control the virtual object to throw the third virtual prop in the virtual environment according to the two-finger horizontal sliding operation when the virtual object owns the third virtual prop.
  • the target area includes a first target area and a second target area; the acquisition module 1540 is further configured to acquire the coordinates of the first starting position of the first contact point in the first target area and the coordinates of the second starting position of the second contact point in the second target area; the acquisition module 1540 is further configured to acquire, when the first contact point and the second contact point stop sliding, the coordinates of the first end position of the first contact point in the first target area and the coordinates of the second end position of the second contact point in the second target area; the receiving module 1520 is further configured to determine that a two-finger horizontal sliding operation is received on the target area when the abscissa displacement of the first contact point is greater than the abscissa displacement threshold and the abscissa displacement of the second contact point is greater than the abscissa displacement threshold.
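  • The module decomposition described for the apparatus can be sketched as plain interfaces; the signatures below are hypothetical, only mirror the responsibilities named above (display, receiving, control, acquisition), and are not taken from any real engine API.
```kotlin
// Hypothetical sketch of the apparatus modules; names and signatures mirror the
// responsibilities described above and are not part of the original disclosure.
data class StartEnd(val startX: Float, val startY: Float, val endX: Float, val endY: Float)

interface DisplayModule {            // module 1510: shows the virtual environment picture and panel area
    fun showUserInterface()
}

interface ReceivingModule {          // module 1520: recognizes shortcut operations on the target area
    fun onShortcutOperation(gestureName: String)
}

interface ControlModule {            // module 1530: drives the virtual object's corresponding activity
    fun performActivity(activityName: String)
}

interface AcquisitionModule {        // module 1540: records contact-point start/end coordinates
    fun contactTrack(contactId: Int): StartEnd?
}
```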
  • FIG. 16 shows a structural block diagram of a computer device 1600 provided by an exemplary embodiment of the present application.
  • the computer device 1600 can be a portable mobile terminal, such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, or an MP4 (Moving Picture Experts Group Audio Layer IV) player.
  • the computer device 1600 may also be called user equipment, a portable terminal, or other names.
  • the computer device 1600 includes a processor 1601 and a memory 1602.
  • the processor 1601 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on.
  • the processor 1601 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array).
  • the processor 1601 may also include a main processor and a coprocessor.
  • the main processor is a processor used to process data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor used to process data in the standby state.
  • the processor 1601 may be integrated with a GPU (Graphics Processing Unit), and the GPU is used to render and draw the content that needs to be displayed on the display screen.
  • the processor 1601 may further include an AI (Artificial Intelligence) processor, and the AI processor is used to process computing operations related to machine learning.
  • the memory 1602 may include one or more computer-readable storage media, which may be tangible and non-transitory.
  • the memory 1602 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices and flash memory storage devices.
  • the non-transitory computer-readable storage medium in the memory 1602 is used to store at least one instruction, and the at least one instruction is to be executed by the processor 1601 to implement the method for controlling a virtual object provided in this application.
  • the computer device 1600 optionally further includes: a peripheral device interface 1603 and at least one peripheral device.
  • the peripheral device includes: at least one of a radio frequency circuit 1604, a touch display screen 1605, a camera 1606, an audio circuit 1607, a positioning component 1608, and a power supply 1609.
  • the peripheral device interface 1603 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 1601 and the memory 1602.
  • in some embodiments, the processor 1601, the memory 1602, and the peripheral device interface 1603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1601, the memory 1602, and the peripheral device interface 1603 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
  • the radio frequency circuit 1604 is used to receive and transmit RF (Radio Frequency, radio frequency) signals, also called electromagnetic signals.
  • the radio frequency circuit 1604 communicates with a communication network and other communication devices through electromagnetic signals.
  • the radio frequency circuit 1604 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals.
  • the radio frequency circuit 1604 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on.
  • the radio frequency circuit 1604 can communicate with other terminals through at least one wireless communication protocol.
  • the wireless communication protocol includes but is not limited to: World Wide Web, Metropolitan Area Network, Intranet, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area network and/or WiFi (Wireless Fidelity, wireless fidelity) network.
  • the radio frequency circuit 1604 may also include NFC (Near Field Communication) related circuits, which is not limited in this application.
  • the touch screen 1605 is used to display UI (User Interface).
  • the UI can include graphics, text, icons, videos, and any combination thereof.
  • the touch display screen 1605 also has the ability to collect touch signals on or above the surface of the touch display screen 1605.
  • the touch signal may be input to the processor 1601 as a control signal for processing.
  • the touch screen 1605 is used to provide virtual buttons and/or virtual keyboards, also called soft buttons and/or soft keyboards.
  • the touch display screen 1605 may be a flexible display screen arranged on the curved surface or the folding surface of the computer device 1600; the touch display screen 1605 may even be set as a non-rectangular irregular shape, that is, an irregularly shaped screen.
  • the touch display screen 1605 can be made of materials such as LCD (Liquid Crystal Display) and OLED (Organic Light-Emitting Diode).
  • the camera assembly 1606 is used to capture images or videos.
  • the camera assembly 1606 includes a front camera and a rear camera.
  • the front camera is used to implement video calls or selfies
  • the rear camera is used to implement photos or videos.
  • the camera assembly 1606 may also include a flash.
  • the flash can be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash refers to a combination of a warm-light flash and a cold-light flash, which can be used for light compensation under different color temperatures.
  • the audio circuit 1607 is used to provide an audio interface between the user and the computer device 1600.
  • the audio circuit 1607 may include a microphone and a speaker.
  • the microphone is used to collect sound waves of the user and the environment, and convert the sound waves into electrical signals and input them to the processor 1601 for processing, or input to the radio frequency circuit 1604 to implement voice communication.
  • the microphone can also be an array microphone or an omnidirectional collection microphone.
  • the speaker is used to convert the electrical signal from the processor 1601 or the radio frequency circuit 1604 into sound waves.
  • the speaker can be a traditional thin-film speaker or a piezoelectric ceramic speaker.
  • when the speaker is a piezoelectric ceramic speaker, it can not only convert the electrical signal into sound waves audible to humans, but also convert the electrical signal into sound waves inaudible to humans for purposes such as distance measurement.
  • the audio circuit 1607 may also include a headphone jack.
  • the positioning component 1608 is used to locate the current geographic location of the computer device 1600 to implement navigation or LBS (Location Based Service, location-based service).
  • the positioning component 1608 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
  • the power supply 1609 is used to supply power to various components in the computer device 1600.
  • the power source 1609 may be alternating current, direct current, disposable batteries, or rechargeable batteries.
  • the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery.
  • a wired rechargeable battery is a battery charged through a wired line
  • a wireless rechargeable battery is a battery charged through a wireless coil.
  • the rechargeable battery can also be used to support fast charging technology.
  • the computer device 1600 further includes one or more sensors 1610.
  • the one or more sensors 1610 include, but are not limited to: an acceleration sensor 1611, a gyroscope sensor 1612, a pressure sensor 1613, a fingerprint sensor 1614, an optical sensor 1615, and a proximity sensor 1616.
  • the acceleration sensor 1611 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established by the computer device 1600.
  • the acceleration sensor 1611 can be used to detect the components of gravitational acceleration on three coordinate axes.
  • the processor 1601 may control the touch screen 1605 to display the user interface in a horizontal view or a vertical view according to the gravity acceleration signal collected by the acceleration sensor 1611.
  • the acceleration sensor 1611 may also be used for the collection of game or user motion data.
  • the gyroscope sensor 1612 can detect the body direction and rotation angle of the computer device 1600, and the gyroscope sensor 1612 can cooperate with the acceleration sensor 1611 to collect the user's 3D actions on the computer device 1600.
  • the processor 1601 can implement the following functions according to the data collected by the gyroscope sensor 1612: motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
  • the pressure sensor 1613 may be arranged on the side frame of the computer device 1600 and/or the lower layer of the touch screen 1605.
  • the pressure sensor 1613 can detect the user's holding signal of the computer device 1600, and perform left and right hand recognition or quick operation according to the holding signal.
  • when the pressure sensor 1613 is arranged at the lower layer of the touch display screen 1605, the operability controls on the UI can be controlled according to the user's pressure operation on the touch display screen 1605.
  • the operability control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.
  • the fingerprint sensor 1614 is used to collect the user's fingerprint to identify the user's identity according to the collected fingerprint. When it is recognized that the user's identity is a trusted identity, the processor 1601 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings.
  • the fingerprint sensor 1614 may be provided on the front, back, or side of the computer device 1600. When the computer device 1600 is provided with a physical button or a manufacturer logo, the fingerprint sensor 1614 can be integrated with the physical button or the manufacturer logo.
  • the optical sensor 1615 is used to collect the ambient light intensity.
  • the processor 1601 may control the display brightness of the touch screen 1605 according to the intensity of the ambient light collected by the optical sensor 1615. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1605 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 1605 is decreased.
  • the processor 1601 may also dynamically adjust the shooting parameters of the camera assembly 1606 according to the ambient light intensity collected by the optical sensor 1615.
  • the proximity sensor 1616, also called a distance sensor, is usually installed on the front of the computer device 1600.
  • the proximity sensor 1616 is used to collect the distance between the user and the front of the computer device 1600.
  • when the proximity sensor 1616 detects that the distance between the user and the front of the computer device 1600 gradually decreases, the processor 1601 controls the touch display screen 1605 to switch from the bright-screen state to the off-screen state; when the proximity sensor 1616 detects that the distance between the user and the front of the computer device 1600 gradually increases, the processor 1601 controls the touch display screen 1605 to switch from the off-screen state to the bright-screen state.
  • the structure shown in FIG. 16 does not constitute a limitation on the computer device 1600, which may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
  • the present application also provides a computer device, including a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the method for controlling a virtual object provided by the foregoing method embodiments.
  • an embodiment of the present application also provides a storage medium, where the storage medium is used to store a computer program, and the computer program is used to execute the method for controlling a virtual object provided in the foregoing embodiment.
  • the embodiments of the present application also provide a computer program product including instructions, which when run on a computer, cause the computer to execute the method for controlling virtual objects provided by the above-mentioned embodiments.
  • the program can be stored in a computer-readable storage medium.
  • the storage medium mentioned can be a read-only memory, a magnetic disk or an optical disk, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method for controlling a virtual object and a related apparatus, the method comprising: displaying a user interface, the user interface comprising a virtual environment picture and an interactive panel area, the virtual environment picture being a picture obtained by observing a virtual environment from the perspective of a virtual object (401); receiving a shortcut operation on a target area on the user interface, the target area comprising an area that belongs to the virtual environment picture and does not belong to the interactive panel area (402); and controlling, according to the shortcut operation, the virtual object to perform a corresponding activity in the virtual environment (403). By setting the target area in the user interface, a user can control the virtual object to perform a corresponding activity by performing a shortcut operation in the target area, without triggering the UI control corresponding to the activity and without memorizing the functions and positions of UI controls, so that the virtual object is controlled, according to the shortcut operation, to perform the corresponding activity in the virtual environment.

Description

控制虚拟对象的方法和相关装置
本申请要求于2019年08月23日提交中国专利局、申请号为201910784863.5、申请名称为“控制虚拟对象的方法、装置、设备及介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及计算机领域,特别涉及控制虚拟对象。
背景技术
在基于三维虚拟环境的应用程序中,如第一人称射击类游戏,用户可以操控虚拟环境中的虚拟对象进行行走、奔跑、攀爬、射击、格斗等活动。当用户想控制虚拟对象进行不同的活动时,需要触发用户界面上对应的用户界面UI控件。
在用户界面上存在虚拟对象进行各类活动对应的UI控件,且各类活动对应的UI控件位于用户界面的不同位置,比如,方向按键位于用户界面的左侧、奔跑按键位于用户界面的右侧等,当虚拟对象进行较多的活动时,用户界面上对应活动的UI控件的数量也将增加,用户需要记住各UI控件对应的活动或功能,同时也需要记住各UI控件对应的位置。
发明内容
本申请实施例提供了一种控制虚拟对象的方法和相关装置,可以解决相关技术中控制虚拟对象进行各类活动时,需要触发活动对应的UI控件,用户无法快速控制虚拟对象进行各类活动的问题。所述技术方案如下:
根据本申请的一个方面,提供了一种控制虚拟对象的方法,所述方法包括:
显示用户界面,所述用户界面包括虚拟环境画面和交互面板区,所述虚拟环境画面是以虚拟对象的视角对虚拟环境进行观察的画面;
接收在所述用户界面上的目标区域上的快捷操作,所述目标区域包括属于所述虚拟环境画面且不属于所述交互面板区的区域;
根据所述快捷操作控制所述虚拟对象在所述虚拟环境中进行对应的活动。
根据本申请的另一方面,提供了一种控制虚拟对象的装置,所述装置包括:
显示模块,用于显示用户界面,所述用户界面包括虚拟环境画面和交互面板区,所述虚拟环境画面是以虚拟对象的视角对虚拟环境进行观察的画面;
接收模块,用于接收在所述用户界面上的目标区域上的快捷操作,所述目标区域包括属于所述虚拟环境画面且不属于所述交互面板区的区域;
控制模块,用于根据所述快捷操作控制所述虚拟对象在所述虚拟环境中进行对应的活动。
根据本申请的另一方面,提供了一种计算机设备,所述计算机设备包括:处理器和存储器,所述存储器中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由所述处理器加载并执行以实现如上方面所述的控制虚拟对象的方法。
根据本申请的又一方面,提供了一种存储介质,所述存储介质用于存储计算机程序,所述计算机程序用于执行以上方面所述的控制虚拟对象的方法。
根据本申请的又一方面,提供了一种包括指令的计算机程序产品,当其在计算机上运行时,使得所述计算机执行以上方面所述的控制虚拟对象的方法。
本申请实施例提供的技术方案带来的有益效果至少包括:
通过在用户界面中设置目标区域,用户可通过在目标区域中进行快捷操作来控制虚拟对象进行对应的活动,无需用户触发活动对应的UI控件,也无需用户记忆UI控件的功能和位置,即可实现根据快捷操作控制虚拟对象在虚拟环境中进行对应的活动。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请一个示例性实施例提供的相关技术中的控制虚拟对象打开瞄准镜的界面示意图;
图2是本申请一个示例性实施例提供的控制虚拟对象打开瞄准镜的界面示意图;
图3是本申请一个示例性实施例提供的实施环境的框图;
图4是本申请一个示例性实施例提供的控制虚拟对象的方法的流程图;
图5是本申请一个示例性实施例提供的虚拟对象的视角对应的摄像机模型 示意图;
图6是本申请一个示例性实施例提供的在用户界面上建立直角坐标系的界面示意图;
图7是本申请一个示例性实施例提供的控制虚拟对象起身的界面示意图;
图8是本申请一个示例性实施例提供的相关技术中控制虚拟对象启动虚拟道具的界面示意图;
图9是本申请一个示例性实施例提供的控制虚拟对象启动虚拟道具的界面示意图;
图10是本申请一个示例性实施例提供的控制虚拟对象投掷虚拟道具的界面示意图;
图11是本申请一个示例性实施例提供的控制虚拟对象打开瞄准镜的方法的流程图;
图12是本申请一个示例性实施例提供的控制虚拟对象关闭瞄准镜的方法的流程图;
图13是本申请一个示例性实施例提供的控制虚拟对象进行开火的方法的流程图;
图14是本申请一个示例性实施例提供的控制虚拟对象起身和投掷虚拟道具的方法的流程图;
图15是本申请一个示例性实施例提供的控制虚拟对象的装置的框图;
图16是本申请一个示例性实施例提供的计算机设备的装置结构示意图。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请实施方式作进一步地详细描述。
首先,对本申请实施例中涉及的名词进行介绍:
虚拟环境:是应用程序在终端上运行时显示(或提供)的虚拟环境。该虚拟环境可以是对真实世界的仿真环境,也可以是半仿真半虚构的环境,还可以是纯虚构的环境。虚拟环境可以是二维虚拟环境、2.5维虚拟环境和三维虚拟环境中的任意一种,本申请对此不加以限定。下述实施例以虚拟环境是三维虚拟环境来举例说明。
虚拟对象:是指虚拟环境中的可活动对象。该可活动对象可以是虚拟人物、虚拟动物、动漫人物等,比如:在三维虚拟环境中显示的人物、动物、植物、油桶、墙壁、石块等。可选地,虚拟对象是基于动画骨骼技术创建的三维立体模型。每个虚拟对象在三维虚拟环境中具有自身的形状和体积,占据三维虚拟环境中的一部分空间。
虚拟道具:是指虚拟对象在虚拟环境中能够使用的道具,虚拟道具包括:虚拟对象使用的虚拟武器及虚拟武器对应的配件、虚拟食物、虚拟药品、衣服、配饰等,本申请实施例以虚拟道具是虚拟武器为例进行说明。虚拟武器包括:手枪、步枪、狙击枪等通用枪械以及弓箭、弩、长矛、匕首、剑、刀、斧子、炸弹、导弹等。
第一人称射击游戏(First-person Shooting,FPS):是指用户能够以第一人称视角进行的射击游戏,游戏中的虚拟环境的画面是以第一虚拟对象的视角对虚拟环境进行观察的画面。在游戏中,至少两个虚拟对象在虚拟环境中进行单局对战模式,虚拟对象通过躲避其他虚拟对象发起的攻击和虚拟环境中存在的危险(比如,毒气圈、沼泽地等)来达到在虚拟环境中存活的目的,当虚拟对象在虚拟环境中的生命值为零时,虚拟对象在虚拟环境中的生命结束,最后存活在虚拟环境中的虚拟对象是获胜方。可选地,该对战以第一个客户端加入对战的时刻作为开始时刻,以最后一个客户端退出对战的时刻作为结束时刻,每个客户端可以控制虚拟环境中的一个或多个虚拟对象。可选地,该对战的竞技模式可以包括单人对战模式、双人小组对战模式或者多人大组对战模式,本申请实施例对对战模式不加以限定。
触发控件:是指一种用户界面UI(User Interface)控件,在应用程序的用户界面上能够看见的任何可视控件或元素,比如,图片、输入框、文本框、按钮、标签等控件,其中一些UI控件响应用户的操作,比如,用户触发手枪对应的攻击控件,则控制虚拟对象使用手枪进行攻击。
本申请中的“装备、携带或装配”虚拟道具指的是虚拟对象拥有的虚拟道具,虚拟对象拥有背包,背包中存在背包格,虚拟道具存放于虚拟对象的背包中,或者,虚拟对象正在使用虚拟道具。
本申请中提供的方法可以应用于虚拟现实应用程序、三维地图程序、军事 仿真程序、第一人称射击游戏(First-person shooting game,FPS)、多人在线战术竞技游戏(Multiplayer Online Battle Arena Games,MOBA)等,下述实施例是以在游戏中的应用来举例说明。
基于虚拟环境的游戏往往由一个或多个游戏世界的地图构成,游戏中的虚拟环境模拟现实世界的场景,用户可以操控游戏中的虚拟对象在虚拟环境中进行行走、跑步、跳跃、射击、格斗、驾驶、投掷、起身、俯卧等动作,交互性较强,并且多个用户可以在线组队进行竞技游戏。在游戏对应的应用程序中,用户界面上设置有UI控件,用户通过触发不同的UI控件控制虚拟对象在虚拟环境中进行不同的活动,比如,用户触发移动功能对应的UI控件(方向按键),控制虚拟对象在虚拟环境中进行移动;虚拟对象正在使用手枪,用户触摸开火功能(使用虚拟武器进行攻击)对应的UI控件(开火按键),则控制虚拟对象在虚拟环境中使用手枪进行攻击。UI控件分布在用户界面上的各个区域,用户需要记忆各UI控件的功能及所在的区域,从而能快速控制虚拟对象进行相应的活动。
在相关技术中提供了一种控制虚拟对象打开瞄准镜的方法,图1示出了本申请一个示例性实施例提供的相关技术中控制虚拟对象打开瞄准镜的界面示意图。如图1的(a)所示,在准备打开瞄准镜界面10上显示有:瞄准控件101、攻击控件103和移动控件104,示意性的,瞄准控件101位于准备打开瞄准镜界面10的右侧区域,位于虚拟环境的对应的小地图的下侧。可选地,瞄准控件101在该界面10上的数量是一个,在一些实施例中,瞄准控件101的数量是两个或更多,本申请对瞄准控件101的数量和位置不加以限定。可选地,用户可对瞄准控件101的位置和数量进行自定义设置。比如,用户设置瞄准控件101的数量是三个,位于准备打开瞄准镜界面10的右侧区域。可选地,在使用枪械类需要瞄准目标的虚拟道具时,用户通过触发瞄准控件101来控制虚拟对象打开瞄准镜105。
攻击控件103分别位于准备打开瞄准镜界面10的左侧区域和右侧区域,位于该界面10的左侧区域的攻击控件103位于移动控件104的左上方。可选地,攻击控件103在该界面10上的数量是两个,在一些实施例中,攻击控件103的数量还可以是一个或三个或更多,本申请对攻击控件103的数量和位置不加以限定。 可选地,用户可对攻击控件103的位置和数量进行自定义设置。比如,用户设置攻击控件103的数量是三个,分别位于准备打开瞄准镜界面10的左侧区域、右侧区域和中间区域。
移动控件104位于准备打开瞄准镜界面10的左侧区域,且位于攻击控件103的右下方。可选地,用户可通过触发移动控件104来控制虚拟对象在虚拟环境中进行移动。
在虚拟对象打开瞄准镜105后,用户界面如图1的(b)所示,在打开瞄准镜界面11上还显示有关闭控件102。该关闭控件102位于原瞄准控件101所在的位置,也即瞄准控件101的显示样式被切换为关闭控件102的显示样式。此时,用户控制虚拟对象打开了所使用的枪械对应的瞄准镜105,该枪械由显示在用户界面左侧区域变成显示在用户界面的中央,瞄准镜105显示在打开瞄准镜界面11的正中央,呈瞄准状态。若用户触发该关闭控件102,则关闭该瞄准镜105,打开瞄准镜界面10转变为如图1的(a)所示的准备打开瞄准镜界面10。
基于上述实施例中提供的方法,用户控制的虚拟对象进行对应的动作时,需要触发对应的UI控件,比如,控制虚拟对象打开瞄准镜105,则需要用户触发瞄准控件101,而在控制虚拟对象停止该动作时,还需要用户触发相应的关闭控件102,步骤较为繁琐,且需要用户记忆各UI控件对应的功能和所在的位置,用户可能因忘记动作对应的UI控件和UI控件对应的位置,而延误虚拟对象做出对应的动作。
本申请提供了一种控制虚拟对象的方法,图2示出了本申请一个示例性实施例提供的控制虚拟对象的界面示意图。
以用户控制虚拟对象在虚拟环境中打开虚拟道具对应的配件为例进行说明。在一个示例中,虚拟对象使用的虚拟道具是狙击枪,该虚拟道具对应的配件是瞄准镜113。相比于图1所示,用户界面上显示有瞄准控件101和关闭控件102,图2中的打开瞄准镜界面110中未显示任何关于打开瞄准镜113的控件。用户可通过快捷操作控制虚拟对象快速打开瞄准镜113,可选地,该快捷操作是双击操作。可选地,在用户界面上设置有目标区域131,该目标区域131包括虚拟环境画面且不属于交互面板区112的区域。示意性的,该目标区域131是用户界面中的右侧区域,且该右侧区域是不属于交互面板区112的区域。可选地, 用户在目标区域131中进行双击操作后,用户界面显示打开瞄准镜界面110,虚拟对象使用的虚拟道具(狙击枪)111的瞄准镜113位于用户界面的正中央,呈瞄准状态,瞄准镜113已打开,与图1的(b)中所示的用户通过点击瞄准控件101来打开瞄准镜105的效果相同。
可选地,当用户需要关闭瞄准镜113时,用户可采用快捷操作关闭瞄准镜113。示意性的,关闭瞄准镜113的快捷操作是双击操作。可选地,关闭瞄准镜113与打开瞄准镜113的目标区域131是同一区域,或不同区域。当用户在目标区域131中进行双击操作时,控制虚拟对象将瞄准镜113关闭,可选地,虚拟对象使用的虚拟道具(狙击枪)转变为在打开瞄准镜113之前的状态。
可选地,用户可设置目标区域的范围,比如,用户设置以用户界面的中心为圆心,半径为10个单位长度的圆形区域是目标区域,只要在该目标区域131范围内进行快捷操作,则用户可控制虚拟对象进行与快捷操作对应的活动。可选地,快捷操作可以是单击操作、双击操作、滑动操作、拖动操作、长按操作、双击长按操作、双指滑动操作中的至少一种。可选地,用户可设置虚拟对象进行活动时对应的活动控件,比如,当用户使用快捷操作控制虚拟对象时,在用户界面中取消显示活动对应的控件,或者,用户设置活动对应的控件的位置和数量等。可以理解的是,用户通过在目标区域131进行不同的快捷操作控制虚拟对象进行对应的活动。
图3示出了本申请一个示例性实施例提供的计算机系统的结构框图。该计算机系统100包括:第一终端120、服务器140和第二终端160。
第一终端120安装和运行有支持虚拟环境的应用程序。该应用程序可以是虚拟现实应用程序、三维地图程序、军事仿真程序、FPS游戏、MOBA游戏、多人枪战类生存游戏中的任意一种。第一终端120是第一用户使用的终端,第一用户使用第一终端120控制位于虚拟环境中的虚拟对象进行活动,该活动包括但不限于:调整身体姿态、爬行、步行、奔跑、骑行、跳跃、驾驶、射击、投掷、使用虚拟道具中的至少一种。示意性的,虚拟对象是虚拟人物,比如仿真人物对象或动漫人物对象。
第一终端120通过无线网络或有线网络与服务器140相连。
服务器140包括一台服务器、多台服务器、云计算平台和虚拟化中心中的 至少一种。示意性的,服务器140包括处理器144和存储器142,存储器142又包括显示模块1421、控制模块1422和接收模块1423。服务器140用于为支持三维虚拟环境的应用程序提供后台服务。可选地,服务器140承担主要计算工作,第一终端120和第二终端160承担次要计算工作;或者,服务器140承担次要计算工作,第一终端120和第二终端160承担主要计算工作;或者,服务器140、第一终端120和第二终端160三者之间采用分布式计算架构进行协同计算。
第二终端160安装和运行有支持虚拟环境的应用程序。该应用程序可以是虚拟现实应用程序、三维地图程序、军事仿真程序、FPS游戏、MOBA游戏、多人枪战类生存游戏中的任意一种。第二终端160是第二用户使用的终端,第二用户使用第二终端160控制位于虚拟环境中的虚拟对象进行活动,该活动包括但不限于:调整身体姿态、爬行、步行、奔跑、骑行、跳跃、驾驶、射击、投掷、使用虚拟道具中的至少一种。示意性的,虚拟对象是虚拟人物,比如仿真人物对象或动漫人物对象。
可选地,第一用户控制的虚拟对象和第二用户控制的虚拟对象处于同一虚拟环境中。可选地,第一用户控制的虚拟对象和第二用户控制的虚拟对象可以属于同一个队伍、同一个组织、具有好友关系或具有临时性的通讯权限。
可选地,第一终端120和第二终端160上安装的应用程序是相同的,或两个终端上安装的应用程序是不同控制系统平台的同一类型应用程序。第一终端120可以泛指多个终端中的一个,第二终端160可以泛指多个终端中的一个,本实施例仅以第一终端120和第二终端160来举例说明。第一终端120和第二终端160的设备类型相同或不同,该设备类型包括:智能手机、平板电脑、电子书阅读器、MP3播放器、MP4播放器、膝上型便携计算机和台式计算机中的至少一种。以下实施例以终端包括智能手机来举例说明。
本领域技术人员可以知晓,上述终端的数量可以更多或更少。比如上述终端可以仅为一个,或者上述终端为几十个或几百个,或者更多数量。本申请实施例对终端的数量和设备类型不加以限定。
图4示出了本申请一个示例性实施例提供的控制虚拟对象的方法的流程图,该方法可应用于如图3所示的计算机系统中的第一终端120或第二终端160中或该计算机系统中的其它终端中。该方法包括如下步骤:
步骤401,显示用户界面,用户界面包括虚拟环境画面和交互面板区,虚拟环境画面是以虚拟对象的视角对虚拟环境进行观察的画面。
可选地,虚拟环境画面是以虚拟对象的视角对虚拟环境进行观察的画面。视角是指以虚拟对象的第一人称视角或者第三人称视角在虚拟环境中进行观察时的观察角度。可选地,本申请的实施例中,视角是在虚拟环境中通过摄像机模型对虚拟对象进行观察时的角度。
可选地,摄像机模型在虚拟环境中对虚拟对象进行自动跟随,即,当虚拟对象在虚拟环境中的位置发生改变时,摄像机模型跟随虚拟对象在虚拟环境中的位置同时发生改变,且该摄像机模型在虚拟环境中始终处于虚拟对象的预设距离范围内。可选地,在自动跟随过程中,摄像头模型和虚拟对象的相对位置不发生变化。
摄像机模型是指在虚拟环境中位于虚拟对象周围的三维模型,当采用第一人称视角时,该摄像机模型位于虚拟对象的头部附近或者位于虚拟对象的头部;当采用第三人称视角时,该摄像机模型可以位于虚拟对象的后方并与虚拟对象进行绑定,也可以位于与虚拟对象相距预设距离的任意位置,通过该摄像机模型可以从不同角度对位于虚拟环境中的虚拟对象进行观察,可选地,该第三人称视角为第一人称的过肩视角时,摄像机模型位于虚拟对象(比如虚拟人物的头肩部)的后方。可选地,除第一人称视角和第三人称视角外,视角还包括其他视角,比如俯视视角;当采用俯视视角时,该摄像机模型可以位于虚拟对象头部的上空,俯视视角是以从空中俯视的角度进行观察虚拟环境的视角。可选地,该摄像机模型在虚拟环境中不会进行实际显示,即,在用户界面显示的虚拟环境中不显示该摄像机模型。
对该摄像机模型位于与虚拟对象相距预设距离的任意位置为例进行说明,可选地,一个虚拟对象对应一个摄像机模型,该摄像机模型可以以虚拟对象为旋转中心进行旋转,如:以虚拟对象的任意一点为旋转中心对摄像机模型进行旋转,摄像机模型在旋转过程中的不仅在角度上有转动,还在位移上有偏移,旋转时摄像机模型与该旋转中心之间的距离保持不变,即,将摄像机模型在以该旋转中心作为球心的球体表面进行旋转,其中,虚拟对象的任意一点可以是虚拟对象的头部、躯干、或者虚拟对象周围的任意一点,本申请实施例对此不 加以限定。可选地,摄像机模型在对虚拟对象进行观察时,该摄像机模型的视角的中心指向为该摄像机模型所在球面的点指向球心的方向。
可选地,该摄像机模型还可以在虚拟对象的不同方向以预设的角度对虚拟对象进行观察。
示意性的,请参考图5,在虚拟对象11中确定一点作为旋转中心12,摄像机模型围绕该旋转中心12进行旋转,可选地,该摄像机模型配置有一个初始位置,该初始位置为虚拟对象后上方的位置(比如脑部的后方位置)。示意性的,如图5所示,该初始位置为位置13,当摄像机模型旋转至位置14或者位置15时,摄像机模型的视角方向随摄像机模型的转动而进行改变。
可选地,虚拟环境画面显示的虚拟环境包括:山川、平地、河流、湖泊、海洋、沙漠、天空、植物、建筑、车辆中的至少一种元素。
可选地,用户界面包括交互面板区112,如图2所示。交互面板区112中设置有控制虚拟对象进行活动的UI控件、发送消息控件、语音控件、表情控件、设置控件等等,上述控件用于用户控制虚拟对象在虚拟环境中进行对应的活动,或者,向同队的队友发送消息(包括:文字形式的消息、语音形式的消息和表情形式的消息),或者,设置虚拟对象在虚拟环境中的动作属性(比如,奔跑时的速度)或设置虚拟武器的属性(比如,枪械的灵敏度、攻击范围、杀伤力等),或者,显示用户控制的虚拟对象在虚拟环境中的位置(示意性的,在用户界面上显示有虚拟环境的缩略地图)。用户通过交互面板区112即时了解虚拟对象当前的状态,并且可随时通过交互面板区112控制虚拟对象进行对应的活动。在一些实施例中,交互面板区的形状是矩形或圆形,或者交互面板区的形状与用户界面上的UI控件的形状对应,本申请对交互面板区的形状不加以限定。
步骤402,接收在用户界面上的目标区域上的快捷操作,目标区域包括属于虚拟环境画面且不属于交互面板区的区域。
可选地,快捷操作包括:双击操作、双击长按操作、双指横向滑动操作、双指竖向滑动操作中的至少一项。
可选地,以虚拟环境画面属于的区域为A区域,交互面板区为B区域为例进行说明,则目标区域包括以下形式:
第一,目标区域是A区域对应的区域。
目标区域包括属于虚拟环境画面且不属于交互面板区的区域。示意性的,如图2所示,用户可在不包括UI控件及小地图所在的区域进行快捷操作,比如,在移动控件和攻击控件之间的区域,该区域属于虚拟环境画面且不包括交互面板区的区域。
第二,目标区域是A区域和B区域对应的区域。
目标区域包括虚拟环境画面和交互面板区对应的区域。示意性的,如图2所示,用户可同时在虚拟环境画面和交互面板区对应的区域进行快捷操作,比如,用户可在UI控件或小地图所在的区域上进行快捷操作,即用户可在用户界面110上的任意位置处进行快捷操作。
第三,目标区域是A区域的部分区域和B区域对应的区域。
目标区域包括部分虚拟环境画面对应的区域和交互面板区域对应的区域。示意性的,如图2所示,用户可在虚拟环境画面的右侧区域和交互面板区进行快捷操作,也即用户可在用户界面110的右侧区域和用户界面110的左侧区域中的UI控件对应的交互面板区进行快捷操作。可选地,部分虚拟环境画面对应的区域可以是虚拟环境画面的左侧区域、右侧区域、上侧区域和下侧区域中的任意一侧区域。
第四,目标区域是B区域的部分区域和A区域对应的区域。
目标区域是虚拟环境画面对应的区域和部分交互面板区对应的区域,示意性的,如图2所示,用户可在虚拟环境画面对应的区域和右侧交互面板区进行快捷操作,也即用户可在不包括用户界面110的左侧交互面板区对应的区域进行快捷操作。可选地,部分交互面板区可以是左侧交互面板区、右侧交互面板区、上侧交互面板区、下侧交互面板区中的任意一侧区域。
第五,目标区域是A区域的部分区域和B区域的部分区域对应的区域。
目标区域是部分虚拟环境画面对应的区域和部分交互面板区对应的区域,示意性的,如图2所示,用户可在左侧虚拟环境画面和左侧交互面板区对应的区域进行快捷操作,也即用户可在用户界面110的左侧区域进行快捷操作。示意性的,用户可在右侧虚拟环境画面和左侧交互面板区对应的区域进行快捷操作,也即用户可在不包括右侧交互面板区对应的区域和左侧虚拟环境画面对应的区域进行快捷操作,用户可在移动控件和攻击控件上进行快捷操作,也可在 右侧虚拟环境画面对应的区域进行快捷操作。
可选地,基于上述目标区域的表现形式,用户界面上可对交互面板区不进行显示,也即隐藏交互面板区。
图6示出了本申请一个示例性实施例提供的用户界面上的目标区域的界面示意图。示意性的,以用户界面130的中心为原点建立直角坐标系,目标区域131设置在用户界面130的右侧区域,在直角坐标系中的第一象限和第四象限之间,该目标区域131的形状是椭圆形。在一个示例中,用户在目标区域131中进行双击操作,则控制虚拟对象打开虚拟道具对应的配件。
可选地,目标区域131还可以是用户界面130上非面板交互区以外的任何区域。比如,用户界面130的上侧区域,该区域是指用户界面130的左侧边缘至右侧UI控件之间对应的区域;或者,目标区域131是虚拟对象使用的虚拟道具所在的位置对应的区域,比如,目标区域131是虚拟对象使用的虚拟武器(狙击枪)对应的区域。
示意性的,用户在目标区域131中进行不同的快捷操作,控制虚拟对象进行不同的活动,比如,用户在目标区域131中进行双击操作,控制虚拟对象打开虚拟道具对应的配件(比如,虚拟对象使用的狙击枪对应的瞄准镜)。
步骤403,根据快捷操作控制虚拟对象在虚拟环境中进行对应的活动。
可选地,当虚拟对象的身体姿态满足第一条件时,根据快捷操作控制虚拟对象在虚拟环境中调整身体姿态。可选地,第一条件包括虚拟对象的身体姿态是下蹲状态,快捷操作包括双指竖向滑动操作。虚拟对象可在虚拟环境中进行奔跑、跳跃、攀爬、起身、匍匐等动作,虚拟对象在虚拟环境中的身体姿态可以是趴下、下蹲、站立、坐、卧、跪等状态,示意性的,以虚拟对象从虚拟环境中起身为例进行说明,图7示出了本申请一个示例性实施例提供的控制虚拟对象起身的界面示意图。
当虚拟对象的身体姿态是下蹲状态时,由于虚拟环境画面是以虚拟对象的视角对虚拟环境进行观察的画面,用户界面150显示的是虚拟对象的仰视视角,也即天空对应的画面。可选地,用户的手指同时在用户界面150上向上滑动(如图7中的箭头所示),随着用户的滑动操作用户界面150中的虚拟环境画面将发生变化,用户界面150中天空对应的画面所占的面积比例将减小,虚拟对象的 视角是平视状态,用户界面150包括虚拟环境的地面和景物,用户通过双指竖向滑动操作控制虚拟对象在虚拟环境中起身。
在一些实施例中,用户通过双指竖向滑动操作控制虚拟对象在虚拟环境中进行跳跃或攀爬,或者,用户通过双击操作或其他快捷操作控制虚拟对象在虚拟环境中起身(当虚拟对象在虚拟环境中的身体姿态是下蹲状态时),本申请对此不加以限定。
可选地,当虚拟道具的使用状态满足第二条件时,根据快捷操作控制虚拟对象在虚拟环境中开启虚拟道具对应的配件。可选地,第二条件包括虚拟道具是自动启动状态,快捷操作还包括双击操作。自动启动状态是指虚拟道具在无需触发操作的情况下,可自动启动,比如,冲锋枪在无需用户触发攻击控件或开火控件时,在瞄准镜113打开时,将自动攻击或开火。以虚拟道具是虚拟武器,以虚拟道具对应的配件是瞄准镜为例进行说明,如图2所示。示意性的,用户控制虚拟对象使用的虚拟武器111是冲锋枪。当虚拟道具是自动启动状态时,用户在目标区域进行双击操作,控制虚拟对象在虚拟环境中开启冲锋枪对应的瞄准镜113。冲锋枪显示在用户界面110的正中央,瞄准镜113也显示在用户界面110的正中央。
可选地,当虚拟道具是虚拟武器时,虚拟道具对应的配件还可以是弹匣。示意性的,以虚拟武器是枪械类的武器为例进行说明,当虚拟武器是自动启动状态时,用户在目标区域进行双击操作,控制虚拟对象在虚拟环境中为枪械类的虚拟武器安装弹匣。
在一些实施例中,用户可通过双击操作控制虚拟对象安装虚拟道具的配件,或者,用户通过双击长按操作或其他快捷操作控制虚拟对象在虚拟环境中打开虚拟道具对应的配件,本申请对此不加以限定。
可选地,当虚拟道具的使用状态满足第三条件时,根据快捷操作控制虚拟对象在虚拟环境中启动虚拟道具。可选地,第三条件包括虚拟道具是手动启动状态,快捷操作还包括双击长按操作。手动启动状态是指虚拟道具需通过用户的触发操作后才能启动,比如,用户需要触发攻击控件后才能控制虚拟对象使用虚拟武器进行攻击。图8示出了本申请一个示例性实施例提供的相关技术中控制虚拟对象进行攻击的界面示意图。在攻击界面12中显示有两个攻击控件 103,分别位于用户界面的左侧区域和右侧区域。示意性的,用户控制虚拟对象使用的虚拟道具是冲锋枪,用户需要触发两个攻击控件103中的至少一个攻击控件来控制虚拟对象进行攻击(也即冲锋枪开火)。
图9示出了本申请一个示例性实施例提供的控制虚拟对象启动虚拟道具的界面示意图,在攻击界面170中显示有目标区域131和攻击控件114。用户可通过在目标区域131中进行双击长按操作控制虚拟对象启动虚拟道具,示意性的,以虚拟道具是狙击枪,目标区域131是攻击界面170右侧的椭圆形区域为例进行说明。当用户在目标区域131中进行双击长按操作时,控制虚拟对象使用狙击枪进行攻击(也即开火)。可选地,用户还可以通过触发攻击控件114控制虚拟对象使用狙击枪进行攻击。可选地,用户可设置攻击控件114的数量,比如,在用户界面上的攻击控件114的数量是两个,或者,游戏对应的应用程序默认设置攻击控件114的数量,或者,后台服务器根据用户的使用习惯、历史记录智能设置攻击控件114的数量,本申请对此不加以限定。可选地,用户还可调整攻击控件114在用户界面的位置,根据实际的游戏情况,实时调整攻击控件114在用户界面的位置,避免攻击控件114的位置对用户造成视线干扰。
在一些实施例中,用户可通过双击长按操作控制虚拟对象持续奔跑,或者,用户通过双击操作或其他快捷操作控制虚拟对象在虚拟环境中启动虚拟道具,本申请对此不加以限定。
可选地,当虚拟道具的使用状态满足第四条件时,根据快捷操作控制虚拟对象在虚拟环境中投掷虚拟道具。可选地,第四条件包括虚拟对象拥有虚拟道具,快捷操作还包括双指横向滑动操作。虚拟对象拥有虚拟道具是指虚拟对象装配了该虚拟道具,该虚拟道具位于虚拟对象的背包格中,或者,正在被虚拟对象使用。示意性的,以虚拟道具是炸弹为例进行说明。
图10示出了本申请一个示例性实施例提供的控制虚拟对象投掷虚拟道具的界面示意图,在投掷界面170中显示有虚拟对象拥有的炸弹115。可选地,用户的手指在目标区域上同时向右滑动,当用户停止滑动时,控制虚拟对象将拥有的炸弹投掷出去。可选地,用户还可通过触发武器控件115控制虚拟对象将炸弹投掷出去。可选地,当虚拟对象在下蹲的状态下,用户通过双指横向滑动操作控制虚拟对象起身,并且将炸弹投掷出去。
在一些实施例中,用户通过双指竖向滑动操作控制虚拟对象拾取虚拟环境中的虚拟道具,或将虚拟对象装配的虚拟道具卸下,或者,用户通过双指横向滑动操作控制或其他快捷操作控制虚拟对象在虚拟环境中投掷虚拟道具,本申请对此不加以限定。
综上所述,通过在用户界面中设置目标区域,用户可通过在目标区域中进行快捷操作来控制虚拟对象进行对应的活动,无需用户触发活动对应的UI控件,也无需用户记忆UI控件的功能和位置,即可实现根据快捷操作控制虚拟对象在虚拟环境中进行对应的活动。
图11示出了本申请一个示例性实施例提供的控制虚拟对象打开瞄准镜的方法的流程图。该方法可应用于如图3所示的计算机系统中的第一终端120或第二终端160中或该计算机系统中的其它终端中。该方法包括如下步骤:
步骤1101,选择自动开火状态,接收双击操作。
示意性的,以虚拟道具是狙击枪为例,用户选择狙击枪的状态是自动开火状态,或者,用户将狙击枪设置为自动开火状态,也即虚拟对象装备狙击枪时,该狙击枪已是自动开火状态,无需设置。
步骤1102,判断是否是双击操作。
示意性的,以打开瞄准镜的快捷操作是双击操作为例进行说明。在用户进行双击操作后,游戏对应的应用程序判断该操作是否为双击操作。可选地,应用程序获取用户第一次点击操作的时间和第二次点击操作的时间,当第一次点击操作和第二次点击操作之间的时间间隔小于时间间隔阈值时,判断该操作为双击操作。示意性的,时间间隔阈值是500毫秒,当第一次点击操作和第二次点击操作之间的时间间隔小于500毫秒时,判断接收到的操作是双击操作。
步骤1103,判断双击操作是否在旋转区域。
在一些实施例中,目标区域也被命名为旋转区域,本申请对目标区域的名称不加以限定。可选地,接收双击操作的区域在旋转区域的范围内,则判断该双击操作在旋转区域中。示意性的,用户界面是长度为100个单位长度,宽度为50个单位长度的矩形区域。在一个示例中,该旋转区域的范围是以长度为大于50个单位长度且小于100个单位长度,宽度为50个单位长度所形成的矩形区域,也即以用户界面的中心为界线的右侧区域。如图6所示,以用户界面的中 心为原点建立直角坐标系,该直角坐标系的第一象限和第四象限对应的区域即为旋转区域,目标区域131在该第一象限和第四象限对应的区域的范围之内,则用户在目标区域131可进行快捷操作控制虚拟对象。
步骤1104,执行开镜操作。
示意性的,以虚拟道具是狙击枪为例,当狙击枪是自动开火状态时,在目标区域接收到用户的双击操作后,游戏对应的应用程序控制虚拟对象开启狙击枪的瞄准镜。
图12示出了本申请一个示例性实施例提供的控制虚拟对象关闭瞄准镜的方法的流程图。该方法可应用于如图3所示的计算机系统中的第一终端120或第二终端160中或该计算机系统中的其它终端中。该方法包括如下步骤:
步骤1201,接收关镜操作。
示意性的,以虚拟道具是狙击枪为例,用户需要关闭狙击枪的瞄准镜时,在用户界面上的旋转区域中进行快捷操作,应用程序根据用户的快捷操作关闭狙击枪的瞄准镜。
步骤1202,判断是否是双击操作。
在一个示例中,时间间隔阈值是900毫秒,用户第一次点击操作和第二次点击操作之间的时间间隔是500毫秒,小于时间间隔阈值,则两次点击操作被判定为双击操作。可选地,若第一次点击操作和第二次点击操作之间的时间间隔是1秒,则应用程序将本次点击操作记为第一次点击事件,用户需重新进行两次点击操作,根据第二次点击事件的两次点击操作,重新计算两次点击操作之间的时间间隔。
步骤1203,判断双击操作是否在旋转区域。
示意性的,用户界面的长度是100个单位长度,宽度是50个单位长度。在一个示例中,旋转区域的范围是以长度是20个单位长度和30个单位长度之间,宽度为45个单位长度所形成的矩形区域。用户在该区域范围之内进行双击操作,则应用程序判断该双击操作在旋转区域。
步骤1204,执行关镜操作。
在一个示例中,在狙击枪的瞄准镜已打开的状态下,用户在旋转区域进行双击操作,则控制虚拟对象将狙击枪的瞄准镜进行关闭。
图13示出了本申请一个示例性实施例提供的控制虚拟对象开火的方法的流程图。该方法可应用于如图3所示的计算机系统中的第一终端120或第二终端160中或该计算机系统中的其它终端中。该方法包括如下步骤:
步骤1301,选择手动开火。
可选地,用户可设置虚拟道具的使用模式是手动启动模式,或者,用户在选择虚拟道具时,该虚拟道具已经是手动启动模式(虚拟道具的默认设置),手动启动模式是指用户需要触发相应的UI控件或进行相应的操作后,才能控制虚拟对象启动虚拟道具。示意性的,以虚拟道具是冲锋枪为例,用户将冲锋枪的开火模式选择为手动开火,在用户触发开火或攻击控件时,控制虚拟对象使用冲锋枪进行攻击(也即冲锋枪发射子弹)。
步骤1302,判断是否是双击操作。
在一个示例中,用户的第一次点击操作和第二次点击操作的时间间隔是300毫秒,小于时间间隔阈值500毫秒,则判断该用户的第一次点击操作和第二次点击操作是双击操作。
步骤1303,判断双击操作是否在旋转区域。
示意性的,用户界面的长度是100个单位长度,宽度是50个单位长度。在一个示例中,旋转区域是长度大于50个单位长度且小于100个单位长度、宽度为50个单位长度所形成的矩形区域,也即以用户界面的中心为界线的右侧区域。用户在该旋转区域内进行双击操作。
步骤1304,判断是否接收了长按操作。
在接收目标区域上的双击操作后,用户还需在目标区域(也即旋转区域)内进行按压操作,当按压操作的时长大于持续时长阈值时,确定目标区域上接收到长按操作。示意性的,持续时长阈值是200毫秒,用户的按压操作的时长是300毫秒,大于持续时长阈值,则判断用户进行的按压操作是长按操作。
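下面给出一段判断双击长按操作的示意代码(并非本申请的正式实现;这里把第二次点击后保持按压视为上文所述的按压操作,属于一种假设的实现方式,阈值取上文示例中的 500 毫秒与 200 毫秒):

```kotlin
// 双击长按判断的示意:先按时间间隔判定双击,再判定按压时长是否超过持续时长阈值
class DoubleTapHoldDetector(
    private val tapIntervalMs: Long = 500L,
    private val holdThresholdMs: Long = 200L,
) {
    private var lastTapTimeMs: Long? = null
    private var doubleTapped = false
    private var pressDownTimeMs: Long? = null

    // 手指按下时调用:判断是否与上一次点击构成双击,并记录按压起始时间
    fun onTapDown(nowMs: Long) {
        val last = lastTapTimeMs
        doubleTapped = last != null && nowMs - last < tapIntervalMs
        lastTapTimeMs = nowMs
        pressDownTimeMs = nowMs
    }

    // 手指仍按在屏幕上时调用;返回 true 表示已构成双击长按
    fun isDoubleTapHold(nowMs: Long): Boolean {
        val down = pressDownTimeMs ?: return false
        return doubleTapped && nowMs - down > holdThresholdMs
    }

    // 手指抬起时调用,结束本次按压
    fun onTapUp() {
        pressDownTimeMs = null
    }
}
```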
步骤1305,执行开火操作。
当用户在目标区域内进行双击操作和长按操作后,根据该双击长按操作控制虚拟对象执行开火操作。在一个示例中,用户在目标区域进行双击长按操作,控制虚拟对象使用冲锋枪进行开火操作。
步骤1306,判断是否停止长按操作。
示意性的,以虚拟道具是冲锋枪为例,当用户控制虚拟对象使用冲锋枪开火后,冲锋枪根据用户的长按操作时长来调整开火的时间,比如,在开火状态下,用户的长按操作持续时长是3秒,则冲锋枪的开火时间是3秒。
步骤1307,执行停火操作。
可选地,在目标区域上的双击长按操作停止时,控制虚拟对象关闭虚拟道具。
在一些实施例中,双击长按操作也被命名为双击和长按操作,本申请对快捷操作的名称不加以限定。可以理解的是,用户在目标区域内进行的双击长按操作是先进行双击操作,然后进行长按操作,在虚拟道具的启动状态下,长按操作的时长是虚拟道具呈开启状态的时长,当停止长按操作时,虚拟道具将被关闭。
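结合上述步骤,双击长按与开火、停火的时序关系可用如下示意代码表达(FireController 类名为假设,实际的开火、停火应由游戏逻辑接口完成,这里仅以打印代替,并非本申请的正式实现):

```kotlin
// 双击长按期间持续开火、松手即停火的示意;开火时长即长按操作的时长
class FireController {
    private var firingSinceMs: Long? = null

    fun onDoubleTapHoldStart(nowMs: Long) {
        firingSinceMs = nowMs
        println("开火")                                   // 对应步骤1305:执行开火操作
    }

    fun onHoldRelease(nowMs: Long) {
        val since = firingSinceMs ?: return
        println("停火,开火时长=${nowMs - since}毫秒")      // 对应步骤1307:执行停火操作
        firingSinceMs = null
    }
}
```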
图14示出了本申请一个示例性实施例提供的双指滑动操作控制虚拟对象的方法的流程图。该方法可应用于如图3所示的计算机系统中的第一终端120或第二终端160中或该计算机系统中的其它终端中。该方法包括如下步骤:
步骤1401,接收双指滑动操作。
用户在用户界面上进行双指滑动操作,可选地,该双指滑动操作包括双指横向滑动操作和双指竖向滑动操作。
步骤1402,判断双指是否同时在用户界面上。
应用程序判断用户的双指对应的双接触点是否同时在用户界面上,可选地,若该双接触点未同时在用户界面上,可判断为双击操作。
步骤1403,判断双指是否分别位于用户界面的左侧区域和右侧区域。
可选地,目标区域包括第一目标区域和第二目标区域,应用程序判断双指对应的双接触点是否分别位于第一目标区域和第二目标区域。可选地,用户左手手指对应的接触点在第一目标区域(也即用户界面的左侧区域),右手手指对应的接触点在第二目标区域(也即用户界面的右侧区域)。
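下面给出一段判断双指是否分别落在用户界面左右两侧目标区域的示意代码(以界面中心竖线划分左右两侧,函数名与参数均为示意用的假设,并非本申请的正式实现):

```kotlin
// 双指的两个接触点需一左一右:一个横坐标小于中心线,另一个大于中心线(顺序不限)
fun isOneFingerEachSide(firstX: Float, secondX: Float, uiCenterX: Float = 0f): Boolean {
    return (firstX < uiCenterX && secondX > uiCenterX) ||
        (firstX > uiCenterX && secondX < uiCenterX)
}
```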
步骤1404,判断双指的滑动位移。
应用程序判断双指对应的双接触点在用户界面上的滑动位移。可选地,双接触点的滑动位移是横向滑动位移或竖向滑动位移。横向滑动位移是指沿与用户界面的长度方向平行的方向进行滑动,竖向滑动位移是指沿与用户界面的宽度方向平行的方向进行滑动。
步骤1404a,判断双指滑动的横坐标位移是否达到横坐标位移阈值。
示意性的,以双指横向滑动操作为例进行说明。可选地,在用户的双指接触用户界面上的目标区域时,获取第一接触点在第一目标区域的第一起始位置坐标和第二接触点在第二目标区域的第二起始位置坐标;当第一接触点和第二接触点停止滑动时,获取第一接触点在第一目标区域的第一终止位置坐标和第二接触点在第二目标区域的第二终止位置坐标;当第一接触点的横坐标位移大于横坐标位移阈值,且,第二接触点的横坐标位移大于横坐标位移阈值时,确定目标区域上接收到双指横向滑动操作。在一个示例中,横坐标位移阈值是两个单位长度,第一接触点的第一起始位置坐标是(-1,1),第二起始位置坐标是(1,1),第一接触点的第一终止位置坐标是(-4,1),第二接触点的第二终止位置坐标是(4,1),第一接触点的横坐标位移和第二接触点的横坐标位移均是三个单位长度,大于横坐标位移阈值(两个单位长度),而第一接触点的纵坐标和第二接触点的纵坐标在接触点滑动的过程中未产生位移,则应用程序判断该双指滑动操作是双指横向滑动操作。
步骤1404b,判断双指滑动纵坐标是否达到纵坐标位移阈值。
示意性的,以双指竖向滑动操作为例进行说明。可选地,在用户的双指接触用户界面上的目标区域时,获取第一接触点在第一目标区域的第一起始位置坐标和第二接触点在第二目标区域的第二起始位置坐标;当第一接触点和第二接触点停止滑动时,获取第一接触点在第一目标区域的第一终止位置坐标和第二接触点在第二目标区域的第二终止位置坐标;当第一接触点的纵坐标位移大于纵坐标位移阈值,且,第二接触点的纵坐标位移大于纵坐标位移阈值时,确定目标区域上接收到双指竖向滑动操作。在一个示例中,纵坐标位移阈值是两个单位长度,第一接触点的第一起始位置坐标是(-1,1),第二接触点的第二起始位置坐标是(1,1),第一接触点的第一终止位置坐标是(-1,-3),第二接触点的第二终止位置坐标是(1,-3),第一接触点的纵坐标位移和第二接触点的纵坐标位移均是四个单位长度,大于纵坐标位移阈值(两个单位长度),而第一接触点的横坐标和第二接触点的横坐标在接触点滑动的过程中未产生位移,则应用程序判断该双指滑动操作是双指竖向滑动操作。
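上述双指横向滑动与双指竖向滑动的判断逻辑可以用同一段示意代码概括(两个单位长度的位移阈值沿用上文示例,Contact、classifyTwoFingerSwipe 等命名为假设,并非本申请的正式实现):

```kotlin
import kotlin.math.abs

// 一个接触点的起始坐标与终止坐标
data class Contact(val startX: Float, val startY: Float, val endX: Float, val endY: Float)

enum class TwoFingerSwipe { HORIZONTAL, VERTICAL, NONE }

fun classifyTwoFingerSwipe(first: Contact, second: Contact, threshold: Float = 2f): TwoFingerSwipe {
    val firstDx = abs(first.endX - first.startX)
    val secondDx = abs(second.endX - second.startX)
    val firstDy = abs(first.endY - first.startY)
    val secondDy = abs(second.endY - second.startY)
    return when {
        // 两个接触点的横坐标位移均大于阈值 → 双指横向滑动(对应步骤1404a)
        firstDx > threshold && secondDx > threshold -> TwoFingerSwipe.HORIZONTAL
        // 两个接触点的纵坐标位移均大于阈值 → 双指竖向滑动(对应步骤1404b)
        firstDy > threshold && secondDy > threshold -> TwoFingerSwipe.VERTICAL
        else -> TwoFingerSwipe.NONE
    }
}
```

例如,将上文横向示例中的起止坐标代入返回 HORIZONTAL,将竖向示例中的起止坐标代入返回 VERTICAL。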
步骤1405a,控制虚拟对象投掷炸弹。
可选地,应用程序根据用户进行双指横向滑动操作,控制虚拟对象投掷炸弹。
步骤1405b,控制虚拟对象取消投掷炸弹。
可选地,当第一接触点和第二接触点中任意一个接触点的横坐标位移小于横坐标位移阈值时,应用程序控制虚拟对象取消投掷炸弹。可选地,当应用程序判断用户进行的快捷操作是双指滑动操作时,若用户未拥有该虚拟道具(如,炸弹),则控制虚拟对象取消投掷炸弹。
步骤1406,判断虚拟对象是否是下蹲状态。
示意性的,虚拟对象在虚拟环境中的身体姿态是下蹲状态。
步骤1407a,控制虚拟对象起身。
示意性的,当虚拟对象在虚拟环境中的身体姿态是下蹲状态时,则双指竖向滑动操作控制虚拟对象在虚拟环境中起身。
步骤1407b,虚拟对象保持原状态。
示意性的,当虚拟对象在虚拟环境中的身体姿态不是下蹲状态时,比如,虚拟对象的身体姿态是站立状态,则在用户进行双指竖向滑动操作后,虚拟对象仍保持站立状态。
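步骤1406至步骤1407b描述的姿态切换逻辑可用如下示意代码概括(仅覆盖下蹲与站立两种姿态,枚举与函数命名均为示意用的假设,并非本申请的正式实现):

```kotlin
// 双指竖向滑动时的姿态处理:下蹲则起身(步骤1407a),否则保持原状态(步骤1407b)
enum class Posture { CROUCHING, STANDING }

fun onTwoFingerVerticalSwipe(current: Posture): Posture =
    when (current) {
        Posture.CROUCHING -> Posture.STANDING
        Posture.STANDING -> current
    }
```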
可选地,双击操作还可控制虚拟对象安装虚拟道具对应的配件;双击长按操作还可控制虚拟对象进行持续奔跑、跳跃等动作;双指横向滑动操作还可控制虚拟对象进行拾取虚拟道具、推开窗户、打开门等动作;双指竖向滑动操作可控制虚拟对象进行下蹲、趴下、翻滚等动作。
上述实施例是基于游戏的应用场景对上述方法进行描述,下面以军事仿真的应用场景对上述方法进行示例性说明。
仿真技术是指应用软件和硬件,通过模拟真实环境的实验来反映系统行为或过程的模型技术。
军事仿真程序是利用仿真技术针对军事应用专门构建的程序,对海、陆、空等作战元素、武器装备性能以及作战行动等进行量化分析,进而精确模拟战场环境,呈现战场态势,实现作战体系的评估和决策的辅助。
在一个示例中,士兵在军事仿真程序所在的终端建立一个虚拟的战场,并以组队的形式进行对战。士兵控制战场虚拟环境中的虚拟对象在战场虚拟环境下进行行走、奔跑、攀爬、驾驶、射击、投掷、侦查、近身格斗等动作中的至少一种操作。战场虚拟环境包括:平地、山川、高原、盆地、沙漠、河流、湖泊、海洋、植被中的至少一种自然形态,以及建筑物、车辆、废墟、训练场等地点形态。虚拟对象包括:虚拟人物、虚拟动物、动漫人物等,每个虚拟对象在三维虚拟环境中具有自身的形状和体积,占据三维虚拟环境中的一部分空间。
基于上述情况,在一个示例中,士兵A控制的虚拟对象a在虚拟环境中进行对应的活动。
当虚拟对象a在虚拟环境中的身体姿态是下蹲状态时,如图7所示,虚拟对象a的视角是仰视,虚拟环境画面是天空。士兵A在军事仿真程序的用户界面上的目标区域进行双指竖向滑动操作,可选地,目标区域包括第一目标区域和第二目标区域,士兵A的双手手指分别在第一目标区域和第二目标区域,示意性的,士兵A的左手手指在第一目标区域,右手手指在第二目标区域。士兵A的左手手指和右手手指同时在各自的目标区域中向上滑动(如图7中的箭头所示),控制虚拟对象a在虚拟环境起身。可选地,当虚拟对象a在虚拟环境中的身体姿态是站立状态时,士兵A在目标区域进行双指竖向滑动操作(滑动方向与图7所示的方向相反),则控制虚拟对象a在虚拟环境中进行下蹲。
示意性的,虚拟对象a使用的虚拟道具是狙击枪,该狙击枪是自动启动状态。士兵A在军事仿真程序的用户界面上的目标区域131进行两次点击操作(如图2所示),当第一次点击操作和第二次点击操作之间的时间间隔小于时间间隔阈值时,军事仿真程序判断两次点击操作是双击操作,该双击操作在目标区域131中,则士兵A可通过双击操作开启狙击枪对应的瞄准镜。可选地,在开启瞄准镜后,士兵A可通过在目标区域双击操作关闭狙击枪对应的瞄准镜。
示意性的,虚拟对象a使用的虚拟道具是冲锋枪,该冲锋枪是手动启动状态。士兵A在军事仿真程序的用户界面上的目标区域131进行双击长按操作(如图6所示),军事仿真程序判断士兵A的两次点击操作是否是双击操作,若两次点击操作是双击操作,军事仿真程序继续判断士兵A的按压操作是否是长按操作,当士兵A的按压操作的时长大于持续时长阈值时,判断在目标区域131上的快捷操作是双击长按操作。军事仿真程序根据该双击长按操作,控制虚拟对象a启动冲锋枪(即使用冲锋枪开火,如图9所示)。可选地,当士兵A停止长按操作时,军事仿真程序控制虚拟对象a关闭冲锋枪的开火功能,长按操作的时长是冲锋枪的开火时长。
示意性的,虚拟对象a使用的虚拟道具是炸弹,该炸弹是虚拟对象a拥有的。士兵A在军事仿真程序的用户界面上的目标区域进行双指横向滑动操作,可选地,目标区域包括第一目标区域和第二目标区域,士兵A的双手手指分别在第一目标区域和第二目标区域,示意性的,士兵A的左手手指在第一目标区域,右手手指在第二目标区域。士兵A的左手手指和右手手指同时在各自的目标区域中向右滑动(如图10中的箭头所示),控制虚拟对象a投掷炸弹。可选地,当虚拟对象a在虚拟环境中的建筑物内,士兵A还可通过双指横向滑动操作控制虚拟对象a打开门窗;或者,当虚拟环境中存在有虚拟物品时,士兵A还可通过双指横向滑动操作控制虚拟对象a拾取虚拟环境中的虚拟物品。
综上所述,本申请实施例中,将上述控制虚拟对象的方法应用在军事仿真程序中,能够提高作战效率,有益于增强士兵之间的配合程度。
以下为本申请的装置实施例,对于装置实施例中未详细描述的细节,可以结合参考上述方法实施例中相应的记载,本文不再赘述。
图15示出了本申请的一个示例性实施例提供的控制虚拟对象的装置的结构示意图。该装置可以通过软件、硬件或者两者的结合实现成为终端的全部或一部分,该装置包括:显示模块1510、接收模块1520、控制模块1530和获取模块1540,其中,显示模块1510和接收模块1520是可选的模块。
显示模块1510,用于显示用户界面,用户界面包括虚拟环境画面和交互面板区,虚拟环境画面是以虚拟对象的视角对虚拟环境进行观察的画面;
接收模块1520,用于接收在用户界面上的目标区域上的快捷操作,目标区域包括属于虚拟环境画面且不属于交互面板区的区域;
控制模块1530,用于根据快捷操作控制虚拟对象在虚拟环境中进行对应的活动。
在一个可选的实施例中,所述控制模块1530,还用于当虚拟对象的身体姿态满足第一条件时,根据快捷操作控制虚拟对象在虚拟环境中调整身体姿态。
在一个可选的实施例中,第一条件包括虚拟对象的身体姿态是下蹲状态; 快捷操作包括双指竖向滑动操作;所述接收模块1520,还用于接收目标区域上的双指竖向滑动操作;所述控制模块1530,还用于当虚拟对象的身体姿态是下蹲状态时,根据双指竖向滑动操作控制虚拟对象在虚拟环境中由下蹲状态切换为起身状态。
在一个可选的实施例中,目标区域包括第一目标区域和第二目标区域;获取模块1540,用于获取第一接触点在第一目标区域的第一起始位置坐标和第二接触点在第二目标区域的第二起始位置坐标;所述获取模块1540,还用于当第一接触点和第二接触点停止滑动时,获取第一接触点在第一目标区域的第一终止位置坐标和第二接触点在第二目标区域的第二终止位置坐标;所述接收模块1520,还用于当第一接触点的纵坐标位移大于纵坐标位移阈值,且,第二接触点的纵坐标位移大于纵坐标位移阈值时,确定目标区域上接收到双指竖向滑动操作。
在一个可选的实施例中,所述控制模块1530,还用于当虚拟道具的使用状态满足第二条件时,根据快捷操作控制虚拟对象在虚拟环境中开启虚拟道具对应的配件;或,所述控制模块1530,还用于当虚拟道具的使用状态满足第三条件时,根据快捷操作控制虚拟对象在虚拟环境中启动虚拟道具;或,所述控制模块1530,还用于当虚拟道具的使用状态满足第四条件时,根据快捷操作控制虚拟对象在虚拟环境中投掷虚拟道具。
在一个可选的实施例中,第二条件包括虚拟道具是自动启动状态;快捷操作还包括双击操作;所述接收模块1520,还用于接收目标区域上的双击操作;所述控制模块1530,还用于当第一虚拟道具是自动启动状态时,根据双击操作控制虚拟对象在虚拟环境中开启第一虚拟道具对应的瞄准镜。
在一个可选的实施例中,所述获取模块1540,还用于获取在目标区域上的第一次点击操作的时间和第二次点击操作的时间;所述接收模块1520,还用于当第一次点击操作和第二次点击操作之间的时间间隔小于时间间隔阈值时,确定目标区域上接收到双击操作。
在一个可选的实施例中,第三条件包括虚拟道具是手动启动状态;快捷操作还包括双击长按操作;所述接收模块1520,还用于接收目标区域上的双击长按操作;所述控制模块1530,还用于当第二虚拟道具是手动开启状态时,根据双击长按操作控制虚拟对象在虚拟环境中启动第二虚拟道具。
在一个可选的实施例中,所述接收模块1520,还用于在接收目标区域上的双击操作后,接收在目标区域上的按压操作;当按压操作的时长大于持续时长阈值时,确定目标区域上接收到双击长按操作。
在一个可选的实施例中,所述控制模块1530,还用于在目标区域上的双击长按操作停止时,控制虚拟对象关闭虚拟道具。
在一个可选的实施例中,第四条件包括虚拟对象拥有虚拟道具;快捷操作还包括双指横向滑动操作;所述接收模块1520,还用于接收目标区域上的双指横向滑动操作;所述控制模块1530,还用于当虚拟对象拥有第三虚拟道具时,根据双指横向滑动操作控制虚拟对象在虚拟环境中投掷第三虚拟道具。
在一个可选的实施例中,目标区域包括第一目标区域和第二目标区域;获取模块1540,还用于获取第一接触点在第一目标区域的第一起始位置坐标和第二接触点在第二目标区域的第二起始位置坐标;所述获取模块1540,还用于当第一接触点和第二接触点停止滑动时,获取第一接触点在第一目标区域的第一终止位置坐标和第二接触点在第二目标区域的第二终止位置坐标;所述接收模块1520,还用于当第一接触点的横坐标位移大于横坐标位移阈值,且,第二接触点的横坐标位移大于横坐标位移阈值时,确定目标区域上接收到双指横向滑动操作。
请参考图16,其示出了本申请一个示例性实施例提供的计算机设备1600的结构框图。该计算机设备1600可以是便携式移动终端,比如:智能手机、平板电脑、MP3播放器(Moving Picture Experts Group Audio Layer III,动态影像专家压缩标准音频层面3)、MP4(Moving Picture Experts Group Audio Layer IV,动态影像专家压缩标准音频层面4)播放器。计算机设备1600还可能被称为用户设备、便携式终端等其他名称。
通常,计算机设备1600包括有:处理器1601和存储器1602。
处理器1601可以包括一个或多个处理核心,比如4核心处理器、8核心处理器等。处理器1601可以采用DSP(Digital Signal Processing,数字信号处理)、FPGA(Field-Programmable Gate Array,现场可编程门阵列)、PLA(Programmable Logic Array,可编程逻辑阵列)中的至少一种硬件形式来实现。处理器1601也可以包括主处理器和协处理器,主处理器是用于对在唤醒状态下的数据进行处理的处理器,也称CPU(Central Processing Unit,中央处理器);协处理器是用于对在待机状态下的数据进行处理的低功耗处理器。在一些实施例中,处理器1601可以集成有GPU(Graphics Processing Unit,图像处理器),GPU用于负责显示屏所需要显示的内容的渲染和绘制。在一些实施例中,处理器1601还可以包括AI(Artificial Intelligence,人工智能)处理器,该AI处理器用于处理有关机器学习的计算操作。
存储器1602可以包括一个或多个计算机可读存储介质,该计算机可读存储介质可以是有形的和非暂态的。存储器1602还可包括高速随机存取存储器,以及非易失性存储器,比如一个或多个磁盘存储设备、闪存存储设备。在一些实施例中,存储器1602中的非暂态的计算机可读存储介质用于存储至少一个指令,该至少一个指令用于被处理器1601所执行以实现本申请中提供的控制虚拟对象的方法。
在一些实施例中,计算机设备1600还可选包括有:外围设备接口1603和至少一个外围设备。具体地,外围设备包括:射频电路1604、触摸显示屏1605、摄像头组件1606、音频电路1607、定位组件1608和电源1609中的至少一种。
外围设备接口1603可被用于将I/O(Input/Output,输入/输出)相关的至少一个外围设备连接到处理器1601和存储器1602。在一些实施例中,处理器1601、存储器1602和外围设备接口1603被集成在同一芯片或电路板上;在一些其他实施例中,处理器1601、存储器1602和外围设备接口1603中的任意一个或两个可以在单独的芯片或电路板上实现,本实施例对此不加以限定。
射频电路1604用于接收和发射RF(Radio Frequency,射频)信号,也称电磁信号。射频电路1604通过电磁信号与通信网络以及其他通信设备进行通信。射频电路1604将电信号转换为电磁信号进行发送,或者,将接收到的电磁信号转换为电信号。可选地,射频电路1604包括:天线系统、RF收发器、一个或多个放大器、调谐器、振荡器、数字信号处理器、编解码芯片组、用户身份模块卡等等。射频电路1604可以通过至少一种无线通信协议来与其它终端进行通信。该无线通信协议包括但不限于:万维网、城域网、内联网、各代移动通信网络(2G、3G、4G及5G)、无线局域网和/或WiFi(Wireless Fidelity,无线保真)网络。在一些实施例中,射频电路1604还可以包括NFC(Near Field Communication,近距离无线通信)有关的电路,本申请对此不加以限定。
触摸显示屏1605用于显示UI(User Interface,用户界面)。该UI可以包括图形、文本、图标、视频及它们的任意组合。触摸显示屏1605还具有采集在触摸显示屏1605的表面或表面上方的触摸信号的能力。该触摸信号可以作为控制信号输入至处理器1601进行处理。触摸显示屏1605用于提供虚拟按钮和/或虚拟键盘,也称软按钮和/或软键盘。在一些实施例中,触摸显示屏1605可以为一个,设置在计算机设备1600的前面板;在另一些实施例中,触摸显示屏1605可以为至少两个,分别设置在计算机设备1600的不同表面或呈折叠设计;在再一些实施例中,触摸显示屏1605可以是柔性显示屏,设置在计算机设备1600的弯曲表面上或折叠面上。甚至,触摸显示屏1605还可以设置成非矩形的不规则图形,也即异形屏。触摸显示屏1605可以采用LCD(Liquid Crystal Display,液晶显示器)、OLED(Organic Light-Emitting Diode,有机发光二极管)等材质制备。
摄像头组件1606用于采集图像或视频。可选地,摄像头组件1606包括前置摄像头和后置摄像头。通常,前置摄像头用于实现视频通话或自拍,后置摄像头用于实现照片或视频的拍摄。在一些实施例中,后置摄像头为至少两个,分别为主摄像头、景深摄像头、广角摄像头中的任意一种,以实现主摄像头和景深摄像头融合实现背景虚化功能,主摄像头和广角摄像头融合实现全景拍摄以及VR(Virtual Reality,虚拟现实)拍摄功能。在一些实施例中,摄像头组件1606还可以包括闪光灯。闪光灯可以是单色温闪光灯,也可以是双色温闪光灯。双色温闪光灯是指暖光闪光灯和冷光闪光灯的组合,可以用于不同色温下的光线补偿。
音频电路1607用于提供用户和计算机设备1600之间的音频接口。音频电路1607可以包括麦克风和扬声器。麦克风用于采集用户及环境的声波,并将声波转换为电信号输入至处理器1601进行处理,或者输入至射频电路1604以实现语音通信。出于立体声采集或降噪的目的,麦克风可以为多个,分别设置在计算机设备1600的不同部位。麦克风还可以是阵列麦克风或全向采集型麦克风。扬声器则用于将来自处理器1601或射频电路1604的电信号转换为声波。扬声器可以是传统的薄膜扬声器,也可以是压电陶瓷扬声器。当扬声器是压电陶瓷扬声器时,不仅可以将电信号转换为人类可听见的声波,也可以将电信号转换为人类听不见的声波以进行测距等用途。在一些实施例中,音频电路1607还可以包括耳机插孔。
定位组件1608用于定位计算机设备1600的当前地理位置,以实现导航或LBS(Location Based Service,基于位置的服务)。定位组件1608可以是基于美国的GPS(Global Positioning System,全球定位系统)、中国的北斗系统或欧盟的伽利略系统的定位组件。
电源1609用于为计算机设备1600中的各个组件进行供电。电源1609可以是交流电、直流电、一次性电池或可充电电池。当电源1609包括可充电电池时,该可充电电池可以是有线充电电池或无线充电电池。有线充电电池是通过有线线路充电的电池,无线充电电池是通过无线线圈充电的电池。该可充电电池还可以用于支持快充技术。
在一些实施例中,计算机设备1600还包括有一个或多个传感器1610。该一个或多个传感器1610包括但不限于:加速度传感器1611、陀螺仪传感器1612、压力传感器1613、指纹传感器1614、光学传感器1615以及接近传感器1616。
加速度传感器1611可以检测以计算机设备1600建立的坐标系的三个坐标轴上的加速度大小。比如,加速度传感器1611可以用于检测重力加速度在三个坐标轴上的分量。处理器1601可以根据加速度传感器1611采集的重力加速度信号,控制触摸显示屏1605以横向视图或纵向视图进行用户界面的显示。加速度传感器1611还可以用于游戏或者用户的运动数据的采集。
陀螺仪传感器1612可以检测计算机设备1600的机体方向及转动角度,陀螺仪传感器1612可以与加速度传感器1611协同采集用户对计算机设备1600的3D动作。处理器1601根据陀螺仪传感器1612采集的数据,可以实现如下功能:动作感应(比如根据用户的倾斜操作来改变UI)、拍摄时的图像稳定、游戏控制以及惯性导航。
压力传感器1613可以设置在计算机设备1600的侧边框和/或触摸显示屏1605的下层。当压力传感器1613设置在计算机设备1600的侧边框时,可以检测用户对计算机设备1600的握持信号,根据该握持信号进行左右手识别或快捷操作。当压力传感器1613设置在触摸显示屏1605的下层时,可以根据用户对触摸显示屏1605的压力操作,实现对UI界面上的可操作性控件进行控制。可操作性控件包括按钮控件、滚动条控件、图标控件、菜单控件中的至少一种。
指纹传感器1614用于采集用户的指纹,以根据采集到的指纹识别用户的身份。在识别出用户的身份为可信身份时,由处理器1601授权该用户执行相关的敏感操作,该敏感操作包括解锁屏幕、查看加密信息、下载软件、支付及更改设置等。指纹传感器1614可以被设置在计算机设备1600的正面、背面或侧面。当计算机设备1600上设置有物理按键或厂商Logo时,指纹传感器1614可以与物理按键或厂商Logo集成在一起。
光学传感器1615用于采集环境光强度。在一个实施例中,处理器1601可以根据光学传感器1615采集的环境光强度,控制触摸显示屏1605的显示亮度。具体地,当环境光强度较高时,调高触摸显示屏1605的显示亮度;当环境光强度较低时,调低触摸显示屏1605的显示亮度。在另一个实施例中,处理器1601还可以根据光学传感器1615采集的环境光强度,动态调整摄像头组件1606的拍摄参数。
接近传感器1616,也称距离传感器,通常设置在计算机设备1600的正面。接近传感器1616用于采集用户与计算机设备1600的正面之间的距离。在一个实施例中,当接近传感器1616检测到用户与计算机设备1600的正面之间的距离逐渐变小时,由处理器1601控制触摸显示屏1605从亮屏状态切换为息屏状态;当接近传感器1616检测到用户与计算机设备1600的正面之间的距离逐渐变大时,由处理器1601控制触摸显示屏1605从息屏状态切换为亮屏状态。
本领域技术人员可以理解,图16中示出的结构并不构成对计算机设备1600的限定,可以包括比图示更多或更少的组件,或者组合某些组件,或者采用不同的组件布置。
本申请还提供一种计算机设备,该计算机设备包括:处理器和存储器,该存储器中存储有至少一条指令、至少一段程序、代码集或指令集,该至少一条指令、至少一段程序、代码集或指令集由处理器加载并执行以实现上述各方法实施例提供的控制虚拟对象的方法。
另外,本申请实施例还提供了一种存储介质,所述存储介质用于存储计算机程序,所述计算机程序用于执行上述实施例提供的控制虚拟对象的方法。
本申请实施例还提供了一种包括指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述实施例提供的控制虚拟对象的方法。
应当理解的是,在本文中提及的“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。字符“/”一般表示前后关联对象是一种“或”的关系。
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。
以上所述仅为本申请的可选实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (16)

  1. 一种控制虚拟对象的方法,所述方法由终端执行,所述方法包括:
    显示用户界面,所述用户界面包括虚拟环境画面和交互面板区,所述虚拟环境画面是以虚拟对象的视角对虚拟环境进行观察的画面;
    接收在所述用户界面上的目标区域上的快捷操作,所述目标区域包括属于所述虚拟环境画面且不属于所述交互面板区的区域;
    根据所述快捷操作控制所述虚拟对象在所述虚拟环境中进行对应的活动。
  2. 根据权利要求1所述的方法,所述根据所述快捷操作控制所述虚拟对象在所述虚拟环境中进行对应的活动,包括:
    当所述虚拟对象的身体姿态满足第一条件时,根据所述快捷操作控制所述虚拟对象在所述虚拟环境中调整所述身体姿态。
  3. 根据权利要求2所述的方法,所述第一条件包括所述虚拟对象的身体姿态是下蹲状态;所述快捷操作包括双指竖向滑动操作;
    所述当所述虚拟对象的身体姿态满足第一条件时,根据所述快捷操作控制所述虚拟对象在所述虚拟环境中调整所述身体姿态,包括:
    接收所述目标区域上的所述双指竖向滑动操作;
    当所述虚拟对象的身体姿态是下蹲状态时,根据所述双指竖向滑动操作控制所述虚拟对象在所述虚拟环境中由下蹲状态切换为起身状态。
  4. 根据权利要求3所述的方法,所述目标区域包括第一目标区域和第二目标区域;
    所述接收所述目标区域上的所述双指竖向滑动操作,包括:
    获取第一接触点在所述第一目标区域的第一起始位置坐标和第二接触点在所述第二目标区域的第二起始位置坐标;
    当所述第一接触点和所述第二接触点停止滑动时,获取所述第一接触点在所述第一目标区域的第一终止位置坐标和所述第二接触点在所述第二目标区域的第二终止位置坐标;
    当所述第一接触点的纵坐标位移大于纵坐标位移阈值,且,所述第二接触点的纵坐标位移大于所述纵坐标位移阈值时,确定所述目标区域上接收到所述双指竖向滑动操作。
  5. 根据权利要求1所述的方法,所述根据所述快捷操作控制所述虚拟对象在所述虚拟环境中进行对应的活动,包括:
    当所述虚拟道具的使用状态满足第二条件时,根据所述快捷操作控制所述虚拟对象在所述虚拟环境中开启所述虚拟道具对应的配件;
    或,
    当所述虚拟道具的使用状态满足第三条件时,根据所述快捷操作控制所述虚拟对象在所述虚拟环境中启动所述虚拟道具;
    或,
    当所述虚拟道具的使用状态满足第四条件时,根据所述快捷操作控制所述虚拟对象在所述虚拟环境中投掷所述虚拟道具。
  6. 根据权利要求5所述的方法,所述第二条件包括虚拟道具是自动启动状态;所述快捷操作还包括双击操作;
    所述当所述虚拟道具的使用状态满足第二条件时,根据所述快捷操作控制所述虚拟对象在所述虚拟环境中开启所述虚拟道具对应的配件,包括:
    接收所述目标区域上的所述双击操作;
    当第一虚拟道具是所述自动启动状态时,根据所述双击操作控制所述虚拟对象在所述虚拟环境中开启所述第一虚拟道具对应的瞄准镜。
  7. 根据权利要求6所述的方法,所述接收所述目标区域上的所述双击操作,包括:
    获取在所述目标区域上的第一次点击操作的时间和第二次点击操作的时间;
    当所述第一次点击操作和所述第二次点击操作之间的时间间隔小于时间间隔阈值时,确定所述目标区域上接收到所述双击操作。
  8. 根据权利要求5所述的方法,所述第三条件包括虚拟道具是手动启动状态;所述快捷操作还包括双击长按操作;
    所述当所述虚拟道具的使用状态满足第三条件时,根据所述快捷操作控制所述虚拟对象在所述虚拟环境中启动所述虚拟道具,包括:
    接收所述目标区域上的所述双击长按操作;
    当第二虚拟道具是手动开启状态时,根据所述双击长按操作控制所述虚拟对象在所述虚拟环境中启动所述第二虚拟道具。
  9. 根据权利要求8所述的方法,所述接收所述目标区域上的所述双击长按操作,包括:
    在接收所述目标区域上的所述双击操作后,接收在所述目标区域上的按压操作;
    当所述按压操作的时长大于持续时长阈值时,确定所述目标区域上接收到所述双击长按操作。
  10. 根据权利要求8或9所述的方法,所述当所述虚拟道具是手动开启状态时,根据所述双击长按操作控制所述虚拟对象在所述虚拟环境中启动所述虚拟道具之后,还包括:
    在所述目标区域上的双击长按操作停止时,控制所述虚拟对象关闭所述虚拟道具。
  11. 根据权利要求5所述的方法,所述第四条件包括虚拟对象拥有所述虚拟道具;所述快捷操作还包括双指横向滑动操作;
    所述当所述虚拟道具的使用状态满足第四条件时,根据所述快捷操作控制所述虚拟对象在所述虚拟环境中投掷所述虚拟道具,包括:
    接收所述目标区域上的所述双指横向滑动操作;
    当所述虚拟对象拥有第三虚拟道具时,根据所述双指横向滑动操作控制所述虚拟对象在所述虚拟环境中投掷所述第三虚拟道具。
  12. 根据权利要求11所述的方法,所述目标区域包括第一目标区域和第二目标区域;
    所述接收所述目标区域上的所述双指横向滑动操作,包括:
    获取第一接触点在所述第一目标区域的第一起始位置坐标和第二接触点在所述第二目标区域的第二起始位置坐标;
    当所述第一接触点和所述第二接触点停止滑动时,获取所述第一接触点在所述第一目标区域的第一终止位置坐标和所述第二接触点在所述第二目标区域的第二终止位置坐标;
    当所述第一接触点的横坐标位移大于横坐标位移阈值,且,所述第二接触点的横坐标位移大于所述横坐标位移阈值时,确定所述目标区域上接收到所述双指横向滑动操作。
  13. 一种控制虚拟对象的装置,所述装置包括:
    显示模块,用于显示用户界面,所述用户界面包括虚拟环境画面和交互面板区,所述虚拟环境画面是以虚拟对象的视角对虚拟环境进行观察的画面;
    接收模块,用于接收在所述用户界面上的目标区域上的快捷操作,所述目标区域包括属于所述虚拟环境画面且不属于所述交互面板区的区域;
    控制模块,用于根据所述快捷操作控制所述虚拟对象在所述虚拟环境中进行对应的活动。
  14. 一种计算机设备,所述计算机设备包括:处理器和存储器,所述存储器中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由所述处理器加载并执行,以实现如权利要求1至12任一项所述的控制虚拟对象的方法。
  15. 一种存储介质,所述存储介质用于存储计算机程序,所述计算机程序用于执行权利要求1至12任一项所述的控制虚拟对象的方法。
  16. 一种包括指令的计算机程序产品,当其在计算机上运行时,使得所述计算机执行权利要求1至12任一项所述的控制虚拟对象的方法。
PCT/CN2020/103006 2019-08-23 2020-07-20 控制虚拟对象的方法和相关装置 WO2021036577A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2021555308A JP2022525172A (ja) 2019-08-23 2020-07-20 仮想オブジェクトの制御方法、装置、コンピュータ機器及びプログラム
SG11202109543UA SG11202109543UA (en) 2019-08-23 2020-07-20 Method for controlling virtual object, and related apparatus
KR1020217029447A KR102619439B1 (ko) 2019-08-23 2020-07-20 가상 객체를 제어하는 방법 및 관련 장치
US17/459,037 US20210387087A1 (en) 2019-08-23 2021-08-27 Method for controlling virtual object and related apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910784863.5 2019-08-23
CN201910784863.5A CN110507993B (zh) 2019-08-23 2019-08-23 控制虚拟对象的方法、装置、设备及介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/459,037 Continuation US20210387087A1 (en) 2019-08-23 2021-08-27 Method for controlling virtual object and related apparatus

Publications (1)

Publication Number Publication Date
WO2021036577A1 true WO2021036577A1 (zh) 2021-03-04

Family

ID=68626588

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/103006 WO2021036577A1 (zh) 2019-08-23 2020-07-20 控制虚拟对象的方法和相关装置

Country Status (6)

Country Link
US (1) US20210387087A1 (zh)
JP (1) JP2022525172A (zh)
KR (1) KR102619439B1 (zh)
CN (1) CN110507993B (zh)
SG (1) SG11202109543UA (zh)
WO (1) WO2021036577A1 (zh)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110507993B (zh) * 2019-08-23 2020-12-11 腾讯科技(深圳)有限公司 控制虚拟对象的方法、装置、设备及介质
CN111135568B (zh) * 2019-12-17 2022-01-04 腾讯科技(深圳)有限公司 虚拟道具的控制方法和装置、存储介质及电子装置
CN111298440A (zh) * 2020-01-20 2020-06-19 腾讯科技(深圳)有限公司 虚拟环境中的虚拟角色控制方法、装置、设备及介质
CN111298435B (zh) * 2020-02-12 2024-04-12 网易(杭州)网络有限公司 Vr游戏的视野控制方法、vr显示终端、设备及介质
CN111389000A (zh) * 2020-03-17 2020-07-10 腾讯科技(深圳)有限公司 虚拟道具的使用方法、装置、设备及介质
CN111589142B (zh) * 2020-05-15 2023-03-21 腾讯科技(深圳)有限公司 虚拟对象的控制方法、装置、设备及介质
CN112587924B (zh) * 2021-01-08 2024-07-23 网易(杭州)网络有限公司 游戏ai的躲避方法、装置、存储介质及计算机设备
CN112807679A (zh) * 2021-02-01 2021-05-18 网易(杭州)网络有限公司 游戏道具的使用方法、装置、设备及存储介质
CN112807680A (zh) * 2021-02-09 2021-05-18 腾讯科技(深圳)有限公司 虚拟场景中虚拟对象的控制方法、装置、设备及存储介质
US20220317782A1 (en) * 2021-04-01 2022-10-06 Universal City Studios Llc Interactive environment with portable devices
CN113546417B (zh) * 2021-04-28 2024-07-26 网易(杭州)网络有限公司 一种信息处理方法、装置、电子设备和存储介质
CN113655927B (zh) * 2021-08-24 2024-04-26 亮风台(上海)信息科技有限公司 一种界面交互方法与设备

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103252087A (zh) * 2012-02-20 2013-08-21 富立业资讯有限公司 具有触控面板媒体的游戏控制方法及该游戏媒体
JP2016093361A (ja) * 2014-11-14 2016-05-26 株式会社コロプラ ゲームプログラム
CN106502563A (zh) * 2016-10-19 2017-03-15 北京蜜柚时尚科技有限公司 一种游戏控制方法及装置
CN108363531A (zh) * 2018-01-17 2018-08-03 网易(杭州)网络有限公司 一种游戏中的交互方法及装置
CN108926840A (zh) * 2018-06-27 2018-12-04 努比亚技术有限公司 游戏控制方法、移动终端及计算机可读存储介质
CN109126129A (zh) * 2018-08-31 2019-01-04 腾讯科技(深圳)有限公司 在虚拟环境中对虚拟物品进行拾取的方法、装置及终端
CN110075522A (zh) * 2019-06-04 2019-08-02 网易(杭州)网络有限公司 射击游戏中虚拟武器的控制方法、装置及终端
CN110507993A (zh) * 2019-08-23 2019-11-29 腾讯科技(深圳)有限公司 控制虚拟对象的方法、装置、设备及介质

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6057845A (en) * 1997-11-14 2000-05-02 Sensiva, Inc. System, method, and apparatus for generation and recognizing universal commands
US8118653B2 (en) * 2008-06-16 2012-02-21 Microsoft Corporation Taking cover in a simulated environment
WO2014163220A1 (ko) * 2013-04-05 2014-10-09 그리 가부시키가이샤 온라인 슈팅 게임 제공 장치 및 방법
US11465040B2 (en) * 2013-12-11 2022-10-11 Activision Publishing, Inc. System and method for playing video games on touchscreen-based devices
CN104436657B (zh) * 2014-12-22 2018-11-13 青岛烈焰畅游网络技术有限公司 游戏控制方法、装置以及电子设备
CN105582670B (zh) * 2015-12-17 2019-04-30 网易(杭州)网络有限公司 瞄准射击控制方法及装置
JP6308643B1 (ja) * 2017-03-24 2018-04-11 望月 玲於奈 姿勢算出プログラム、姿勢情報を用いたプログラム
JP6800464B2 (ja) * 2017-05-11 2020-12-16 株式会社アルヴィオン プログラム及び情報処理装置
CN107661630A (zh) * 2017-08-28 2018-02-06 网易(杭州)网络有限公司 一种射击游戏的控制方法及装置、存储介质、处理器、终端
CN107754308A (zh) * 2017-09-28 2018-03-06 网易(杭州)网络有限公司 信息处理方法、装置、电子设备及存储介质
CN108579086B (zh) * 2018-03-27 2019-11-08 腾讯科技(深圳)有限公司 对象的处理方法、装置、存储介质和电子装置
CN108744509A (zh) * 2018-05-30 2018-11-06 努比亚技术有限公司 一种游戏操作控制方法、移动终端及计算机可读存储介质
CN108888952A (zh) * 2018-06-19 2018-11-27 腾讯科技(深圳)有限公司 虚拟道具显示方法、装置、电子设备及存储介质
CN108970112A (zh) * 2018-07-05 2018-12-11 腾讯科技(深圳)有限公司 姿势的调整方法和装置、存储介质、电子装置

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103252087A (zh) * 2012-02-20 2013-08-21 富立业资讯有限公司 具有触控面板媒体的游戏控制方法及该游戏媒体
JP2016093361A (ja) * 2014-11-14 2016-05-26 株式会社コロプラ ゲームプログラム
CN106502563A (zh) * 2016-10-19 2017-03-15 北京蜜柚时尚科技有限公司 一种游戏控制方法及装置
CN108363531A (zh) * 2018-01-17 2018-08-03 网易(杭州)网络有限公司 一种游戏中的交互方法及装置
CN108926840A (zh) * 2018-06-27 2018-12-04 努比亚技术有限公司 游戏控制方法、移动终端及计算机可读存储介质
CN109126129A (zh) * 2018-08-31 2019-01-04 腾讯科技(深圳)有限公司 在虚拟环境中对虚拟物品进行拾取的方法、装置及终端
CN110075522A (zh) * 2019-06-04 2019-08-02 网易(杭州)网络有限公司 射击游戏中虚拟武器的控制方法、装置及终端
CN110507993A (zh) * 2019-08-23 2019-11-29 腾讯科技(深圳)有限公司 控制虚拟对象的方法、装置、设备及介质

Also Published As

Publication number Publication date
SG11202109543UA (en) 2021-09-29
US20210387087A1 (en) 2021-12-16
KR102619439B1 (ko) 2023-12-28
KR20210125568A (ko) 2021-10-18
CN110507993B (zh) 2020-12-11
CN110507993A (zh) 2019-11-29
JP2022525172A (ja) 2022-05-11

Similar Documents

Publication Publication Date Title
WO2021036577A1 (zh) 控制虚拟对象的方法和相关装置
WO2020253832A1 (zh) 控制虚拟对象对虚拟物品进行标记的方法、装置及介质
CN110694261B (zh) 控制虚拟对象进行攻击的方法、终端及存储介质
CN110413171B (zh) 控制虚拟对象进行快捷操作的方法、装置、设备及介质
CN108434736B (zh) 虚拟环境对战中的装备显示方法、装置、设备及存储介质
WO2020244415A1 (zh) 控制虚拟对象对虚拟物品进行丢弃的方法、装置及介质
WO2021143259A1 (zh) 虚拟对象的控制方法、装置、设备及可读存储介质
CN110465098B (zh) 控制虚拟对象使用虚拟道具的方法、装置、设备及介质
CN113398571B (zh) 虚拟道具的切换方法、装置、终端及存储介质
WO2021031765A1 (zh) 虚拟环境中瞄准镜的应用方法和相关装置
CN111921190B (zh) 虚拟对象的道具装备方法、装置、终端及存储介质
CN110639205B (zh) 操作响应方法、装置、存储介质及终端
WO2021143253A1 (zh) 虚拟环境中虚拟道具的操作方法、装置、设备及可读介质
CN113713383A (zh) 投掷道具控制方法、装置、计算机设备及存储介质
CN112354181B (zh) 开镜画面展示方法、装置、计算机设备及存储介质
CN112402969B (zh) 虚拟场景中虚拟对象控制方法、装置、设备及存储介质
CN112044066B (zh) 界面显示方法、装置、设备及可读存储介质
CN112138392A (zh) 虚拟对象的控制方法、装置、终端及存储介质
CN113730916A (zh) 基于虚拟环境中的资源加载方法、装置、设备及介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20857852

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021555308

Country of ref document: JP

Kind code of ref document: A

Ref document number: 20217029447

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20857852

Country of ref document: EP

Kind code of ref document: A1