WO2021213021A1 - Virtual scene-based operation control display method and apparatus - Google Patents

Virtual scene-based operation control display method and apparatus

Info

Publication number
WO2021213021A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
target
area
target virtual
operation control
Prior art date
Application number
PCT/CN2021/079516
Other languages
English (en)
French (fr)
Inventor
万钰林
胡勋
翁建苗
粟山东
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Priority to SG11202111645SA priority Critical patent/SG11202111645SA/en
Priority to JP2021565788A priority patent/JP7331137B2/ja
Priority to KR1020217035221A priority patent/KR20210143309A/ko
Priority to CA3132506A priority patent/CA3132506A1/en
Priority to EP21786075.8A priority patent/EP3943174A4/en
Priority to AU2021240131A priority patent/AU2021240131B2/en
Priority to US17/501,989 priority patent/US20220032186A1/en
Publication of WO2021213021A1 publication Critical patent/WO2021213021A1/zh
Priority to JP2023130356A priority patent/JP2023159212A/ja

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/533 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/56 Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/64 Methods for processing data by generating or executing the game program for computing dynamical parameters of game objects, e.g. motion determination or computation of frictional forces for a virtual car
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/24 Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30242 Counting objects in image

Definitions

  • This application relates to the field of computer technology, and in particular to a method, apparatus, device, and storage medium for displaying operation controls based on a virtual scene.
  • Role-playing games have gradually become an extremely important category of mobile games.
  • Players can control their virtual objects to move in the virtual scene and to interact with other virtual elements in the virtual scene.
  • A virtual scene can include multiple types of virtual elements, and players need to apply different operation controls when interacting with different types of virtual elements.
  • the embodiments of the present application provide a virtual scene-based operation control display method, device, equipment, and storage medium.
  • the technical solution is as follows:
  • In one aspect, a method for displaying operation controls based on a virtual scene is provided, which is applied to a computer device, and the method includes:
  • a target operation control corresponding to the element type of the target virtual element is displayed, and the target operation control is used to control the target virtual object to interact with the target virtual element.
  • In one aspect, an operation control display device based on a virtual scene includes:
  • an acquiring module, configured to acquire position information of a target virtual object in a virtual scene, where the target virtual object is a virtual object controlled by the terminal;
  • a determining module, configured to determine the element type of the target virtual element corresponding to the target virtual object based on at least one of the location information and the virtual elements in the virtual scene;
  • a display module, configured to display, in the control display area in the virtual scene, the target operation control corresponding to the element type of the target virtual element, where the target operation control is used to control the target virtual object to interact with the target virtual element.
  • the display module is used to:
  • the target operation control is displayed at the display position of any control.
  • the display module is used to:
  • the acquisition module is used to:
  • the position information of the target virtual object is acquired in real time.
  • the acquisition module is used to:
  • a step of determining the element type of the target virtual element corresponding to the target virtual object based on at least one of the location information and the virtual elements in the virtual scene is performed.
  • In one aspect, a computer device is provided that includes one or more processors and one or more memories, where at least one piece of program code is stored in the one or more memories, and the at least one piece of program code is loaded and executed by the one or more processors to implement the virtual scene-based operation control display method.
  • a computer-readable storage medium is provided, and at least one piece of program code is stored in the computer-readable storage medium, and the at least one piece of program code is loaded and executed by a processor to implement the virtual scene-based operation control display method.
  • FIG. 1 is a schematic diagram of an implementation environment of a method for displaying operation controls based on a virtual scene provided by an embodiment of the present application;
  • FIG. 2 is a flowchart of a method for displaying operation controls based on a virtual scene provided by an embodiment of the present application
  • FIG. 3 is a schematic diagram of an operation interface provided by an embodiment of the present application.
  • FIG. 4 is a flowchart of a method for displaying operation controls based on a virtual scene provided by an embodiment of the present application
  • FIG. 5 is a schematic diagram of an operation control switching process provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a control display area provided by an embodiment of the present application.
  • FIG. 7 is a flowchart of a method for displaying operation controls based on a virtual scene provided by an embodiment of the present application
  • FIG. 8 is a schematic diagram of an operation control switching process provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a control display area provided by an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of an operation control display device based on a virtual scene provided by an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of a terminal provided by an embodiment of the present application.
  • FIG. 12 is a schematic structural diagram of a server provided by an embodiment of the present application.
  • Virtual scene: the virtual scene displayed (or provided) when an application is run on the terminal.
  • the virtual scene may be a simulation environment of the real world, a semi-simulation and semi-fictional virtual environment, or a purely fictitious virtual environment.
  • the virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, and a three-dimensional virtual scene, which is not limited in this application.
  • the virtual scene may include the sky, land, sea, etc.
  • the land may include environmental elements such as deserts and cities, and the user can control the virtual objects to move in the virtual scene.
  • Virtual object refers to the movable object in the virtual scene.
  • the movable object may be a virtual character, a virtual animal, an animation character, and the like.
  • the virtual object may be a virtual avatar used to represent the user in the virtual scene.
  • the virtual scene may include multiple virtual objects, and each virtual object has its own shape and volume in the virtual scene and occupies a part of the space in the virtual scene.
  • The virtual object may be a character controlled by operations on the client, an artificial intelligence (AI) set in a virtual environment battle through training, or a non-player character (NPC) set in a virtual scene battle.
  • the virtual object is a virtual character competing in a virtual scene.
  • the number of virtual objects in the virtual scene battle may be preset, or may be dynamically determined according to the number of clients participating in the battle, which is not limited in the embodiment of the present application.
  • MOBA (Multiplayer Online Battle Arena): a multiplayer online tactical competition game.
  • a MOBA game can divide users into at least two hostile camps, and different virtual teams belonging to at least two hostile camps occupy their respective map areas, and compete with a certain victory condition as the goal.
  • each virtual team includes one or more virtual objects.
  • The victory conditions include but are not limited to: occupying a stronghold or destroying the enemy camp's stronghold, killing the virtual objects of the enemy camp, ensuring one's own survival within a specified scene and time, grabbing a certain resource, and surpassing the opponent's score within a specified time.
  • MOBA games can be played in rounds, and the map for each round of tactical competition can be the same or different.
  • the duration of a MOBA game is from the moment the game starts to the moment the victory condition is fulfilled.
  • FIG. 1 is a schematic diagram of an implementation environment of a virtual scene-based operation control display method provided by an embodiment of the present application.
  • the implementation environment may include: a first terminal 110, a server 140, and a second terminal 160.
  • the first terminal 110 installs and runs an application program that supports the display of virtual scenes and virtual objects.
  • The application can be any one of a virtual reality application, a three-dimensional map program, a military simulation program, a role-playing game (RPG), a multiplayer online battle arena (MOBA) game, or a multiplayer gunfight survival game.
  • the first terminal 110 is a terminal used by the first user.
  • The first user uses the first terminal 110 to operate the first virtual object in the virtual scene to perform activities, including but not limited to at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, and throwing.
  • the first virtual object is a first virtual character, such as a simulated character or an animation character.
  • the first terminal 110 is connected to the server 140 through a wireless network or a wired network.
  • the server 140 includes at least one of a server, multiple servers, a cloud computing platform, and a virtualization center.
  • the server 140 is used to provide background services for applications supporting virtual scenes.
  • the server 140 is responsible for the main calculation work, and the first terminal 110 and the second terminal 160 are responsible for the secondary calculation work; or the server 140 is responsible for the secondary calculation work, and the first terminal 110 and the second terminal 160 are responsible for the main calculation work;
  • the server 140, the first terminal 110, and the second terminal 160 adopt a distributed computing architecture to perform collaborative computing.
  • the second terminal 160 installs and runs an application program that supports the display of virtual scenes and virtual objects.
  • The application can be any one of a virtual reality application, a three-dimensional map program, a military simulation program, a role-playing game (RPG), a multiplayer online battle arena (MOBA) game, or a multiplayer gunfight survival game.
  • the second terminal 160 is a terminal used by the second user.
  • The second user uses the second terminal 160 to operate the second virtual object in the virtual scene to perform activities, including but not limited to at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, and throwing.
  • the second virtual object is a second virtual character, such as a simulated character or an animation character.
  • the second terminal 160 is connected to the server 140 through a wireless network or a wired network.
  • the first virtual object controlled by the first terminal 110 and the second virtual object controlled by the second terminal 160 are in the same virtual scene.
  • the first virtual object can interact with the second virtual object in the virtual scene.
  • the first virtual object and the second virtual object may be in a hostile relationship.
  • The first virtual object and the second virtual object may belong to different groups, and different virtual objects in a hostile relationship may apply skills to attack each other, thereby interacting in a battle mode, and the performance effects triggered by the skills are displayed on the first terminal 110 and the second terminal 160.
  • The first virtual object and the second virtual object may be teammates.
  • The first virtual object and the second virtual object may belong to the same group, have a friend relationship, or have temporary communication permissions.
  • the applications installed on the first terminal 110 and the second terminal 160 are the same, or the applications installed on the two terminals are the same type of application on different operating system platforms.
  • The first terminal 110 may generally refer to one of multiple terminals, and the second terminal 160 may generally refer to another of the multiple terminals. This embodiment only uses the first terminal 110 and the second terminal 160 as examples.
  • The device types of the first terminal 110 and the second terminal 160 are the same or different. The device types include at least one of: a smart phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop portable computer, and a desktop computer.
  • the first terminal 110 and the second terminal 160 may be smart phones or other handheld portable game devices.
  • The following description takes a smart phone as an example of the terminal.
  • the number of the aforementioned terminals may be more or less. For example, there may be only one terminal, or there may be dozens or hundreds of terminals, or more.
  • the embodiments of the present application do not limit the number of terminals and device types.
  • the virtual scene-based operation control display method provided by the embodiments of the present application can be applied to various types of application programs, and user operations are simplified by automatically adjusting the operation controls displayed on the operation interface.
  • the user can operate the target virtual object to move in the virtual scene through the terminal, or to interact with the virtual elements in the virtual scene.
  • The terminal automatically adjusts the currently displayed operation control based on changes in the virtual element interacting with the target virtual object, without requiring manual switching by the user, which reduces the difficulty of operation.
  • Fig. 2 is a flowchart of a method for displaying operation controls based on a virtual scene provided by an embodiment of the present application.
  • the method can be applied to a computer device, and in some embodiments, the computer device is any terminal in the foregoing implementation environment.
  • the method for displaying the operation control based on the virtual scene is introduced with reference to FIG. 2.
  • the terminal acquires location information of a target virtual object in a virtual scene, where the target virtual object is a virtual object controlled by the terminal.
  • the target virtual object is a virtual object controlled by a user who uses the terminal.
  • the terminal can display a virtual scene corresponding to the game, and the virtual scene may include multiple types of virtual elements.
  • the virtual elements may include virtual objects, virtual buildings, virtual props, etc.
  • Each virtual element has its own shape and volume in the virtual scene, and occupies a part of the space in the virtual scene.
  • the virtual element may be a defensive tower, minions, monsters, fruits, etc. in the virtual scene.
  • the embodiment of the present application does not limit the specific type and number of the virtual elements.
  • the terminal can also display at least one operation control in the control display area in the virtual scene. The at least one operation control can be used to control the target virtual object to interact with various types of virtual elements.
  • For example, the operation controls include a pawn key for attacking minions, a tower key for attacking defensive towers, a monster key for attacking monsters in the wild area, and a fruit key for attacking the fruits in the virtual scene.
  • the fruit includes an explosion fruit and a recovery fruit.
  • The explosion fruit has a range of action. When the target virtual object is located in the range and launches an attack on the explosion fruit, the explosion fruit can explode, and the target virtual object is "blown up" to the position on the opposite side of the explosion fruit.
  • The recovery fruit can be scattered into multiple sub-recovery fruits, through which the health value can be recovered.
  • The health value is also referred to as hit points (HP).
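The correspondence between element types and operation controls described above can be pictured as a simple lookup table; the Python names below are illustrative, not from the patent:

```python
# Illustrative mapping from element types to the operation controls
# described above (key and type names are hypothetical, not from the patent).
OPERATION_CONTROLS = {
    "minion": "pawn_key",
    "defensive_tower": "tower_key",
    "monster": "monster_key",
    "fruit": "fruit_key",  # covers both explosion and recovery fruits
}

def control_for(element_type: str) -> str:
    """Return the operation control used to attack the given element type."""
    return OPERATION_CONTROLS[element_type]
```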
  • FIG. 3 is a schematic diagram of an operation interface provided by an embodiment of the present application.
  • the operation interface includes a virtual scene display area 301 and an operation control display area 302.
  • the operation control display area 302 displays a plurality of operation controls 303.
  • the user can control the target virtual object to move in the virtual scene through operations such as dragging the target virtual object, and the terminal can obtain the position information of the target virtual object in the virtual scene in real time.
  • the terminal can represent the location information in the form of coordinates. It should be noted that the embodiment of the present application does not limit the specific method for the terminal to obtain the location information.
  • the terminal determines the element type of the target virtual element corresponding to the target virtual object based on at least one of the location information and the virtual elements in the virtual scene.
  • The terminal can determine, based on the location information of the target virtual object, based on the virtual elements contained in the current virtual scene, or based on both the location information and the virtual elements contained in the current virtual scene, which virtual element the target virtual object can interact with; that is, the terminal determines the target virtual element corresponding to the target virtual object.
  • The target virtual element is an element that can interact with the target virtual object, and is therefore also called an interactive element.
  • the terminal can determine the element type of the target virtual element. For example, when the target virtual object is located in a wild area and the virtual element contained in the wild area is a monster, the target virtual element corresponding to the target virtual object is a monster.
  • the monster is also called a wild monster.
  • When the target virtual object is located in an area other than the wild area, and the virtual elements appearing in that area include minions but not monsters, the target virtual element corresponding to the target virtual object is a minion.
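The determination described in the preceding bullets (pick the interactive element type from the area the object is in and the elements present there) might be sketched as follows; the function name, type names, and the fallback behaviour are assumptions, not from the patent:

```python
from typing import Optional, Set

def target_element_type(area_type: str, elements_in_area: Set[str]) -> Optional[str]:
    """Determine the element type of the target virtual element based on the
    area the target virtual object is in and the elements that area contains."""
    # In the wild area, monsters are present and minions are not.
    if area_type == "wild" and "monster" in elements_in_area:
        return "monster"
    # Outside the wild area, minions appear but monsters do not.
    if "minion" in elements_in_area:
        return "minion"
    # No interactive element nearby; this fallback is an assumption,
    # the patent does not specify the behaviour for this case.
    return None
```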
  • The terminal displays, in the control display area in the virtual scene, a target operation control corresponding to the element type of the target virtual element, where the target operation control is used to control the target virtual object to interact with the target virtual element.
  • The control display area is an area for displaying operation controls.
  • one or more operation controls are displayed in the control display area.
  • When the target virtual object interacts with different types of virtual elements, the interaction can be realized through different operation controls. For example, when controlling the target virtual object to attack a monster, the operation control used is the monster key; when controlling the target virtual object to attack a minion, the operation control used is the soldier key; when controlling the target virtual object to attack a defensive tower, the operation control used is the tower key.
  • The terminal can automatically adjust the operation controls displayed on the operation interface based on the element type of the target virtual element currently corresponding to the target virtual object. For example, when the target virtual object is located in the wild area and the terminal determines that the target virtual element corresponding to the target virtual object is a monster, the monster key is displayed in the operation control display area of the operation interface; when the target virtual object leaves the wild area and the terminal determines that the target virtual element corresponding to the target virtual object is a minion, the monster key displayed in the operation control display area is switched to the soldier key.
  • The technical solution provided by the embodiments of the present application obtains the position information of the target virtual object in the virtual scene. Since different types of virtual elements appear in different areas of the virtual scene, the target virtual element corresponding to the current position of the target virtual object can be determined based on at least one of the position information and the virtual elements in the virtual scene. Since the target virtual object needs to apply different operation controls when interacting with different types of virtual elements, the terminal can switch the operation control displayed in the control display area of the virtual scene to the target operation control corresponding to the element type of the target virtual element. When the user operates the target virtual object to interact with the target virtual element, there is no need to manually switch the operation control, which effectively reduces operational complexity and improves user experience.
  • FIG. 4 is a flowchart of a method for displaying operation controls based on a virtual scene provided by an embodiment of the present application.
  • Different virtual elements appear in different areas of the virtual scene; that is, the target virtual element corresponding to the target virtual object differs with the area in which the target virtual object is located.
  • the terminal can switch the operation controls displayed in the control display area based on the location of the target virtual object.
  • this embodiment specifically includes the following steps:
  • the terminal acquires location information of a target virtual object in a virtual scene, where the target virtual object is a virtual object controlled by the terminal.
  • When the terminal detects that the user enters a game, it can display the virtual scene corresponding to the game, and display at least one virtual object and other virtual elements in the virtual scene.
  • the terminal can obtain the position information of the target virtual object currently controlled by the user in real time.
  • the terminal can also acquire the position information of the target virtual object when it detects the user's drag operation on the target virtual object, that is, when the user controls the target virtual object to be in a moving state.
  • the user's drag operation of the target virtual object is implemented by the user through the virtual joystick on the operation interface.
  • the embodiment of the present application does not limit the specific method for obtaining the location information.
  • The position information can be expressed in the form of position coordinates (x, y, z), where (x, z) can be used to determine the position of the target virtual object in the plane direction, and y can be used to determine the position of the target virtual object in the vertical direction.
  • the location information can also be expressed in other forms, which is not limited in the embodiment of the present application.
  • the terminal determines the area type of the area indicated by the location information based on the location information.
  • the developer can divide the map of the virtual scene into M*N squares, that is, into M*N areas, where M and N are both positive integers.
  • The values of M and N correspond to the length and width of the map respectively; if the resolution of the map is M*N, each pixel in the map is regarded as an area. Of course, M and N can also be set to other values, which is not limited in the embodiments of the present application. Different location information can indicate different areas, and each area corresponds to one area type. For example, the area type can be a wild area, a non-wild area, etc., and the correspondence between areas and area types is set by the developer.
  • the configuration file of the application includes a map index table, and the map index table is used to store the location index and the area type corresponding to each area in the map.
  • the application program is also the application program of the virtual scene.
  • the position information of the area on the plane can be used as the position index.
  • Since each pixel is regarded as an area when the map is divided into grid areas, the two values in the position index corresponding to each grid area are integers. It should be noted that the above description of the method for determining the position index is only an exemplary description, and the embodiment of the present application does not limit which method is used to determine the position index.
  • The terminal can determine the position index of the area indicated by the position information. For example, the terminal can obtain the position coordinates in the plane direction from the position information, that is, the values on the x-axis and the z-axis, and round the two values to integers to obtain (x', z') as the position index. Based on the position index, the terminal obtains the area type of the area from the map index table corresponding to the virtual scene, that is, the area type corresponding to the position index is used as the area type of the area. It should be noted that the above description of the method for determining the area type is only an exemplary description, and the embodiment of the present application does not limit which method is used to determine the area type.
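As a rough illustration of the lookup just described: the plane coordinates are rounded down to integers to form the position index, which is then used to query the map index table. The table contents and names below are invented for illustration; whether "take an integer" means flooring or truncation is not specified in the source, so flooring is assumed here:

```python
import math

# Hypothetical map index table: position index (x', z') -> area type.
# In practice this would cover every grid area of the M*N map.
MAP_INDEX_TABLE = {
    (10, 4): "wild",
    (11, 4): "wild",
    (3, 7): "non-wild",
}

def area_type_at(x: float, y: float, z: float, default: str = "non-wild") -> str:
    """Floor the plane coordinates (x, z) to integers to form the position
    index (x', z'), then look up the area type in the map index table."""
    index = (math.floor(x), math.floor(z))  # y (vertical) is not used here
    return MAP_INDEX_TABLE.get(index, default)
```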
  • the terminal switches the operation controls displayed in the control display area based on the area type.
  • different types of areas include different types of virtual elements, and when the target virtual object is located in different areas, the corresponding target virtual elements are also different.
  • the area type is a wild area
  • the virtual elements included in the wild area are monsters and do not include minions
  • the corresponding target virtual element is also a monster.
  • the corresponding target virtual element is a minion.
  • since the difference in area type determines the difference in the target virtual element corresponding to the target virtual object, the terminal can switch the operation control based on the area type, that is, determine the element type of the virtual element corresponding to the area type as the element type of the target virtual element.
  • the terminal switches the operation control displayed in the control display area to an operation control corresponding to the element type of the target virtual element.
  • each time the terminal obtains the location information of the target virtual object, it can update the operation control displayed in the control display area.
  • each time the terminal obtains the location information of the target virtual object, it can compare the area type corresponding to this location information with the area type corresponding to the location information obtained last time to determine whether the area type has changed, and in response to a change of the area type, perform the step of updating the operation control.
  • the embodiment of the present application does not limit which operation control update method is specifically adopted.
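The update-on-change strategy described above can be realized, for example, as follows (the class and attribute names are illustrative, not taken from the patent):

```python
class ControlUpdater:
    """Refresh the displayed control only when the area type obtained this
    time differs from the area type obtained last time."""

    def __init__(self):
        self.last_area_type = None
        self.update_count = 0

    def on_position(self, area_type: str) -> bool:
        """Return True when the control display area was updated."""
        if area_type == self.last_area_type:
            return False            # area type unchanged: keep the control
        self.last_area_type = area_type
        self.update_count += 1      # perform the operation-control update step
        return True
```

Feeding the sequence "wild", "wild", "non-wild" triggers exactly two updates: the initial display and one switch, with the repeated "wild" reading skipped.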
  • FIG. 5 is a schematic diagram of an operation control switching process provided by an embodiment of the present application. See Figure 5.
  • in response to the start of the game, the terminal can execute step 501 of obtaining the location information of the target virtual object, and execute judgment step 502 based on the location information to determine whether the target virtual object is in the wild area; if so, execute step 503 of setting the monster key; if not, execute step 504 of setting the pawn key.
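The branch in steps 502 to 504 amounts to a one-line mapping; the key names below are assumptions made for illustration:

```python
def key_for_area(area_type: str) -> str:
    # Wild area -> monster key (step 503); otherwise -> pawn key (step 504).
    return "monster_key" if area_type == "wild" else "pawn_key"
```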
  • FIG. 6 is a schematic diagram of a control display area provided by an embodiment of the present application.
  • the area 602 in the control display area 601 is the display area of the pawn key and the monster key.
  • the icon of the monster button is displayed in the area 602, as shown in (a) of FIG. 6; in some embodiments, when the target virtual object leaves the wild area, the icon of the soldier button is displayed in the area 602, as shown in (b) of FIG. 6.
  • the virtual elements included in different areas are of multiple types.
  • a certain area includes a type A virtual element, a type B virtual element, and a type C virtual element.
  • the user can operate three different operation controls to control the target virtual object to interact with these three types of virtual elements respectively.
  • the display positions of the three operation controls in the control display area are the same, that is, this position corresponds to the three candidate operation controls.
  • these three operation controls are also called candidate operation controls, that is, the terminal loads candidate operation controls.
  • the display condition is set by the developer.
  • the display condition is set such that the distance between the target virtual object and the virtual element is less than a distance threshold, etc., which is not limited in the embodiment of the present application.
  • the technical solution provided by the embodiment of the present application obtains the position information of the target virtual object in the virtual scene. Since different types of virtual elements appear in different areas of the virtual scene, the target virtual element corresponding to the current position of the target virtual object can be determined based on at least one of the position information and the virtual elements in the virtual scene. Since the target virtual object needs to apply different operation controls when interacting with different types of virtual elements, the terminal can switch the operation control displayed in the control display area of the virtual scene to the target operation control corresponding to the element type of the target virtual element. When the user operates the target virtual object to interact with the target virtual element, there is no need to manually switch the operation control, which effectively reduces operational complexity and improves user experience.
  • FIG. 7 is a flowchart of a method for displaying operation controls based on a virtual scene provided by an embodiment of the present application. Referring to FIG. 7, this embodiment may specifically include the following steps:
  • the terminal acquires location information of a target virtual object in a virtual scene, where the target virtual object is a virtual object controlled by the terminal.
  • the process for the terminal to obtain the location information is the same as that of the above step 401, and will not be repeated here.
  • the terminal determines the element type of the target virtual element corresponding to the target virtual object based on the location information of the target virtual object and the virtual element in the virtual scene.
  • the process includes any of the following multiple implementation manners:
  • Implementation manner 1: Based on the location information, the terminal determines, from the virtual elements contained in the virtual scene, the element type of the virtual element closest to the target virtual object as the element type of the target virtual element. That is, the terminal obtains the position information of the virtual elements in the virtual scene that can interact with the target virtual object, determines the distance between each of these virtual elements and the target virtual object, and takes the virtual element closest to the target virtual object as the target virtual element. For example, when the virtual element closest to the target virtual object is a defensive tower, the terminal uses the defensive tower as the target virtual element.
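Implementation manner 1 can be sketched as follows; the element list format (type, planar coordinates) is an assumption made for the example:

```python
import math

def nearest_element_type(player_pos, elements):
    """Return the element type of the interactable virtual element closest
    to the target virtual object. `elements` is a list of
    (element_type, (x, z)) tuples."""
    def distance(pos):
        return math.hypot(pos[0] - player_pos[0], pos[1] - player_pos[1])
    element_type, _ = min(elements, key=lambda e: distance(e[1]))
    return element_type
```

With a defensive tower at (1, 1) and a fruit at (9, 9), a player at the origin is matched to the defensive tower.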
  • Implementation manner 2 The terminal uses the location indicated by the location information as the center, and determines the target area in the virtual scene.
  • the terminal determines the number of virtual elements contained in each element type in the target area.
  • the terminal determines the element type with the largest number of virtual elements as the element type of the target virtual element.
  • the target area is an effective interaction area, that is, the target virtual object can interact with virtual elements located in the target area, but cannot interact with virtual elements located outside the target area.
  • the radius of the target area is set by the developer, which is not limited in the embodiment of the present application. For example, when the virtual elements most numerous in the target area are fruits, the terminal can determine the fruit as the target virtual element.
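Implementation manner 2 (the majority element type within the target area) can be sketched like this; the circular target area, radius value, and data layout are assumptions for the example:

```python
import math
from collections import Counter

def majority_element_type(player_pos, elements, radius):
    """Count, per element type, the virtual elements inside the circular
    target area centered on the player, and return the most numerous type
    (None when no element lies in the area)."""
    in_area = [etype for etype, (x, z) in elements
               if math.hypot(x - player_pos[0], z - player_pos[1]) <= radius]
    if not in_area:
        return None
    return Counter(in_area).most_common(1)[0][0]
```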
  • Implementation manner 3 The terminal uses the location indicated by the location information as the center, and determines the target area in the virtual scene.
  • the terminal determines the interaction priority of each virtual element in the target area.
  • the terminal determines the element type of the virtual element with the highest interaction priority as the element type of the target virtual element.
  • the interaction priority is used to indicate the order of interaction between the target virtual object and various types of virtual elements.
  • the interaction priority is set by the developer, that is, set as a default value, or set by the user, which is not limited in the embodiment of the present application. For example, if the target area includes both a defensive tower and a fruit, and the interaction priority of the fruit is higher than that of the defensive tower, the terminal can determine the fruit as the target virtual element.
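Implementation manner 3 can be sketched as follows; the priority table is illustrative, since the patent leaves the concrete values to the developer or the user:

```python
import math

def priority_element_type(player_pos, elements, radius, priority):
    """Among the element types present in the target area, return the one
    with the highest interaction priority. `priority` maps element type to
    a rank, where a larger rank interacts first."""
    in_area = {etype for etype, (x, z) in elements
               if math.hypot(x - player_pos[0], z - player_pos[1]) <= radius}
    if not in_area:
        return None
    return max(in_area, key=lambda etype: priority.get(etype, 0))
```

With both a defensive tower and a fruit in the target area and the fruit ranked higher, the fruit is determined as the target virtual element, matching the example in the text.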
  • the types of virtual elements are rich and diverse. By applying the above solution, the target virtual element is automatically determined for the user based on the location of the target virtual object controlled by the user and the element types of the virtual elements near that location, and the subsequent step of switching operation controls based on the type of the interactive virtual element is then executed, which can simplify user operation steps and reduce the difficulty of game operation.
  • the above-mentioned multiple manners can also be combined arbitrarily. For example, different weights can be assigned to the distance between a virtual element and the target virtual object and to the interaction priority corresponding to the virtual element; the total weight corresponding to each virtual element is determined from its distance to the target virtual object and its interaction priority, and the virtual element with the largest total weight is determined as the target virtual element. It should be noted that the above description of determining the target virtual element based on combined conditions is only an exemplary description of one condition combination method; the process of determining the target virtual element based on other combination methods is the same as the above process and will not be repeated here.
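One concrete (and entirely illustrative) way to combine distance and interaction priority into a total weight, as the paragraph above suggests; the weights and the closeness formula are assumptions, not taken from the patent:

```python
import math

def weighted_target_element(player_pos, elements, priority,
                            w_dist=0.5, w_prio=0.5):
    """Score each virtual element from its closeness to the player and its
    interaction priority (both weighted), and return the element with the
    largest total weight."""
    def total_weight(item):
        etype, (x, z) = item
        d = math.hypot(x - player_pos[0], z - player_pos[1])
        closeness = 1.0 / (1.0 + d)   # nearer elements contribute more
        return w_dist * closeness + w_prio * priority.get(etype, 0.0)
    return max(elements, key=total_weight)
```

With this scoring, a high-priority fruit can outweigh a nearer but low-priority defensive tower, while equal priorities fall back to pure distance.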
  • the terminal switches the operation controls displayed in the control display area based on the element type of the target virtual element.
  • the terminal can switch the operation control displayed in the control display area in the virtual scene to the target operation control corresponding to the element type of the target virtual element, where the target operation control is used to control the The target virtual object interacts with the target virtual element.
  • the candidate operation control corresponding to the control display position is obtained.
  • the display positions of the soldier key and the monster key in the virtual scene are the same, that is, the candidate operation controls corresponding to this control display position are the soldier key and the monster key.
  • the terminal can match the element type of the target virtual element with the display conditions of each candidate operation control.
  • the terminal determines the candidate operation control whose display condition is successfully matched with the element type of the target virtual element as the target operation control.
  • the terminal displays the target operation control at the display position of any control.
  • the terminal can determine whether the operation control currently displayed at the display position of any control is the same as the target operation control; if they are the same, the currently displayed operation control continues to be displayed; if they are different, the currently displayed operation control is switched to the target operation control.
  • the display condition is set by the developer; for example, the display condition of the tower key is set to the current target virtual element being a defensive tower, and the display condition of the fruit key is set to the current target virtual element being a fruit.
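The matching-and-switching logic of this step can be sketched as follows; here a display condition is modeled simply as the element type a candidate control matches, which is an assumption made for the example:

```python
def resolve_control(candidates, element_type, currently_shown):
    """`candidates` lists (control_name, display_condition) pairs that share
    one control display position. Return the control to show and whether the
    display had to be switched."""
    target = currently_shown
    for control, condition in candidates:
        if condition == element_type:   # display condition matches
            target = control
            break
    return target, target != currently_shown
```

When the tower key is already shown and the target virtual element becomes a fruit, the function reports a switch to the fruit key; when it stays a defensive tower, no switch occurs.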
  • Fig. 8 is a schematic diagram of an operation control switching process provided by an embodiment of the present application.
  • in response to the start of the game, the terminal can execute step 801 of obtaining the location information of the target virtual object, and execute judgment step 802 based on the location information to determine whether the target virtual object is in the wild area. If the target virtual object is in the wild area, step 803 of determining the element type of the target virtual element is performed, and judgment step 804 is performed based on the target virtual element to determine whether the target virtual element is a defensive tower; if so, tower key setting step 805 is executed; if not, fruit key setting step 806 is executed. If the target virtual object is not in the wild area, step 807 of determining the element type of the interactive element is performed, and judgment step 808 is performed based on the target virtual element to determine whether the target virtual element is a fruit; if so, fruit key setting step 806 is executed; if not, tower key setting step 805 is executed.
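The decision tree of FIG. 8 (steps 802 through 808) collapses to a small function; the key names are illustrative, and the branch outcomes follow the step numbering (805 sets the tower key, 806 the fruit key):

```python
def key_for_state(in_wild_area: bool, element_type: str) -> str:
    """FIG. 8, sketched: in the wild area, a defensive tower selects the
    tower key and anything else the fruit key; outside the wild area, a
    fruit selects the fruit key and anything else the tower key."""
    if in_wild_area:
        return "tower_key" if element_type == "defensive_tower" else "fruit_key"
    return "fruit_key" if element_type == "fruit" else "tower_key"
```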
  • FIG. 9 is a schematic diagram of a control display area provided by an embodiment of the present application.
  • the area 902 in the control display area 901 is the display area of the tower key and the fruit key.
  • the icon of the tower key can be displayed in the area 902, as shown in (a) of FIG. 9; when the target virtual element corresponding to the target virtual object is a fruit, the icon of the fruit key is displayed in the area 902, as shown in (b) of FIG. 9.
  • the technical solution provided by the embodiments of this application obtains the position information of the target virtual object. Since different regions in the virtual scene correspond to different virtual elements, the corresponding target virtual element can be determined based on the position of the target virtual object. Based on the position information of the target virtual object, the operation controls displayed on the operation interface can be automatically switched according to the element type of the target virtual element, so that the operation controls matching the area where the target virtual object is located, that is, the operation controls matching the target virtual element corresponding to the target virtual object, are displayed on the operation interface.
  • the user does not need to manually switch the operation control during the operation, which can effectively reduce the operation complexity and improve the user experience.
  • it is not necessary to display all the operation controls on the operation interface; only the operation controls related to the current target virtual element are displayed, which solves the problem that a large number of operation controls cannot be arranged due to the limitation of the terminal screen size.
  • FIG. 10 is a schematic structural diagram of an operation control display device based on a virtual scene provided by an embodiment of the present application.
  • the device includes: an acquisition module 1001, a determination module 1002, and a display module 1003.
  • the obtaining module 1001 is configured to obtain position information of a target virtual object in a virtual scene, and the target virtual object is a virtual object controlled by the terminal.
  • the determining module 1002 is configured to determine the element type of the target virtual element corresponding to the target virtual object based on at least one of the location information and the virtual elements in the virtual scene.
  • the display module 1003 is used to display the target operation control corresponding to the element type of the target virtual element in the control display area in the virtual scene, and the target operation control is used to control the target virtual object to interact with the target virtual element .
  • the determining module 1002 is used to:
  • the location index of the area indicated by the location information is determined.
  • the area type of the area is obtained from the map index table corresponding to the virtual scene, and the map index table includes the position index of each area in the virtual scene and the area type of each area.
  • the element type of the virtual element corresponding to the area type is determined as the element type of the target virtual element.
  • the determining module 1002 is used to:
  • the element type of the virtual element closest to the target virtual object is determined as the element type of the target virtual element.
  • the determining module 1002 is used to:
  • the target area is determined in the virtual scene.
  • the element type containing the largest number of virtual elements is determined as the element type of the target virtual element.
  • the determining module 1002 is used to:
  • the target area is determined in the virtual scene.
  • the element type of the virtual element with the highest interaction priority is determined as the element type of the target virtual element.
  • the display module 1003 is used for:
  • the display module 1003 is used for:
  • the acquisition module 1001 is used to:
  • the position information of the target virtual object is acquired in real time.
  • the acquisition module 1001 is used to:
  • a step of determining the element type of the target virtual element corresponding to the target virtual object based on at least one of the location information and the virtual elements in the virtual scene is performed.
  • the device provided by the embodiment of the present application acquires the position information of the target virtual object in the virtual scene. Since different types of virtual elements appear in different areas of the virtual scene, the target virtual element corresponding to the current position of the target virtual object can be determined based on at least one of the position information and the virtual elements in the virtual scene. Because the target virtual object needs to apply different operation controls when interacting with different types of virtual elements, the terminal can switch the operation control displayed in the control display area in the virtual scene to the target operation control corresponding to the element type of the target virtual element. When the user operates the target virtual object to interact with the target virtual element, there is no need to manually switch the operation control, which can effectively reduce operation complexity and improve user experience.
  • when the operation control display device based on the virtual scene provided in the above embodiment displays operation controls, only the division into the above-mentioned functional modules is used as an example for illustration. In practical applications, the above-mentioned functions can be assigned to different functional modules as needed, that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above.
  • the virtual scene-based operation control display device provided in the above-mentioned embodiment and the virtual scene-based operation control display method embodiment belong to the same concept. For the specific implementation process, please refer to the method embodiment, which will not be repeated here.
  • the computer equipment provided by the above technical solutions can be implemented as a terminal or a server.
  • the computer equipment includes one or more processors and one or more memories, and at least one piece of program code is stored in the one or more memories.
  • a piece of program code is loaded by the one or more processors and performs the following operations:
  • the target operation control corresponding to the element type of the target virtual element is displayed, and the target operation control is used to control the target virtual object to interact with the target virtual element.
  • the at least one program code is loaded by the one or more processors and performs the following operations: based on the location information, determine the location index of the area indicated by the location information;
  • the map index table includes the location index of each area in the virtual scene and the area type of each area;
  • the element type of the virtual element corresponding to the area type is determined as the element type of the target virtual element.
  • the at least one piece of program code is loaded by the one or more processors and performs the following operations: based on the position information, determining, from the virtual elements contained in the virtual scene, the element type of the virtual element closest to the target virtual object as the element type of the target virtual element.
  • the at least one program code is loaded by the one or more processors and performs the following operations: determining the target area in the virtual scene with the position indicated by the position information as the center;
  • the element type containing the largest number of virtual elements is determined as the element type of the target virtual element.
  • the at least one program code is loaded by the one or more processors and performs the following operations: determining the target area in the virtual scene with the position indicated by the position information as the center;
  • the element type of the virtual element with the highest interaction priority is determined as the element type of the target virtual element.
  • the at least one piece of program code is loaded by the one or more processors and performs the following operations:
  • the at least one piece of program code is loaded by the one or more processors and performs the following operations: determining whether the operation control currently displayed at the control display position is the same as the target operation control;
  • the at least one piece of program code is loaded by the one or more processors and performs the following operations: in response to a movement operation on the target virtual object, the location information of the target virtual object is acquired in real time.
  • the at least one piece of program code is loaded by the one or more processors and performs the following operations: determining the area type of the area indicated by the location information;
  • a step of determining the element type of the target virtual element corresponding to the target virtual object based on at least one of the location information and the virtual elements in the virtual scene is performed.
  • FIG. 11 is a schematic structural diagram of a terminal provided in an embodiment of the present application.
  • the terminal 1100 includes: one or more processors 1101 and one or more memories 1102.
  • the processor 1101 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on.
  • the processor 1101 can be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array).
  • the memory 1102 may include one or more computer-readable storage media, which may be non-transitory.
  • the memory 1102 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices and flash memory storage devices.
  • the non-transitory computer-readable storage medium in the memory 1102 is used to store at least one piece of program code, and the at least one piece of program code is used to be executed by the processor 1101 to implement the method provided in the method embodiment of the present application. Operation control display method based on virtual scene.
  • the terminal 1100 may optionally further include: a peripheral device interface 1103 and at least one peripheral device.
  • the processor 1101, the memory 1102, and the peripheral device interface 1103 may be connected by a bus or a signal line.
  • Each peripheral device can be connected to the peripheral device interface 1103 through a bus, a signal line, or a circuit board.
  • the peripheral device includes: at least one of a radio frequency circuit 1104, a display screen 1105, and a power supply 1106.
  • the peripheral device interface 1103 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 1101 and the memory 1102.
  • the radio frequency circuit 1104 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals.
  • the radio frequency circuit 1104 communicates with a communication network and other communication devices through electromagnetic signals.
  • the radio frequency circuit 1104 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals.
  • the display screen 1105 is used to display a UI (User Interface).
  • the UI can include graphics, text, icons, videos, and any combination thereof.
  • the display screen 1105 also has the ability to collect touch signals on or above the surface of the display screen 1105.
  • the touch signal may be input to the processor 1101 as a control signal for processing.
  • the display screen 1105 may also be used to provide virtual buttons and/or virtual keyboards, also called soft buttons and/or soft keyboards.
  • the display screen 1105 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
  • the power supply 1106 is used to supply power to various components in the terminal 1100.
  • the power supply 1106 may be alternating current, direct current, disposable batteries, or rechargeable batteries.
  • the rechargeable battery may support wired charging or wireless charging.
  • the rechargeable battery can also be used to support fast charging technology.
  • the structure shown in FIG. 11 does not constitute a limitation on the terminal 1100, which may include more or fewer components than shown in the figure, combine certain components, or adopt a different component arrangement.
  • FIG. 12 is a schematic structural diagram of a server provided in an embodiment of the present application.
  • the server 1200 may have relatively large differences due to different configurations or performance, and may include one or more processors and one or more memories.
  • the server 1200 may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface, and the server 1200 may also include other components for implementing device functions, which will not be repeated here.
  • a computer-readable storage medium is also provided, such as a memory including at least one piece of program code, and the at least one piece of program code can be executed by a processor to complete the virtual scene-based operation control display method in the foregoing embodiments.
  • the computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
  • the at least one piece of program code is loaded by the one or more processors and performs the following operations:
  • the target operation control corresponding to the element type of the target virtual element is displayed, and the target operation control is used to control the target virtual object to interact with the target virtual element.
  • the at least one program code is loaded by the one or more processors and performs the following operations: based on the location information, determine the location index of the area indicated by the location information;
  • the map index table includes the location index of each area in the virtual scene and the area type of each area;
  • the element type of the virtual element corresponding to the area type is determined as the element type of the target virtual element.
  • the at least one piece of program code is loaded by the one or more processors and performs the following operations: based on the position information, determining, from the virtual elements contained in the virtual scene, the element type of the virtual element closest to the target virtual object as the element type of the target virtual element.
  • the at least one program code is loaded by the one or more processors and performs the following operations: determining the target area in the virtual scene with the position indicated by the position information as the center;
  • the element type containing the largest number of virtual elements is determined as the element type of the target virtual element.
  • the at least one program code is loaded by the one or more processors and performs the following operations: determining the target area in the virtual scene with the position indicated by the position information as the center;
  • the element type of the virtual element with the highest interaction priority is determined as the element type of the target virtual element.
  • the at least one piece of program code is loaded by the one or more processors and performs the following operations: for any control display position in the control display area, obtaining candidate operation controls corresponding to the control display position;
  • the at least one piece of program code is loaded by the one or more processors and performs the following operations: determining whether the operation control currently displayed at the control display position is the same as the target operation control;
  • the at least one piece of program code is loaded by the one or more processors and performs the following operations: in response to a movement operation on the target virtual object, the location information of the target virtual object is acquired in real time.
  • the at least one piece of program code is loaded by the one or more processors and performs the following operations: determining the area type of the area indicated by the location information;
  • a step of determining the element type of the target virtual element corresponding to the target virtual object based on at least one of the location information and the virtual elements in the virtual scene is performed.
  • a computer program or computer program product including at least one piece of program code is also provided, which when running on a computer device, causes the computer device to execute the virtual scene-based operation control display provided by the foregoing various embodiments. The method will not be repeated here.

Abstract

A method, apparatus, device, and storage medium for displaying operation controls based on a virtual scene, including: obtaining position information of a target virtual object in a virtual scene. Since different types of virtual elements appear in different areas of the virtual scene, the target virtual element corresponding to the current position of the target virtual object can be determined based on at least one of the position information and the virtual elements in the virtual scene. Since the target virtual object needs to apply different operation controls when interacting with different types of virtual elements, the terminal switches the operation control displayed in the control display area of the virtual scene to the target operation control corresponding to the element type of the target virtual element. In the present invention, when the user operates the target virtual object to interact with the target virtual element, there is no need to manually switch operation controls, which can reduce the complexity of user operation.

Description

Method and apparatus for displaying operation controls based on a virtual scene
This application claims priority to Chinese Patent Application No. 202010329239.9, entitled "Method and apparatus for displaying operation controls based on a virtual scene" and filed on April 23, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of computer technology, and in particular to a method, apparatus, device, and storage medium for displaying operation controls based on a virtual scene.
Background
With the development of computer technology and the diversification of terminal functions, more and more mobile games have appeared, among which role-playing games have gradually become an extremely important category of mobile games. In current role-playing games, a player can control his or her own virtual object to move in a virtual scene and interact with other virtual elements in the virtual scene. Usually, a virtual scene can include multiple types of virtual elements, and the player needs to apply different operation controls when interacting with different types of virtual elements.
Summary
The embodiments of this application provide a method, apparatus, device, and storage medium for displaying operation controls based on a virtual scene. The technical solution is as follows:
In one aspect, a method for displaying operation controls based on a virtual scene is provided, applied to a computer device, the method including:
obtaining position information of a target virtual object in a virtual scene, the target virtual object being a virtual object controlled by a terminal;
determining, based on at least one of the position information and virtual elements in the virtual scene, an element type of a target virtual element corresponding to the target virtual object;
displaying, in a control display area in the virtual scene, a target operation control corresponding to the element type of the target virtual element, the target operation control being used to control the target virtual object to interact with the target virtual element.
In one aspect, an apparatus for displaying operation controls based on a virtual scene is provided, the apparatus including:
an obtaining module, configured to obtain position information of a target virtual object in a virtual scene, the target virtual object being a virtual object controlled by a terminal;
a determining module, configured to determine, based on at least one of the position information and virtual elements in the virtual scene, an element type of a target virtual element corresponding to the target virtual object;
a display module, configured to display, in a control display area in the virtual scene, a target operation control corresponding to the element type of the target virtual element, the target operation control being used to control the target virtual object to interact with the target virtual element.
In a possible implementation, the display module is configured to:
for any control display position in the control display area, obtain candidate operation controls corresponding to the control display position;
match the element type of the target virtual element with the display conditions of each candidate operation control;
determine a candidate operation control whose display condition successfully matches the element type of the interactive element as the target operation control;
display the target operation control at the control display position.
In a possible implementation, the display module is configured to:
determine whether the operation control currently displayed at the control display position is the same as the target operation control;
if they are the same, continue to display the currently displayed operation control; if they are different, switch the currently displayed operation control to the target operation control.
In a possible implementation, the obtaining module is configured to:
in response to a movement operation on the target virtual object, obtain the position information of the target virtual object in real time.
In a possible implementation, the obtaining module is configured to:
determine the area type of the area indicated by the position information;
in response to a change of the area type, perform the step of determining, based on at least one of the position information and the virtual elements in the virtual scene, the element type of the target virtual element corresponding to the target virtual object.
In one aspect, a computer device is provided. The computer device includes one or more processors and one or more memories, the one or more memories store at least one piece of program code, and the at least one piece of program code is loaded and executed by the one or more processors to implement the method for displaying operation controls based on a virtual scene.
In one aspect, a computer-readable storage medium is provided. The computer-readable storage medium stores at least one piece of program code, and the at least one piece of program code is loaded and executed by a processor to implement the method for displaying operation controls based on a virtual scene.
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请实施例提供的一种基于虚拟场景的操作控件显示方法的实施环境示意图;
图2是本申请实施例提供的一种基于虚拟场景的操作控件显示方法的流程图;
图3是本申请实施例提供的一种操作界面示意图;
图4是本申请实施例提供的一种基于虚拟场景的操作控件显示方法的流程图;
图5是本申请实施例提供的一种操作控件切换过程的示意图;
图6是本申请实施例提供的一种控件显示区域的示意图;
图7是本申请实施例提供的一种基于虚拟场景的操作控件显示方法的流程图;
图8是本申请实施例提供的一种操作控件切换过程的示意图;
图9是本申请实施例提供的一种控件显示区域的示意图;
图10是本申请实施例提供的一种基于虚拟场景的操作控件显示装置的结构示意图;
图11是本申请实施例提供的一种终端的结构示意图;
图12是本申请实施例提供的一种服务器的结构示意图。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请实施方式作进一步地详细描述。
为了便于理解本申请实施例的技术过程,下面对本申请实施例所涉及的一些名词进行解释:
虚拟场景:是应用程序在终端上运行时显示(或提供)的虚拟场景。该虚拟场景可以是对真实世界的仿真环境,也可以是半仿真半虚构的虚拟环境,还可以是纯虚构的虚拟环境。虚拟场景可以是二维虚拟场景、2.5维虚拟场景和三维虚拟场景中的任意一种,本申请对此不加以限定。例如,虚拟场景可以包括天空、陆地、海洋等,该陆地可以包括沙漠、城市等环境元素,用户可以控制虚拟对象在该虚拟场景中进行移动。
虚拟对象:是指在虚拟场景中的可活动对象。该可活动对象可以是虚拟人物、虚拟动物、动漫人物等。该虚拟对象可以是该虚拟场景中的一个虚拟的用于代表用户的虚拟形象。虚拟场景中可以包括多个虚拟对象,每个虚拟对象在虚拟场景中具有自身的形状和体积,占据虚拟场景中的一部分空间。可选地,该虚拟对象可以是通过在客户端上操作进行控制的角色,也可以是通过训练设置在虚拟环境对战中的人工智能(Artificial Intelligence,AI),还可以是设置在虚拟场景对战中的非玩家角色(Non-Player Character,NPC)。可选地,该虚拟对象是在虚拟场景中进行竞技的虚拟人物。可选地,该虚拟场景对战中的虚拟对象的数量可以是预设设置的,也可以是根据加入对战的客户端的数量动态确定的,本申请实施例对此不作限定。
MOBA(Multiplayer Online Battle Arena,多人在线战术竞技)游戏:是一种在虚拟场景中提供若干个据点,处于不同阵营的用户控制虚拟对象在虚拟场景中对战,占领据点或摧毁敌对阵营据点的游戏。例如,MOBA游戏可将用户分成至少两个敌对阵营,分属至少两个敌对阵营的不同虚拟队伍分别占据各自的地图区域,以某一种胜利条件作为目标进行竞技。其中,每个虚拟队伍包括一个或多个虚拟对象。该胜利条件包括但不限于:占领据点或摧毁敌对阵营据点、击杀敌对阵营的虚拟对象、在指定场景和时间内保证自身的存活、抢夺到某种资源、在指定时间内比分超过对方中的至少一种。MOBA游戏可以以局为单位来进行,每局战术竞技的地图可以相同,也可以不同。一局MOBA游戏的持续时间是从游戏开始的时刻至达成胜利条件的时刻。
在相关技术中，由于手机屏幕大小的限制，大量不同类型的操作控件并不能直接显示在操作区域，这就导致玩家在与不同的虚拟元素互动时，需要手动切换当前显示的操作控件，而且，操作控件的切换步骤往往较为复杂，频繁手动切换操作控件会大大提升游戏的操作复杂度，影响用户体验。
图1是本申请实施例提供的一种基于虚拟场景的操作控件显示方法的实施环境示意图,参见图1,该实施环境可以包括:第一终端110、服务器140和第二终端160。
第一终端110安装和运行有支持虚拟场景以及虚拟对象显示的应用程序。该应用程序可以是虚拟现实应用程序、三维地图程序、军事仿真程序、角色扮演类游戏(Role-Playing Game,RPG)、多人在线战术竞技游戏(Multiplayer Online Battle Arena Games,MOBA)、多人枪战 类生存游戏中的任意一种。第一终端110是第一用户使用的终端,第一用户使用第一终端110操作位于虚拟场景中的第一虚拟对象进行活动,该活动包括但不限于:调整身体姿态、爬行、步行、奔跑、骑行、跳跃、驾驶、拾取、射击、攻击、投掷中的至少一种。示例性的,第一虚拟对象是第一虚拟人物,比如仿真人物角色或动漫人物角色。
第一终端110通过无线网络或有线网络与服务器140相连。
服务器140包括一台服务器、多台服务器、云计算平台和虚拟化中心中的至少一种。服务器140用于为支持虚拟场景的应用程序提供后台服务。可选地,服务器140承担主要计算工作,第一终端110和第二终端160承担次要计算工作;或者,服务器140承担次要计算工作,第一终端110和第二终端160承担主要计算工作;或者,服务器140、第一终端110和第二终端160三者之间采用分布式计算架构进行协同计算。
第二终端160安装和运行有支持虚拟场景以及虚拟对象显示的应用程序。该应用程序可以是虚拟现实应用程序、三维地图程序、军事仿真程序、角色扮演类游戏(Role-Playing Game,RPG)、多人在线战术竞技游戏(Multiplayer Online Battle Arena Games,MOBA)、多人枪战类生存游戏中的任意一种。第二终端160是第二用户使用的终端,第二用户使用第二终端160操作位于虚拟场景中的第二虚拟对象进行活动,该活动包括但不限于:调整身体姿态、爬行、步行、奔跑、骑行、跳跃、驾驶、拾取、射击、攻击、投掷中的至少一种。示意性的,第二虚拟对象是第二虚拟人物,比如仿真人物角色或动漫人物角色。
第二终端160通过无线网络或有线网络与服务器140相连。
可选地,第一终端110控制的第一虚拟对象和第二终端160控制的第二虚拟对象处于同一虚拟场景中,此时第一虚拟对象可以在虚拟场景中与第二虚拟对象进行互动。在一些实施例中,第一虚拟对象以及第二虚拟对象可以为敌对关系,例如,第一虚拟对象与第二虚拟对象可以属于不同的群组,敌对关系的虚拟对象之间,可以应用不同的技能互相攻击,从而进行对战方式的互动,在第一终端110和第二终端160中显示技能所触发的表现效果。
在另一些实施例中,第一虚拟对象以及第二虚拟对象可以为队友关系,例如,第一虚拟对象和第二虚拟对象可以属于同一个群组、具有好友关系或具有临时性的通讯权限。
可选地,第一终端110和第二终端160上安装的应用程序是相同的,或两个终端上安装的应用程序是不同操作系统平台的同一类型应用程序。第一终端110可以泛指多个终端中的一个,第二终端160可以泛指多个终端中的一个,本实施例仅以第一终端110和第二终端160来举例说明。第一终端110和第二终端160的设备类型相同或不同,该设备类型包括:智能手机、平板电脑、电子书阅读器、MP3(Moving Picture Experts Group Audio Layer III,动态影像专家压缩标准音频层面3)播放器、MP4(Moving Picture Experts Group Audio Layer IV,动态影像专家压缩标准音频层面4)播放器、膝上型便携计算机和台式计算机中的至少一种。例如,第一终端110和第二终端160可以是智能手机,或者其他手持便携式游戏设备。以下实施例,以终端包括智能手机来举例说明。
本领域技术人员可以知晓,上述终端的数量可以更多或更少。比如上述终端可以仅为一个,或者上述终端为几十个或几百个,或者更多数量。本申请实施例对终端的数量和设备类型不加以限定。
本申请实施例提供的基于虚拟场景的操作控件显示方法，能够应用于多种类型的应用程序中，通过自动调整操作界面所显示的操作控件，来简化用户操作。在本申请实施例中，以该方法应用于MOBA游戏中为例，用户能够通过终端来操作目标虚拟对象在虚拟场景中进行移动，或者与虚拟场景中的虚拟元素进行互动。在一些实施例中，终端基于与该目标虚拟对象互动的虚拟元素的变化，来自动调整当前显示的虚拟控件，无需用户手动切换，降低操作难度。
图2是本申请实施例提供的一种基于虚拟场景的操作控件显示方法的流程图。该方法能够应用于计算机设备,在一些实施例中,该计算机设备为上述实施环境中的任一终端。在本申请实施例中,以终端作为执行主体为例,结合图2,对该基于虚拟场景的操作控件显示方法进行介绍:
201、终端获取目标虚拟对象在虚拟场景中的位置信息,该目标虚拟对象为终端控制的虚拟对象。
可选地,该目标虚拟对象为使用该终端的用户所控制的虚拟对象。
在一种可能实现方式中，该终端检测到用户进入一局游戏后，能够显示本局游戏对应的虚拟场景，该虚拟场景中可以包括多种类型的虚拟元素。其中，该虚拟元素可以包括虚拟对象、虚拟建筑、虚拟道具等，每个虚拟元素在虚拟场景中具有自身的形状和体积，占据虚拟场景中的一部分空间。例如，该虚拟元素可以为虚拟场景中的防御塔、小兵、怪物、果实等。本申请实施例对该虚拟元素的具体类型以及数目不作限定。当然，该终端还能在虚拟场景中的控件显示区域显示至少一个操作控件，该至少一个操作控件能够用于控制该目标虚拟对象与各个类型的虚拟元素进行互动，例如，该操作控件包括用于对小兵发起攻击的兵键，还包括用于对防御塔发起攻击的塔键，还包括用于对野区的怪物发起攻击的怪物键，还包括用于对虚拟场景中果实发起攻击的果实键。在一些实施例中，果实包括爆炸果实和恢复果实，爆炸果实有一个作用范围，当目标虚拟对象位于该作用范围，且目标虚拟对象对爆炸果实发动攻击时，爆炸果实能够发生爆炸，将目标虚拟对象"炸飞"至与爆炸果实相反的位置上。当玩家控制目标虚拟对象攻击恢复果实时，恢复果实能够散落为多个子恢复果实，目标虚拟对象与子恢复果实接触，就能够恢复健康值，在一些实施例中，健康值也被称为生命值。参见图3，图3是本申请实施例提供的一种操作界面示意图，该操作界面包括虚拟场景显示区域301、操作控件显示区域302，该操作控件显示区域302显示有多个操作控件303。
在一种可能实现方式中,用户能够通过对目标虚拟对象的拖拽等操作,控制该目标虚拟对象在该虚拟场景中移动,终端能够实时获取该目标虚拟对象在该虚拟场景中的位置信息。在一些实施例中,终端能够以坐标的形式来表示该位置信息。需要说明的是,本申请实施例对该终端获取该位置信息的具体方法不作限定。
202、终端基于该位置信息、该虚拟场景内的虚拟元素中的至少一项,确定该目标虚拟对象对应的目标虚拟元素的元素类型。
在一种可能实现方式中,该终端能够基于该目标虚拟对象的位置信息,或者基于当前虚拟场景所包含的虚拟元素,或者基于该位置信息以及当前虚拟场景所包含的虚拟元素,来确定该目标虚拟对象能够与哪个虚拟元素进行互动,也即是,终端确定该目标虚拟对象对应的目标虚拟元素,目标虚拟元素也即是能够与目标虚拟对象进行互动的互动元素,在一些实施例中,目标虚拟元素也被称为可互动元素。在一些实施例中,终端能够确定该目标虚拟元素的元素类型。例如,当该目标虚拟对象位于野区时,野区内所包含的虚拟元素为怪物,则该 目标虚拟对象所对应的目标虚拟元素为怪物,在一些实施例中,怪物也被称为野怪;当该目标虚拟对象位于除野区之外的其他区域时,在该其他区域所出现的虚拟元素包括小兵,而不包括怪物,则该目标虚拟对象所对应的目标虚拟元素为小兵。需要说明的是,上述对确定目标虚拟元素的方法的说明,仅是一种示例性说明,本申请实施例对具体采用哪种方法确定目标虚拟元素不作限定。
203、终端在该虚拟场景中的控件显示区域,显示与该目标虚拟元素的元素类型相对应的目标操作控件,该目标操作控件用于控制该目标虚拟对象与该目标虚拟元素进行互动。
可选地,控件显示区域为用于显示操作控件的区域,在一些实施例中,控件显示区域中显示有一个或多个操作控件。
在本申请实施例中,目标虚拟对象与不同类型的虚拟元素进行互动时,能够通过不同的操作控件来实现。例如,在控制目标虚拟对象攻击怪物时,使用的操作控件为怪物键;在控制目标虚拟对象攻击小兵时,使用的操作控件为兵键;在控制目标虚拟对象攻击防御塔时,使用的操作控件为塔键。
在一种可能实现方式中,终端能够基于目标虚拟对象当前面对的目标虚拟元素的元素类型,来自动调整操作界面所显示的操作控件。例如,当该目标虚拟对象位于野区时,终端确定该目标虚拟对象对应的目标虚拟元素为怪物,则在该操作界面的操作控件显示区域上显示怪物键;当该目标虚拟对象离开野区时,终端确定该目标虚拟对象对应的目标虚拟元素为小兵,则将该操作控件显示区域所显示的怪物键切换为兵键。
本申请实施例提供的技术方案,通过获取目标虚拟对象在虚拟场景中的位置信息,由于虚拟场景中的不同区域会出现不同类型的虚拟元素,因此基于该位置信息、该虚拟场景内的虚拟元素中的至少一项,能够确定该目标虚拟对象在当前位置所对应的目标虚拟元素,由于目标虚拟对象在与不同类型的虚拟元素进行互动时,需应用不同的操作控件,该终端能够将该虚拟场景中控件显示区域所显示的操作控件,切换为与该目标虚拟元素的元素类型相对应的目标操作控件,用户在操作目标虚拟对象与该目标虚拟元素进行互动时,无需手动切换操作控件,能够有效降低操作复杂度,提升用户体验。
上述实施例是对基于虚拟场景的操作控件显示方法的简要介绍,下面将结合图4,对该方法进行具体说明。图4是本申请实施例提供的一种基于虚拟场景的操作控件显示方法的流程图,在一种可能实现方式中,虚拟场景中的不同区域会出现不同的虚拟元素,也即是目标虚拟对象位于不同区域时,其对应的目标虚拟元素也不同,因此,终端能够基于该目标虚拟对象所处的位置,来切换控件显示区域所显示的操作控件。参见图4,该实施例具体包括以下步骤:
401、终端获取目标虚拟对象在虚拟场景中的位置信息,该目标虚拟对象为终端控制的虚拟对象。
当终端检测到用户进入一局游戏后，能够显示本局游戏对应的虚拟场景，在该虚拟场景中显示至少一个虚拟对象以及其他虚拟元素。在一种可能实现方式中，游戏开局后，该终端能够实时获取用户当前所控制的目标虚拟对象的位置信息。可选地，该终端也能够在检测到用户对目标虚拟对象的拖拽等操作时，即用户控制目标虚拟对象处于移动状态时，再获取该目标虚拟对象的位置信息。在一些实施例中，用户对目标虚拟对象的拖动操作由用户通过操作界面上的虚拟摇杆来实现。本申请实施例对该位置信息的具体获取方法不作限定。
在一种可能实现方式中,当该虚拟场景为三维场景时,该位置信息能够表示为位置坐标(x,y,z)的形式,其中(x,z)能够用于确定目标虚拟对象在平面方向上的位置,y可以用于确定目标虚拟对象在垂直方向上的位置。当然,该位置信息也能够表示为其他形式,本申请实施例对此不作限定。
402、终端基于该位置信息,确定该位置信息所指示区域的区域类型。
在一种可能实现方式中，开发人员能够将虚拟场景的地图划分为M*N个方格，即划分为M*N个区域，其中，M和N均为正整数。在一些实施例中，M和N的数值分别对应于地图的长和宽，即地图的分辨率为M*N，也即是将地图中的每一个像素作为一个区域；当然，M和N也可以设置为其他数值，本申请实施例对此不作限定。不同位置信息能够指示不同区域，一个区域对应于一个区域类型，例如，该区域类型可以为野区、非野区等，区域与区域类型的对应关系由开发人员进行设置。
在一种可能实现方式中,应用程序的配置文件中包括地图索引表,该地图索引表用于存储地图中各个区域对应的位置索引和区域类型。其中,该应用程序也即是虚拟场景的应用程序。在一些实施例中,能够将该区域在平面上的位置信息作为该位置索引,例如,对于方格区域的某一顶点来说,能够将该顶点在平面方向上的位置坐标(x,z)作为该方格区域的位置索引。由于在对地图进行方格区域划分时,每一像素作为一个区域,则每个方格区域对应的位置索引中的两个数值均为整数。需要说明的是,上述对位置索引确定方法的说明,仅是一种示例性说明,本申请实施例对具体采用哪种方法确定位置索引不作限定。
在一种可能实现方式中,该终端能够基于该位置信息,确定该位置信息所指示区域的位置索引,例如,该终端可以获取该位置信息中平面方向上的位置坐标,即x轴的数值和z轴的数值,对两个数值取整数,得到(x’,z’)作为位置索引。终端再基于该位置索引,从该虚拟场景对应的地图索引表中,获取该区域的区域类型,也即是将该位置索引对应的区域类型作为该区域的区域类型。需要说明的是,上述对区域类型确定方法的说明,仅是一种示例性说明,本申请实施例对具体采用哪种方法确定区域类型不作限定。
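上述"对位置坐标中平面方向上的 x、z 取整得到位置索引，再查地图索引表得到区域类型"的过程，可用如下Python代码草图示意。其中的索引表内容与区域类型名称均为便于说明而假设的，并非本申请方案的限定实现：

```python
# 示意性代码草图：索引表内容、区域类型名称均为便于说明而假设。
from typing import Dict, Tuple

# 地图索引表：键为方格区域的位置索引 (x', z')，值为该区域的区域类型
MAP_INDEX_TABLE: Dict[Tuple[int, int], str] = {
    (10, 20): "野区",
    (10, 21): "非野区",
}

def region_type_of(position: Tuple[float, float, float]) -> str:
    """根据目标虚拟对象的位置信息 (x, y, z)：取平面方向上的 x、z 坐标并取整，
    得到位置索引 (x', z')，再从地图索引表中查询该索引对应的区域类型。"""
    x, _, z = position
    index = (int(x), int(z))                     # 对 x、z 取整，得到位置索引
    return MAP_INDEX_TABLE.get(index, "非野区")  # 查不到时默认按非野区处理

print(region_type_of((10.7, 0.0, 20.3)))  # 输出: 野区
```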
403、终端基于该区域类型,切换控件显示区域所显示的操作控件。
在本申请实施例中,不同类型的区域包括不同类型的虚拟元素,则当该目标虚拟对象位于不同区域时,所对应的目标虚拟元素也不同。例如,当区域类型为野区时,野区所包括的虚拟元素为怪物,而不包括小兵,则该目标虚拟对象进入野区时,其对应的目标虚拟元素也就为怪物,当该目标虚拟对象离开野区时,则其对应的目标虚拟元素为小兵。在本申请实施例中,由于区域类型的不同能够决定目标虚拟对象所对应目标虚拟元素的不同,终端能够基于区域类型来对操作控件进行切换,也即是,将该区域类型所对应的虚拟元素的元素类型,确定为该目标虚拟元素的元素类型。终端将控件显示区域所显示的操作控件切换为与该目标虚拟元素的元素类型相对应的操作控件。在一种可能实现方式中,该终端每次获取到目标虚拟对象的位置信息时,均能够对该控件显示区域所显示的操作控件进行更新。在一种可能实现方式中,该终端每次获取到目标虚拟对象的位置信息时,能够将该位置信息所对应的区域类型与上一次获取到的位置信息所对应的区域类型进行对比,判断区域类型是否改变,响应于该区域类型改变,再执行操作控件更新的步骤。本申请实施例对具体采用哪种操作控件更新方法不作限定。
下面将结合图5和图6,以兵键和怪物键的切换过程为例,对上述操作控件切换过程进行说明。图5是本申请实施例提供的一种操作控件切换过程的示意图,参见图5,响应于游戏开局,该终端能够执行获取目标虚拟对象位置信息的步骤501,基于位置信息执行判断步骤502,判断该目标虚拟对象是否在野区,若是,则执行设置怪物键的步骤503,若不是,则执行设置兵键的步骤504。
图6是本申请实施例提供的一种控件显示区域的示意图，参见图6，控件显示区域601中的602区域即为兵键和怪物键的显示区域。在一些实施例中，当该目标虚拟对象进入野区时，该602区域显示有怪物键的图标，如图6中的(a)图所示；在一些实施例中，当该目标虚拟对象离开野区时，该602区域显示有兵键的图标，如图6中的(b)图所示。
在一种可能实现方式中,不同区域所包括的虚拟元素是多种类型的,例如,某一区域包括A类虚拟元素、B类虚拟元素、C类虚拟元素,用户能够通过三个不同的操作控件,控制目标虚拟对象分别与这三个类型的虚拟元素进行互动。在一些实施例中,这三个操作控件在控件显示区域的显示位置相同,即这一位置对应三个候选操作控件。当终端确定目标虚拟对象位于该某一区域时,可以预先加载这三个操作控件,此时,这三个操作控件也被称为候选操作控件,也即是终端加载候选操作控件,在该目标虚拟对象的移动过程中,当该目标虚拟对象所处的位置满足任一个操作控件的显示条件时,再将该任一个操作控件的图标进行显示。可选地,该显示条件由开发人员进行设置,例如,该显示条件设置为目标虚拟对象与虚拟元素之间的距离小于距离阈值等,本申请实施例对此不作限定。应用上述方案,终端基于目标虚拟对象的位置信息,对操作控件进行提前加载,能够提高操作控件的切换效率。
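上述"预加载某一显示位置对应的多个候选操作控件，当目标虚拟对象所处位置满足某一控件的显示条件（如与虚拟元素的距离小于距离阈值）时再显示该控件"的逻辑，可用如下Python代码草图示意。其中的距离阈值、元素与控件数据均为便于说明而假设的：

```python
# 示意性代码草图：距离阈值、元素与控件数据均为便于说明而假设。
import math

DISTANCE_THRESHOLD = 5.0  # 假设的显示条件：目标虚拟对象与虚拟元素的距离小于该阈值

def pick_control(player_pos, elements, preloaded_controls):
    """player_pos: 目标虚拟对象的平面坐标 (x, z)；
    elements: [(元素类型, (x, z))]，该区域内各虚拟元素及其位置；
    preloaded_controls: {元素类型: 已预加载的候选操作控件名}。
    返回第一个满足距离显示条件的候选操作控件名；均不满足时返回 None，表示暂不显示。"""
    for elem_type, (ex, ez) in elements:
        dist = math.hypot(ex - player_pos[0], ez - player_pos[1])
        if dist < DISTANCE_THRESHOLD and elem_type in preloaded_controls:
            return preloaded_controls[elem_type]
    return None

controls = {"怪物": "怪物键", "小兵": "兵键"}  # 进入该区域时预先加载的候选操作控件
elems = [("怪物", (2.0, 2.0)), ("小兵", (30.0, 0.0))]
print(pick_control((0.0, 0.0), elems, controls))  # 输出: 怪物键
```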
本申请实施例提供的技术方案,通过获取目标虚拟对象在虚拟场景中的位置信息,由于虚拟场景中的不同区域会出现不同类型的虚拟元素,因此基于该位置信息、该虚拟场景内的虚拟元素中的至少一项,能够确定该目标虚拟对象在当前位置所对应的目标虚拟元素,由于目标虚拟对象在与不同类型的虚拟元素进行互动时,需应用不同的操作控件,该终端能够将该虚拟场景中控件显示区域所显示的操作控件,切换为与该目标虚拟元素的元素类型相对应的目标操作控件,用户在操作目标虚拟对象与该目标虚拟元素进行互动时,无需手动切换操作控件,能够有效降低操作复杂度,提升用户体验。
在一种可能实现方式中,终端还能够基于目标虚拟对象的位置信息以及虚拟场景中的虚拟元素,来综合确定目标虚拟对象当前所对应的目标虚拟元素,进而基于当前目标虚拟元素的元素类型,来切换操作控件。图7是本申请实施例提供的一种基于虚拟场景的操作控件显示方法的流程图,参见图7,该实施例具体可以包括以下步骤:
701、终端获取目标虚拟对象在虚拟场景中的位置信息,该目标虚拟对象为终端控制的虚拟对象。
终端获取位置信息的过程与上述步骤401同理,在此不做赘述。
702、终端基于该目标虚拟对象的位置信息以及该虚拟场景内的虚拟元素,确定该目标虚拟对象所对应的目标虚拟元素的元素类型。
在一种可能实现方式中,该过程包括以下多种实现方式中的任一种:
实现方式一、终端基于该位置信息，从该虚拟场景所包含的虚拟元素中，将与该目标虚拟对象距离最近的虚拟元素的元素类型，确定为该目标虚拟元素的元素类型。也即是，终端获取虚拟场景中能够与该目标虚拟对象进行互动的虚拟元素的位置信息，确定各个能够与该目标虚拟对象进行互动的虚拟元素与该目标虚拟对象的距离，将距离该目标虚拟对象最近的虚拟元素作为该目标虚拟元素。例如，当距离该目标虚拟对象最近的虚拟元素为防御塔时，终端将该防御塔作为该目标虚拟元素。
实现方式二、终端以该位置信息所指示的位置为中心,在该虚拟场景中确定目标区域。终端确定该目标区域中各个元素类型所包含虚拟元素的数目。终端将包含虚拟元素的数目最多的元素类型,确定为该目标虚拟元素的元素类型。在一些实施例中,该目标区域为有效互动区域,即该目标虚拟对象能够与位于该目标区域内的虚拟元素进行互动,对于位于该目标区域外的虚拟元素,则无法进行互动,可选地,该目标区域的半径由开发人员进行设置,本申请实施例对此不作限定。例如,该目标区域内包含最多的虚拟元素为果实时,则终端能够将果实确定为该目标虚拟元素。
实现方式三、终端以该位置信息所指示的位置为中心,在该虚拟场景中确定目标区域。终端确定该目标区域中各个虚拟元素的互动优先级。终端将互动优先级最高的虚拟元素的元素类型,确定为该目标虚拟元素的元素类型。在一些实施例中,该互动优先级用于指示该目标虚拟对象与各类型的虚拟元素的互动顺序。可选地,该互动优先级由开发人员进行设置,即设置为默认值,或者由用户自行设置,本申请实施例对此不作限定。例如,该目标区域中同时包括防御塔和果实,且果实的互动优先级高于防御塔,终端能够将果实确定为该目标虚拟元素。
在MOBA游戏中,虚拟元素的类型丰富多样,应用上述方案,基于用户所控制的目标虚拟对象的所处位置、该位置附近的虚拟元素的元素类型,来自动为用户确定目标虚拟元素,进而执行后续的基于可互动虚拟元素的类型来切换操作控件的步骤,能够简化用户操作步骤,降低游戏的操作难度。
在本申请实施例中，上述多种方法也可以任意组合，例如，可以为虚拟元素与该目标虚拟对象之间的距离、虚拟元素对应的互动优先级分配不同的权重，综合虚拟元素与目标虚拟对象之间的距离、互动优先级，确定虚拟元素对应的总权重，将总权重最大的虚拟元素，确定为目标虚拟元素。需要说明的是，上述对基于组合条件确定目标虚拟元素的说明，仅是一种条件组合方式的示例性说明，基于其他组合方式确定目标虚拟元素的过程与上述过程同理，在此不作赘述。
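上述"为距离与互动优先级分配不同的权重，综合计算总权重，将总权重最大的虚拟元素确定为目标虚拟元素"的过程，可用如下Python代码草图示意。其中的权重数值、得分换算方式与元素数据均为便于说明而假设的：

```python
# 示意性代码草图：权重数值、得分换算方式与元素数据均为便于说明而假设。
import math

W_DIST, W_PRIO = 0.6, 0.4  # 分别为"距离"与"互动优先级"分配的假设权重

def choose_target(player_pos, elements):
    """elements: [(元素类型, (x, z), 互动优先级)]，优先级数值越大越优先。
    距离越近、优先级越高，总权重（总得分）越大；返回总权重最大的元素类型。"""
    def score(elem):
        _, (x, z), prio = elem
        dist = math.hypot(x - player_pos[0], z - player_pos[1])
        return W_DIST * (1.0 / (1.0 + dist)) + W_PRIO * prio
    return max(elements, key=score)[0]

# 目标区域中同时包含防御塔和果实，且果实的互动优先级更高
elements = [("防御塔", (3.0, 0.0), 1), ("果实", (4.0, 0.0), 2)]
print(choose_target((0.0, 0.0), elements))  # 输出: 果实
```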
需要说明的是,上述对目标虚拟元素的元素类型确定方法的说明,仅是一种示例性说明,本申请实施例对目标虚拟元素的元素类型确定方法不作限定。
703、终端基于该目标虚拟元素的元素类型,切换控件显示区域所显示的操作控件。
在本申请实施例中，终端能够将该虚拟场景中控件显示区域所显示的操作控件，切换为与该目标虚拟元素的元素类型相对应的目标操作控件，其中，该目标操作控件用于控制该目标虚拟对象与该目标虚拟元素进行互动。在一些实施例中，对于该控件显示区域中的任一控件显示位置，获取该任一控件显示位置对应的候选操作控件，例如，兵键和怪物键在虚拟场景中的显示位置相同，即均显示在该控件显示区域中的任一控件显示位置，则该任一控件显示位置对应的候选操作控件为兵键和怪物键。终端能够将该目标虚拟元素的元素类型与各个候选操作控件的显示条件相匹配。终端将显示条件与该目标虚拟元素的元素类型匹配成功的候选操作控件，确定为目标操作控件。终端在该任一控件显示位置显示该目标操作控件。在一些实施例中，终端能够判断该任一控件显示位置当前显示的操作控件与该目标操作控件是否相同；若相同，则继续显示该当前显示的操作控件；若不同，则将该当前显示的操作控件切换为该目标操作控件。可选地，该显示条件由开发人员进行设置，例如，塔键的显示条件设置为当前的目标虚拟元素为防御塔，果实键的显示条件设置为当前的目标虚拟元素为果实。
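上述"将目标虚拟元素的元素类型与各候选操作控件的显示条件相匹配，且仅在目标操作控件与当前显示控件不同时才切换"的过程，可用如下Python代码草图示意。其中的候选控件及其显示条件均为假设的示例数据：

```python
# 示意性代码草图：候选控件及其显示条件均为假设的示例数据。
CANDIDATES = [
    {"name": "塔键", "condition": "防御塔"},  # 显示条件：当前目标虚拟元素为防御塔
    {"name": "果实键", "condition": "果实"},  # 显示条件：当前目标虚拟元素为果实
]

def update_slot(current: str, target_elem_type: str) -> str:
    """current: 该控件显示位置当前显示的操作控件名；
    target_elem_type: 目标虚拟元素的元素类型。
    将元素类型与各候选控件的显示条件相匹配，匹配成功者为目标操作控件；
    与当前显示控件相同则不切换，不同则切换。返回最终显示的控件名。"""
    target = next((c["name"] for c in CANDIDATES
                   if c["condition"] == target_elem_type), None)
    if target is None or target == current:
        return current  # 相同（或无匹配的候选控件）则继续显示当前控件
    return target       # 不同则切换为目标操作控件

print(update_slot("塔键", "果实"))  # 输出: 果实键
```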
具体地，结合图8和图9，以塔键和果实键的切换过程为例进行说明。图8是本申请实施例提供的一种操作控件切换过程的示意图，参见图8，响应于游戏开局，该终端能够执行获取目标虚拟对象位置信息的步骤801，基于位置信息执行判断步骤802，判断该目标虚拟对象是否在野区，若在野区，则执行确定目标虚拟元素的元素类型的步骤803，基于该目标虚拟元素执行判断步骤804，判断该目标虚拟元素是否为防御塔，若是，则执行塔键设置步骤805，若否，则执行果实键设置步骤806。若该目标虚拟对象不在野区，则执行确定目标虚拟元素的元素类型的步骤807，基于该目标虚拟元素执行判断步骤808，判断该目标虚拟元素是否为果实，若是，则执行果实键设置步骤806，若否，则执行塔键设置步骤805。也即是，不论该目标虚拟对象是否在野区，终端均需确定目标虚拟对象所对应的目标虚拟元素，基于该目标虚拟元素的元素类型，来确定显示哪种操作控件。图9是本申请实施例提供的一种控件显示区域的示意图，参见图9，控件显示区域901中的902区域即为塔键和果实键的显示区域。当该目标虚拟对象对应的目标虚拟元素为防御塔时，该902区域可以显示有塔键的图标，如图9中的(a)图所示；当该目标虚拟对象对应的目标虚拟元素为果实时，该902区域显示有果实键的图标，如图9中的(b)图所示。
本申请实施例提供的技术方案,通过获取目标虚拟对象的位置信息,由于虚拟场景中不同区域对应于不同的虚拟元素,因此,能够基于目标虚拟对象的位置,来确定其所对应的目标虚拟元素的元素类型,进而,可以基于该目标虚拟对象的位置信息,自动切换操作界面所显示的操作控件,在操作界面显示与该目标虚拟对象所在区域相匹配的操作控件,也即是,在操作界面显示与该目标虚拟对象所对应的目标虚拟元素相匹配的操作控件。应用上述方案,一方面,用户在操作过程中,无需手动切换操作控件,能够有效降低操作复杂度,提升用户体验。另一方面,无需在操作界面显示全部操作控件,仅显示与当前目标虚拟元素相关的操作控件即可,解决了因终端屏幕大小限制而无法排列大量操作控件的问题。
上述所有可选技术方案,可以采用任意结合形成本申请的可选实施例,在此不再一一赘述。
图10是本申请实施例提供的一种基于虚拟场景的操作控件显示装置的结构示意图,参见图10,该装置包括:获取模块1001、确定模块1002以及显示模块1003。
获取模块1001,用于获取目标虚拟对象在虚拟场景中的位置信息,该目标虚拟对象为终端控制的虚拟对象。
确定模块1002,用于基于该位置信息、该虚拟场景内的虚拟元素中的至少一项,确定该目标虚拟对象对应的目标虚拟元素的元素类型。
显示模块1003,用于在该虚拟场景中的控件显示区域,显示与该目标虚拟元素的元素类型相对应的目标操作控件,该目标操作控件用于控制该目标虚拟对象与该目标虚拟元素进行互动。
在一种可能实现方式中,该确定模块1002用于:
基于该位置信息,确定该位置信息所指示区域的位置索引。
基于该位置索引,从该虚拟场景对应的地图索引表中,获取该区域的区域类型,该地图索引表包括该虚拟场景中各个区域的位置索引以及该各个区域的区域类型。
将该区域类型所对应的虚拟元素的元素类型,确定为该目标虚拟元素的元素类型。
在一种可能实现方式中,该确定模块1002用于:
基于该位置信息,从该虚拟场景所包含的虚拟元素中,将与该目标虚拟对象距离最近的虚拟元素的元素类型,确定为该目标虚拟元素的元素类型。
在一种可能实现方式中,该确定模块1002用于:
以该位置信息所指示的位置为中心,在该虚拟场景中确定目标区域。
确定该目标区域中各个元素类型所包含虚拟元素的数目。
将包含虚拟元素的数目最多的元素类型,确定为该目标虚拟元素的元素类型。
在一种可能实现方式中,该确定模块1002用于:
以该位置信息所指示的位置为中心,在该虚拟场景中确定目标区域。
确定该目标区域中各个虚拟元素的互动优先级。
将互动优先级最高的虚拟元素的元素类型,确定为该目标虚拟元素的元素类型。
在一种可能实现方式中,该显示模块1003用于:
对于该控件显示区域中的任一控件显示位置,获取该任一控件显示位置对应的候选操作控件;
将该目标虚拟元素的元素类型与各个候选操作控件的显示条件相匹配;
将显示条件与该目标虚拟元素的元素类型匹配成功的候选操作控件，确定为该目标操作控件；
在该任一控件显示位置显示该目标操作控件。
在一种可能实现方式中,该显示模块1003用于:
判断该控件显示位置当前显示的操作控件与该目标操作控件是否相同。
若相同,则继续显示该当前显示的操作控件。若不同,则将该当前显示的操作控件切换为该目标操作控件。
在一种可能实现方式中,该获取模块1001用于:
响应于对该目标虚拟对象的移动操作,实时获取该目标虚拟对象的位置信息。
在一种可能实现方式中,该获取模块1001用于:
确定该位置信息所指示区域的区域类型。
响应于该区域类型改变,执行基于该位置信息、该虚拟场景内的虚拟元素中的至少一项,确定该目标虚拟对象对应的目标虚拟元素的元素类型的步骤。
本申请实施例提供的装置,通过获取目标虚拟对象在虚拟场景中的位置信息,由于虚拟场景中的不同区域会出现不同类型的虚拟元素,因此基于该位置信息、该虚拟场景内的虚拟元素中的至少一项,能够确定该目标虚拟对象在当前位置所对应的目标虚拟元素,由于目标虚拟对象在与不同类型的虚拟元素进行互动时,需应用不同的操作控件,该终端能够将该虚拟场景中控件显示区域所显示的操作控件,切换为与该目标虚拟元素的元素类型相对应的目标操作控件,用户在操作目标虚拟对象与该目标虚拟元素进行互动时,无需手动切换操作控 件,能够有效降低操作复杂度,提升用户体验。
需要说明的是:上述实施例提供的基于虚拟场景的操作控件显示装置在显示操作控件时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的基于虚拟场景的操作控件显示装置与基于虚拟场景的操作控件显示方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
上述技术方案所提供的计算机设备可以实现为终端或服务器,例如,该计算机设备包括一个或多个处理器和一个或多个存储器,该一个或多个存储器中存储有至少一条程序代码,该至少一条程序代码由该一个或多个处理器加载并执行如下操作:
获取目标虚拟对象在虚拟场景中的位置信息,目标虚拟对象为终端控制的虚拟对象;
基于位置信息、虚拟场景内的虚拟元素中的至少一项,确定目标虚拟对象对应的目标虚拟元素的元素类型;
在该虚拟场景中的控件显示区域,显示与该目标虚拟元素的元素类型相对应的目标操作控件,目标操作控件用于控制目标虚拟对象与目标虚拟元素进行互动。
在一种可能实现方式中,该至少一条程序代码由该一个或多个处理器加载并执行如下操作:基于位置信息,确定位置信息所指示区域的位置索引;
基于位置索引,从虚拟场景对应的地图索引表中,获取区域的区域类型,地图索引表包括虚拟场景中各个区域的位置索引以及各个区域的区域类型;
将区域类型所对应的虚拟元素的元素类型,确定为目标虚拟元素的元素类型。
在一种可能实现方式中,该至少一条程序代码由该一个或多个处理器加载并执行如下操作:基于位置信息,从虚拟场景所包含的虚拟元素中,将与目标虚拟对象距离最近的虚拟元素的元素类型,确定为目标虚拟元素的元素类型。
在一种可能实现方式中,该至少一条程序代码由该一个或多个处理器加载并执行如下操作:以位置信息所指示的位置为中心,在虚拟场景中确定目标区域;
确定目标区域中各个元素类型所包含虚拟元素的数目;
将包含虚拟元素的数目最多的元素类型,确定为目标虚拟元素的元素类型。
在一种可能实现方式中,该至少一条程序代码由该一个或多个处理器加载并执行如下操作:以位置信息所指示的位置为中心,在虚拟场景中确定目标区域;
确定目标区域中各个虚拟元素的互动优先级;
将互动优先级最高的虚拟元素的元素类型,确定为目标虚拟元素的元素类型。
在一种可能实现方式中,该至少一条程序代码由该一个或多个处理器加载并执行如下操作:
对于控件显示区域中的任一控件显示位置,获取任一控件显示位置对应的候选操作控件;
将目标虚拟元素的元素类型与各个候选操作控件的显示条件相匹配;
将显示条件与目标虚拟元素的元素类型匹配成功的候选操作控件，确定为目标操作控件；
在任一控件显示位置显示目标操作控件。
在一种可能实现方式中,该至少一条程序代码由该一个或多个处理器加载并执行如下操作:判断控件显示位置当前显示的操作控件与目标操作控件是否相同;
若相同,则继续显示当前显示的操作控件;若不同,则将当前显示的操作控件切换为目标操作控件。
在一种可能实现方式中,该至少一条程序代码由该一个或多个处理器加载并执行如下操作:响应于对目标虚拟对象的移动操作,实时获取目标虚拟对象的位置信息。
在一种可能实现方式中,该至少一条程序代码由该一个或多个处理器加载并执行如下操作:确定位置信息所指示区域的区域类型;
响应于区域类型改变,执行基于位置信息、虚拟场景内的虚拟元素中的至少一项,确定目标虚拟对象对应的目标虚拟元素的元素类型的步骤。
下面以计算机设备为终端为例进行说明,图11是本申请实施例提供的一种终端的结构示意图。通常,终端1100包括有:一个或多个处理器1101和一个或多个存储器1102。
处理器1101可以包括一个或多个处理核心,比如4核心处理器、8核心处理器等。处理器1101可以采用DSP(Digital Signal Processing,数字信号处理)、FPGA(Field-Programmable Gate Array,现场可编程门阵列)、PLA(Programmable Logic Array,可编程逻辑阵列)中的至少一种硬件形式来实现。
存储器1102可以包括一个或多个计算机可读存储介质,该计算机可读存储介质可以是非暂态的。存储器1102还可包括高速随机存取存储器,以及非易失性存储器,比如一个或多个磁盘存储设备、闪存存储设备。在一些实施例中,存储器1102中的非暂态的计算机可读存储介质用于存储至少一条程序代码,该至少一条程序代码用于被处理器1101所执行以实现本申请中方法实施例提供的基于虚拟场景的操作控件显示方法。
在一些实施例中,终端1100还可选包括有:外围设备接口1103和至少一个外围设备。处理器1101、存储器1102和外围设备接口1103之间可以通过总线或信号线相连。各个外围设备可以通过总线、信号线或电路板与外围设备接口1103相连。具体地,外围设备包括:射频电路1104、显示屏1105、和电源1106中的至少一种。
外围设备接口1103可被用于将I/O(Input/Output,输入/输出)相关的至少一个外围设备连接到处理器1101和存储器1102。
射频电路1104用于接收和发射RF(Radio Frequency,射频)信号,也称电磁信号。射频电路1104通过电磁信号与通信网络以及其他通信设备进行通信。射频电路1104将电信号转换为电磁信号进行发送,或者,将接收到的电磁信号转换为电信号。
显示屏1105用于显示UI(User Interface,用户界面)。该UI可以包括图形、文本、图标、视频及其它们的任意组合。当显示屏1105是触摸显示屏时,显示屏1105还具有采集在显示屏1105的表面或表面上方的触摸信号的能力。该触摸信号可以作为控制信号输入至处理器1101进行处理。此时,显示屏1105还可以用于提供虚拟按钮和/或虚拟键盘,也称软按钮和/或软键盘。在显示屏1105可以采用LCD(Liquid Crystal Display,液晶显示屏)、OLED(Organic Light-Emitting Diode,有机发光二极管)等材质制备。
电源1106用于为终端1100中的各个组件进行供电。电源1106可以是交流电、直流电、一次性电池或可充电电池。当电源1106包括可充电电池时,该可充电电池可以支持有线充电或无线充电。该可充电电池还可以用于支持快充技术。
本领域技术人员可以理解，图11中示出的结构并不构成对终端1100的限定，可以包括比图示更多或更少的组件，或者组合某些组件，或者采用不同的组件布置。
下面以计算机设备为服务器为例进行说明,图12是本申请实施例提供的一种服务器的结构示意图,该服务器1200可因配置或性能不同而产生比较大的差异,可以包括一个或多个处理器(Central Processing Units,CPU)1201和一个或多个的存储器1202,其中,该一个或多个存储器1202中存储有至少一条程序代码,该至少一条程序代码由该一个或多个处理器1201加载并执行以实现上述各个方法实施例提供的方法。当然,该服务器1200还可以具有有线或无线网络接口、键盘以及输入输出接口等部件,以便进行输入输出,该服务器1200还可以包括其他用于实现设备功能的部件,在此不做赘述。
在示例性实施例中,还提供了一种计算机可读存储介质,例如包括至少一条程序代码的存储器,上述至少一条程序代码可由处理器执行以完成上述实施例中的基于虚拟场景的操作控件显示方法。例如,该计算机可读存储介质可以是只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、只读光盘(Compact Disc Read-Only Memory,CD-ROM)、磁带、软盘和光数据存储设备等。
在一种可能实现方式中,该至少一条程序代码由该一个或多个处理器加载并执行如下操作:
获取目标虚拟对象在虚拟场景中的位置信息,目标虚拟对象为终端控制的虚拟对象;
基于位置信息、虚拟场景内的虚拟元素中的至少一项,确定目标虚拟对象对应的目标虚拟元素的元素类型;
在该虚拟场景中的控件显示区域,显示与该目标虚拟元素的元素类型相对应的目标操作控件,目标操作控件用于控制目标虚拟对象与目标虚拟元素进行互动。
在一种可能实现方式中,该至少一条程序代码由该一个或多个处理器加载并执行如下操作:基于位置信息,确定位置信息所指示区域的位置索引;
基于位置索引,从虚拟场景对应的地图索引表中,获取区域的区域类型,地图索引表包括虚拟场景中各个区域的位置索引以及各个区域的区域类型;
将区域类型所对应的虚拟元素的元素类型,确定为目标虚拟元素的元素类型。
在一种可能实现方式中,该至少一条程序代码由该一个或多个处理器加载并执行如下操作:基于位置信息,从虚拟场景所包含的虚拟元素中,将与目标虚拟对象距离最近的虚拟元素的元素类型,确定为目标虚拟元素的元素类型。
在一种可能实现方式中,该至少一条程序代码由该一个或多个处理器加载并执行如下操作:以位置信息所指示的位置为中心,在虚拟场景中确定目标区域;
确定目标区域中各个元素类型所包含虚拟元素的数目;
将包含虚拟元素的数目最多的元素类型,确定为目标虚拟元素的元素类型。
在一种可能实现方式中,该至少一条程序代码由该一个或多个处理器加载并执行如下操作:以位置信息所指示的位置为中心,在虚拟场景中确定目标区域;
确定目标区域中各个虚拟元素的互动优先级;
将互动优先级最高的虚拟元素的元素类型,确定为目标虚拟元素的元素类型。
在一种可能实现方式中，该至少一条程序代码由该一个或多个处理器加载并执行如下操作：对于控件显示区域中的任一控件显示位置，获取任一控件显示位置对应的候选操作控件；
将目标虚拟元素的元素类型与各个候选操作控件的显示条件相匹配;
将显示条件与目标虚拟元素的元素类型匹配成功的候选操作控件，确定为目标操作控件；
在任一控件显示位置显示目标操作控件。
在一种可能实现方式中,该至少一条程序代码由该一个或多个处理器加载并执行如下操作:判断控件显示位置当前显示的操作控件与目标操作控件是否相同;
若相同,则继续显示当前显示的操作控件;若不同,则将当前显示的操作控件切换为目标操作控件。
在一种可能实现方式中,该至少一条程序代码由该一个或多个处理器加载并执行如下操作:响应于对目标虚拟对象的移动操作,实时获取目标虚拟对象的位置信息。
在一种可能实现方式中,该至少一条程序代码由该一个或多个处理器加载并执行如下操作:确定位置信息所指示区域的区域类型;
响应于区域类型改变,执行基于位置信息、虚拟场景内的虚拟元素中的至少一项,确定目标虚拟对象对应的目标虚拟元素的元素类型的步骤。
在一些实施例中,还提供一种包括至少一条程序代码的计算机程序或计算机程序产品,当其在计算机设备上运行时,使得计算机设备执行前述各个实施例所提供的基于虚拟场景的操作控件显示方法,在此不作赘述。
本领域普通技术人员可以理解，实现上述实施例的全部或部分步骤可以通过硬件来完成，也可以通过程序指令相关的硬件来完成，该程序可以存储于一种计算机可读存储介质中，上述提到的存储介质可以是只读存储器、磁盘或光盘等。
上述仅为本申请的可选实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (16)

  1. 一种基于虚拟场景的操作控件显示方法,应用于计算机设备,所述方法包括:
    获取目标虚拟对象在虚拟场景中的位置信息,所述目标虚拟对象为终端控制的虚拟对象;
    基于所述位置信息、所述虚拟场景内的虚拟元素中的至少一项,确定所述目标虚拟对象对应的目标虚拟元素的元素类型;
    在所述虚拟场景中的控件显示区域,显示与所述目标虚拟元素的元素类型相对应的目标操作控件,所述目标操作控件用于控制所述目标虚拟对象与所述目标虚拟元素进行互动。
  2. 根据权利要求1所述的方法,其中,所述基于所述位置信息、所述虚拟场景内的虚拟元素中的至少一项,确定所述目标虚拟对象对应的目标虚拟元素的元素类型,包括:
    基于所述位置信息,确定所述位置信息所指示区域的位置索引;
    基于所述位置索引,从所述虚拟场景对应的地图索引表中,获取所述区域的区域类型,所述地图索引表包括所述虚拟场景中各个区域的位置索引以及所述各个区域的区域类型;
    将所述区域类型所对应的虚拟元素的元素类型,确定为所述目标虚拟元素的元素类型。
  3. 根据权利要求1所述的方法,其中,所述基于所述位置信息、所述虚拟场景内的虚拟元素中的至少一项,确定所述目标虚拟对象对应的目标虚拟元素的元素类型,包括:
    基于所述位置信息,从所述虚拟场景所包含的虚拟元素中,将与所述目标虚拟对象距离最近的虚拟元素的元素类型,确定为所述目标虚拟元素的元素类型。
  4. 根据权利要求1所述的方法,其中,所述基于所述位置信息、所述虚拟场景内的虚拟元素中的至少一项,确定所述目标虚拟对象对应的目标虚拟元素的元素类型,包括:
    以所述位置信息所指示的位置为中心,在所述虚拟场景中确定目标区域;
    确定所述目标区域中各个元素类型所包含虚拟元素的数目;
    将包含虚拟元素的数目最多的元素类型,确定为所述目标虚拟元素的元素类型。
  5. 根据权利要求1所述的方法,其中,所述基于所述位置信息、所述虚拟场景内的虚拟元素中的至少一项,确定所述目标虚拟对象对应的目标虚拟元素的元素类型,包括:
    以所述位置信息所指示的位置为中心,在所述虚拟场景中确定目标区域;
    确定所述目标区域中各个虚拟元素的互动优先级;
    将互动优先级最高的虚拟元素的元素类型,确定为所述目标虚拟元素的元素类型。
  6. 根据权利要求1所述的方法,其中,所述在所述虚拟场景中的控件显示区域,显示与所述目标虚拟元素的元素类型相对应的目标操作控件,包括:
    对于所述控件显示区域中的任一控件显示位置,获取所述任一控件显示位置对应的候选操作控件;
    将所述目标虚拟元素的元素类型与各个候选操作控件的显示条件相匹配;
    将显示条件与所述目标虚拟元素的元素类型匹配成功的候选操作控件，确定为所述目标操作控件；
    在所述任一控件显示位置显示所述目标操作控件。
  7. 根据权利要求6所述的方法，其中，所述在所述任一控件显示位置显示所述目标操作控件，包括：
    判断所述控件显示位置当前显示的操作控件与所述目标操作控件是否相同;
    若相同,则继续显示所述当前显示的操作控件;若不同,则将所述当前显示的操作控件切换为所述目标操作控件。
  8. 根据权利要求1所述的方法,其中,所述获取目标虚拟对象在虚拟场景中的位置信息,包括:
    响应于对所述目标虚拟对象的移动操作,实时获取所述目标虚拟对象的位置信息。
  9. 根据权利要求8所述的方法,其中,所述实时获取所述目标虚拟对象的位置信息之后,方法还包括:
    确定所述位置信息所指示区域的区域类型;
    响应于所述区域类型改变,执行基于所述位置信息、所述虚拟场景内的虚拟元素中的至少一项,确定所述目标虚拟对象对应的目标虚拟元素的元素类型的步骤。
  10. 一种基于虚拟场景的操作控件显示装置,所述装置包括:
    获取模块,用于获取目标虚拟对象在虚拟场景中的位置信息,所述目标虚拟对象为终端控制的虚拟对象;
    确定模块,用于基于所述位置信息、所述虚拟场景内的虚拟元素中的至少一项,确定所述目标虚拟对象对应的目标虚拟元素的元素类型;
    显示模块,用于在所述虚拟场景中的控件显示区域,显示与所述目标虚拟元素的元素类型相对应的目标操作控件,所述目标操作控件用于控制所述目标虚拟对象与所述目标虚拟元素进行互动。
  11. 根据权利要求10所述的装置,其中,所述确定模块用于:
    基于所述位置信息,确定所述位置信息所指示区域的位置索引;
    基于所述位置索引,从所述虚拟场景对应的地图索引表中,获取所述区域的区域类型,所述地图索引表包括所述虚拟场景中各个区域的位置索引以及所述各个区域的区域类型;
    将所述区域类型所对应的虚拟元素的元素类型,确定为所述目标虚拟元素的元素类型。
  12. 根据权利要求10所述的装置,其中,所述确定模块用于:
    基于所述位置信息,从所述虚拟场景所包含的虚拟元素中,将与所述目标虚拟对象距离最近的虚拟元素的元素类型,确定为所述目标虚拟元素的元素类型。
  13. 根据权利要求10所述的装置,其中,所述确定模块用于:
    以所述位置信息所指示的位置为中心,在所述虚拟场景中确定目标区域;
    确定所述目标区域中各个元素类型所包含虚拟元素的数目;
    将包含虚拟元素的数目最多的元素类型,确定为所述目标虚拟元素的元素类型。
  14. 根据权利要求10所述的装置,其中,所述确定模块用于:
    以所述位置信息所指示的位置为中心,在所述虚拟场景中确定目标区域;
    确定所述目标区域中各个虚拟元素的互动优先级;
    将互动优先级最高的虚拟元素的元素类型,确定为所述目标虚拟元素的元素类型。
  15. 一种计算机设备,所述计算机设备包括一个或多个处理器和一个或多个存储器,所述一个或多个存储器中存储有至少一条程序代码,所述至少一条程序代码由所述一个或多个处理器加载并执行以实现如权利要求1至权利要求9任一项所述的基于虚拟场景的操作控件显示方法。
  16. 一种计算机可读存储介质,所述计算机可读存储介质中存储有至少一条程序代码,所述至少一条程序代码由处理器加载并执行以实现如权利要求1至权利要求9任一项所述的基于虚拟场景的操作控件显示方法。
PCT/CN2021/079516 2020-04-23 2021-03-08 基于虚拟场景的操作控件显示方法及装置 WO2021213021A1 (zh)

Priority Applications (8)

Application Number Priority Date Filing Date Title
SG11202111645SA SG11202111645SA (en) 2020-04-23 2021-03-08 Operation control display method and apparatus based on virtual scene
JP2021565788A JP7331137B2 (ja) 2020-04-23 2021-03-08 仮想シーンに基づく操作コントロールの表示方法、装置、コンピュータ機器及びコンピュータプログラム
KR1020217035221A KR20210143309A (ko) 2020-04-23 2021-03-08 가상 장면에 기초한 조작 컨트롤 디스플레이 방법 및 장치
CA3132506A CA3132506A1 (en) 2020-04-23 2021-03-08 Operation control display method and apparatus based on virtual scene
EP21786075.8A EP3943174A4 (en) 2020-04-23 2021-03-08 VIRTUAL SCENE BASED OPERATION CONTROL DISPLAY APPARATUS AND METHOD
AU2021240131A AU2021240131B2 (en) 2020-04-23 2021-03-08 Operation control display method and apparatus based on virtual scene
US17/501,989 US20220032186A1 (en) 2020-04-23 2021-10-14 Operation control display method and apparatus based on virtual scene
JP2023130356A JP2023159212A (ja) 2020-04-23 2023-08-09 仮想シーンに基づく操作コントロールの表示方法及び装置

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010329239.9 2020-04-23
CN202010329239.9A CN111589128B (zh) 2020-04-23 2020-04-23 基于虚拟场景的操作控件显示方法及装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/501,989 Continuation US20220032186A1 (en) 2020-04-23 2021-10-14 Operation control display method and apparatus based on virtual scene

Publications (1)

Publication Number Publication Date
WO2021213021A1 true WO2021213021A1 (zh) 2021-10-28

Family

ID=72187689

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/079516 WO2021213021A1 (zh) 2020-04-23 2021-03-08 基于虚拟场景的操作控件显示方法及装置

Country Status (9)

Country Link
US (1) US20220032186A1 (zh)
EP (1) EP3943174A4 (zh)
JP (2) JP7331137B2 (zh)
KR (1) KR20210143309A (zh)
CN (1) CN111589128B (zh)
AU (1) AU2021240131B2 (zh)
CA (1) CA3132506A1 (zh)
SG (1) SG11202111645SA (zh)
WO (1) WO2021213021A1 (zh)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109513208B (zh) * 2018-11-15 2021-04-09 深圳市腾讯信息技术有限公司 对象的显示方法、装置、存储介质及电子装置
CN111589128B (zh) * 2020-04-23 2022-02-18 腾讯科技(深圳)有限公司 基于虚拟场景的操作控件显示方法及装置
CN112076468B (zh) * 2020-09-17 2022-07-22 腾讯科技(深圳)有限公司 虚拟环境画面的显示方法、装置、设备及存储介质
CN112306351B (zh) * 2020-10-30 2022-05-13 腾讯科技(深圳)有限公司 虚拟按键的位置调整方法、装置、设备及存储介质
CN112402960B (zh) * 2020-11-19 2022-11-04 腾讯科技(深圳)有限公司 虚拟场景中状态切换方法、装置、设备及存储介质
CN113144601B (zh) * 2021-05-26 2023-04-07 腾讯科技(深圳)有限公司 虚拟场景中的表情显示方法、装置、设备以及介质
CN113641443B (zh) * 2021-08-31 2023-10-24 腾讯科技(深圳)有限公司 界面元素的显示方法、装置、设备及可读存储介质
CN114415907B (zh) * 2022-01-21 2023-08-18 腾讯科技(深圳)有限公司 媒体资源显示方法、装置、设备及存储介质
CN116726495A (zh) * 2022-03-01 2023-09-12 腾讯科技(深圳)有限公司 基于虚拟环境的交互方法、装置、设备、介质及程序产品
CN117482505A (zh) * 2022-07-25 2024-02-02 腾讯科技(深圳)有限公司 虚拟场景中的交互方法、装置、设备、存储介质及产品
CN115473755A (zh) * 2022-07-29 2022-12-13 青岛海尔科技有限公司 基于数字孪生的智能设备的控制方法、装置

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180253760A1 (en) * 2017-03-02 2018-09-06 Seth TENKORANG Interactive electronic map
CN108744510A (zh) * 2018-05-25 2018-11-06 腾讯科技(深圳)有限公司 虚拟物品显示方法、装置及存储介质
US20190250714A1 (en) * 2012-12-13 2019-08-15 Eyesight Mobile Technologies, LTD. Systems and methods for triggering actions based on touch-free gesture detection
CN110413171A (zh) * 2019-08-08 2019-11-05 腾讯科技(深圳)有限公司 控制虚拟对象进行快捷操作的方法、装置、设备及介质
CN110743168A (zh) * 2019-10-21 2020-02-04 腾讯科技(深圳)有限公司 虚拟场景中的虚拟对象控制方法、计算机设备及存储介质
CN111589128A (zh) * 2020-04-23 2020-08-28 腾讯科技(深圳)有限公司 基于虚拟场景的操作控件显示方法及装置

Family Cites Families (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5044956A (en) * 1989-01-12 1991-09-03 Atari Games Corporation Control device such as a steering wheel for video vehicle simulator with realistic feedback forces
WO1995020787A1 (en) * 1994-01-27 1995-08-03 Exos, Inc. Multimode feedback display technology
US5999168A (en) * 1995-09-27 1999-12-07 Immersion Corporation Haptic accelerator for force feedback computer peripherals
US7027032B2 (en) * 1995-12-01 2006-04-11 Immersion Corporation Designing force sensations for force feedback computer applications
JP2004057634A (ja) * 2002-07-31 2004-02-26 Shuji Sonoda ゲーム装置及びそれを実現するプログラム
JP2004325828A (ja) * 2003-04-25 2004-11-18 Namco Ltd シミュレータ、プログラム及び情報記憶媒体
US8210943B1 (en) * 2006-05-06 2012-07-03 Sony Computer Entertainment America Llc Target interface
US8834245B2 (en) * 2007-08-17 2014-09-16 Nintendo Co., Ltd. System and method for lock on target tracking with free targeting capability
US8803797B2 (en) * 2008-01-18 2014-08-12 Microsoft Corporation Input through sensing of user-applied forces
JP4545809B2 (ja) 2008-06-05 2010-09-15 株式会社スクウェア・エニックス ゲーム装置及びプログラム
US8920240B2 (en) * 2010-04-19 2014-12-30 Guillemot Corporation S.A. Directional game controller
CA2720892A1 (en) * 2010-11-12 2012-05-12 The Regents Of The University Of California Enhancing cognition in the presence of distraction and/or interruption
US8814674B2 (en) * 2012-05-24 2014-08-26 Supercell Oy Graphical user interface for a gaming system
US20140237408A1 (en) * 2013-02-15 2014-08-21 Flatfrog Laboratories Ab Interpretation of pressure based gesture
EP3120223A1 (en) * 2014-03-21 2017-01-25 Immersion Corporation System, method and computer-readable medium for force-based object manipulation and haptic sensations
JP6796457B2 (ja) 2016-11-09 2020-12-09 株式会社日本マイクロニクス 電気的接続装置、プローブ及び製造方法
JP2018171261A (ja) * 2017-03-31 2018-11-08 株式会社スクウェア・エニックス ゲームプログラム及びシステム
CN107837531B (zh) * 2017-09-28 2018-11-23 网易(杭州)网络有限公司 信息处理方法、装置、电子设备及存储介质
CN109589605B (zh) * 2018-12-14 2022-08-05 网易(杭州)网络有限公司 游戏的显示控制方法和装置
US11027194B2 (en) * 2019-03-01 2021-06-08 Sony Interactive Entertainment Inc. Force feedback to improve gameplay
CN110427111B (zh) * 2019-08-01 2022-09-06 腾讯科技(深圳)有限公司 虚拟环境中虚拟道具的操作方法、装置、设备及存储介质
CN111013142B (zh) * 2019-11-19 2021-10-12 腾讯科技(深圳)有限公司 互动效果显示方法、装置、计算机设备及存储介质
CN110955370B (zh) * 2019-12-02 2021-04-20 网易(杭州)网络有限公司 游戏中技能控件的切换方法、装置以及触控终端
US11541317B2 (en) * 2020-02-06 2023-01-03 Sony Interactive Entertainment Inc. Automated weapon selection for new players using AI
US20230341574A1 (en) * 2022-01-18 2023-10-26 Terahertz Ventures Llc System and method for utilizing gravitational waves for geological exploration
US20230245175A1 (en) * 2022-02-03 2023-08-03 Solsten, Inc. Systems and methods to identify a target audience for prospective content based on a taxonomy
US20230343226A1 (en) * 2022-04-22 2023-10-26 Aeris, LLC System for traffic management of unmanned aircraft

Also Published As

Publication number Publication date
EP3943174A1 (en) 2022-01-26
SG11202111645SA (en) 2021-11-29
KR20210143309A (ko) 2021-11-26
AU2021240131B2 (en) 2023-03-23
EP3943174A4 (en) 2022-07-06
JP7331137B2 (ja) 2023-08-22
CA3132506A1 (en) 2021-10-23
CN111589128A (zh) 2020-08-28
AU2021240131A1 (en) 2021-11-11
US20220032186A1 (en) 2022-02-03
CN111589128B (zh) 2022-02-18
JP2023159212A (ja) 2023-10-31
JP2022532870A (ja) 2022-07-20

Similar Documents

Publication Publication Date Title
WO2021213021A1 (zh) 基于虚拟场景的操作控件显示方法及装置
WO2021218406A1 (zh) 虚拟对象控制方法、装置、计算机设备及存储介质
WO2021227870A1 (zh) 虚拟对象的控制方法、装置、终端及存储介质
JP2022526456A (ja) 仮想オブジェクト制御方法並びにその、装置、コンピュータ装置及びプログラム
WO2021213070A1 (zh) 虚拟角色的控制方法、装置、设备及存储介质
WO2021213073A1 (zh) 虚拟形象使用数据的处理方法、装置、设备及存储介质
WO2022121503A1 (zh) 预购道具的显示方法、装置、设备、介质及产品
CN112691366B (zh) 虚拟道具的显示方法、装置、设备及介质
KR20210151856A (ko) 가상 환경 디스플레이 방법 및 장치, 디바이스, 및 저장 매체
JP2023527846A (ja) バーチャルシーンにおけるデータ処理方法、装置、コンピュータデバイス、及びコンピュータプログラム
KR20210151845A (ko) 타깃 가상 객체를 결정하는 방법 및 장치, 단말, 및 저장 매체
JP2023164787A (ja) 仮想環境の画面表示方法、装置、機器及びコンピュータプログラム
WO2022083451A1 (zh) 虚拟对象的技能选择方法、装置、设备、介质及程序产品
CN111265876B (zh) 虚拟环境中的道具使用方法、装置、设备及存储介质
JP2023164687A (ja) 仮想オブジェクトの制御方法及び装置並びにコンピュータ装置及びプログラム
US20230285855A1 (en) Virtual scene display method and apparatus, terminal, and storage medium
US20230033902A1 (en) Virtual object control method and apparatus, device, storage medium, and program product
US20230016383A1 (en) Controlling a virtual objectbased on strength values
CN114307150B (zh) 虚拟对象之间的互动方法、装置、设备、介质及程序产品
CN114225372B (zh) 虚拟对象的控制方法、装置、终端、存储介质及程序产品
KR20200126628A (ko) 실시간 전략전투게임 제공장치 및 방법과 그 프로그램 및 기록매체
US11969654B2 (en) Method and apparatus for determining target virtual object, terminal, and storage medium
CN112843682B (zh) 数据同步方法、装置、设备及存储介质
CN116099199A (zh) 游戏技能的处理方法、装置、计算机设备及存储介质
CN117643723A (zh) 游戏交互方法、装置、计算机设备及计算机可读存储介质

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase (Ref document number: 21786075.8; Country of ref document: EP)
ENP Entry into the national phase (Ref document number: 20217035221; Country of ref document: KR; Kind code of ref document: A)
ENP Entry into the national phase (Ref document number: 2021786075; Country of ref document: EP; Effective date: 20211019)
ENP Entry into the national phase (Ref document number: 2021565788; Country of ref document: JP; Kind code of ref document: A)
ENP Entry into the national phase (Ref document number: 2021240131; Country of ref document: AU; Date of ref document: 20210308; Kind code of ref document: A)
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21786075; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)