WO2022042435A1 - Method, apparatus, device, and storage medium for displaying a virtual environment screen - Google Patents

Method, apparatus, device, and storage medium for displaying a virtual environment screen

Info

Publication number: WO2022042435A1
Authority: WIPO (PCT)
Prior art keywords: control, action, type, virtual object, virtual
Application number: PCT/CN2021/113710
Other languages: English (en), French (fr)
Inventor: 林凌云
Original Assignee: 腾讯科技(深圳)有限公司
Application filed by 腾讯科技(深圳)有限公司 filed Critical 腾讯科技(深圳)有限公司
Priority to KR1020227040140A: KR20230007392A (ko)
Priority to JP2022560942A: JP7477640B2 (ja)
Publication of WO2022042435A1 (zh)
Priority to US17/883,323: US20220379214A1 (en)
Priority to JP2024067203A: JP2024099643A (ja)


Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20: Input arrangements for video game devices
    • A63F13/22: Setup operations, e.g. calibration, key configuration or button assignment
    • A63F13/40: Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42: Processing input control signals of video game devices by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/422: Processing input control signals of video game devices by mapping the input signals into game commands automatically for the purpose of assisting the player, e.g. automatic braking in a driving game
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/53: Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/537: Controlling the output signals based on the game progress involving additional visual information provided to the game scene using indicators, e.g. showing the condition of a game character on screen
    • A63F13/80: Special adaptations for executing a specific game genre or game mode
    • A63F13/837: Shooting of targets

Definitions

  • the present application relates to the field of human-computer interaction, and in particular, to a method, apparatus, device and storage medium for displaying a virtual environment screen.
  • users can control virtual objects to perform actions such as crouching, lying down, shooting, and running.
  • multiple UI controls (User Interface controls) are distributed on the virtual environment screen according to a certain layout.
  • Each UI control is used to control the virtual object to perform an action; for example, UI control 1 controls the virtual object to perform a squatting action, and UI control 2 controls the virtual object to perform a lying-down action.
  • Embodiments of the present application provide a method, apparatus, device, and storage medium for displaying a virtual environment screen, which improve the efficiency of human-computer interaction by changing the layout of UI controls on the virtual environment screen.
  • the technical solution is as follows:
  • a method for displaying a virtual environment screen which is applied to a computer device, and the method includes:
  • displaying a virtual environment screen, where the virtual environment screen displays a first control and a second control, and the first control and the second control belong to different control types;
  • receiving a merge setting operation for the first control and the second control; and
  • merging the first control and the second control into a third control based on the merge setting operation.
  • a device for displaying a virtual environment picture comprising:
  • a display module configured to display a virtual environment picture, the virtual environment picture is displayed with a first control and a second control, and the first control and the second control belong to different control types;
  • a receiving module configured to receive a merge setting operation for the first control and the second control;
  • a processing module configured to merge the first control and the second control into a third control based on the merge setting operation.
  • a computer device comprising a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the method for displaying a virtual environment screen described above.
  • a computer-readable storage medium storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the method for displaying a virtual environment screen described in the above aspect.
  • a computer program product or computer program comprising computer instructions stored in a computer readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the method for displaying a virtual environment screen as described above.
  • the different types of controls displayed on the virtual environment screen are merged, so that the user can, through independent settings, merge less commonly used UI controls into the same UI control, or merge UI controls that need to be used in combination.
  • merging UI controls simplifies their layout on the virtual environment screen, thereby simplifying the process of the user controlling the virtual object and improving human-computer interaction efficiency.
  • FIG. 1 is a block diagram of a computer system provided by an exemplary embodiment of the present application.
  • FIG. 2 is a schematic diagram of a state synchronization technology provided by an exemplary embodiment of the present application
  • FIG. 3 is a schematic diagram of a frame synchronization technology provided by an exemplary embodiment of the present application.
  • FIG. 4 is a flowchart of a method for displaying a virtual environment screen provided by an exemplary embodiment of the present application
  • FIG. 5 is a flowchart of a method for displaying a virtual environment screen provided by another exemplary embodiment of the present application.
  • FIG. 6 is a schematic diagram of a virtual environment screen before merging controls provided by an exemplary embodiment of the present application
  • FIG. 7 is a schematic diagram of a setting interface corresponding to a merge setting operation provided by an exemplary embodiment of the present application.
  • FIG. 8 is a schematic diagram of an updated virtual environment screen provided by an exemplary embodiment of the present application.
  • FIG. 9 is a flowchart of a method for displaying a virtual environment screen provided by another exemplary embodiment of the present application.
  • FIG. 10 is a flowchart of a method for displaying a virtual environment screen provided by another exemplary embodiment of the present application.
  • FIG. 11 is a schematic diagram of a split control provided by an exemplary embodiment of the present application.
  • FIG. 12 is a schematic diagram of an updated virtual environment screen provided by another exemplary embodiment of the present application.
  • FIG. 13 is a flowchart of a method for displaying a virtual environment screen provided by another exemplary embodiment of the present application.
  • FIG. 14 is a flowchart of a control merging process provided by an exemplary embodiment of the present application.
  • FIG. 15 is a flowchart of a method for judging the state of a virtual object provided by an exemplary embodiment of the present application.
  • FIG. 16 is a block diagram of a display device for a virtual environment screen provided by an exemplary embodiment of the present application.
  • FIG. 17 is a schematic diagram of an apparatus structure of a computer device provided by an exemplary embodiment of the present application.
  • Virtual environment is the virtual environment displayed (or provided) by the application when it is run on the terminal.
  • the virtual environment may be a simulated environment of the real world, a semi-simulated and semi-fictional environment, or a purely fictional environment.
  • the virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment, which is not limited in this application.
  • the following embodiments are exemplified by the virtual environment being a three-dimensional virtual environment.
  • Virtual object refers to the movable object in the virtual environment.
  • the movable objects may be virtual characters, virtual animals, cartoon characters, etc., such as characters, animals, plants, oil barrels, walls, stones, etc. displayed in a three-dimensional virtual environment.
  • the virtual object is a three-dimensional solid model created based on animation skeleton technology.
  • Each virtual object has its own shape and volume in the three-dimensional virtual environment, and occupies a part of the space in the three-dimensional virtual environment.
  • a virtual object generally refers to one or more virtual objects in a virtual environment.
  • UI control refers to any visual control or element that can be seen on the user interface of an application, such as pictures, input boxes, text boxes, buttons, and labels. Some UI controls respond to the user's operation; for example, the user can input text in an input box. The user interacts with the user interface through the above-mentioned UI controls.
  • the method provided in this application can be applied to virtual reality (VR) applications, three-dimensional map programs, military simulation programs, first-person shooter (FPS) games, multiplayer online battle arena (MOBA) games, battle-royale shooting games, augmented reality (AR) programs, and the like.
  • the following embodiments are examples of applications in games.
  • a game based on a virtual environment consists of one or more maps of the game world.
  • the virtual environment in the game simulates a real-world scene, and the user can control a virtual object in the game to walk, run, jump, shoot, fight, drive, be attacked by other virtual objects, be injured in the virtual environment, attack other virtual objects, use disruptive throwing props, rescue teammates in the same team, and so on; the interactivity is high, and multiple users can form teams for online competition.
  • A virtual environment picture corresponding to the game is displayed on the terminal used by the user, and the virtual environment picture is obtained by observing the virtual environment from the perspective of a virtual object controlled by the user.
  • a plurality of UI controls are displayed on the virtual environment screen to form a user interface, and each UI control is used to control the virtual object to perform different actions. For example, the user triggers UI control 1 to control the virtual object to run forward.
  • FIG. 1 shows a structural block diagram of a computer system provided by an exemplary embodiment of the present application.
  • the computer system 100 includes: a first terminal 120 , a server 140 and a second terminal 160 .
  • the first terminal 120 has an application program supporting a virtual environment installed and running.
  • the first terminal 120 is a terminal used by the first user, and the first user uses the first terminal 120 to control the first virtual object located in the virtual environment to perform activities, including but not limited to: adjusting body posture, walking, running, jumping, At least one of riding, aiming, picking up, using throwing props, and attacking other virtual objects.
  • the first virtual object is a first virtual character, such as a simulated character object or an anime character object.
  • the first terminal 120 is connected to the server 140 through a wireless network or a wired network.
  • the server 140 includes at least one of a server, multiple servers, a cloud computing platform and a virtualization center.
  • the server 140 includes a processor 144 and a memory 142, and the memory 142 further includes a receiving module 1421, a control module 1422 and a sending module 1423.
  • the receiving module 1421 is used to receive a request sent by a client, such as a team-up request; the control module 1422 is used to control the rendering of the virtual environment screen; and the sending module 1423 is used to send a response to the client, such as a prompt message of successful team formation.
  • the server 140 is used to provide background services for applications supporting a three-dimensional virtual environment.
  • the server 140 undertakes the main computing work, and the first terminal 120 and the second terminal 160 undertake the secondary computing work; or, the server 140 undertakes the secondary computing work, and the first terminal 120 and the second terminal 160 undertake the main computing work; Alternatively, the server 140 , the first terminal 120 and the second terminal 160 use a distributed computing architecture to perform collaborative computing.
  • the server 140 may adopt a synchronization technology to make the display performance of multiple clients consistent.
  • the synchronization technology adopted by the server 140 includes: a state synchronization technology or a frame synchronization technology.
  • the server 140 uses state synchronization technology to synchronize among multiple clients. As shown in FIG. 2 , the battle logic runs in the server 140 . When a state of a virtual object in the virtual environment changes, the server 140 sends the state synchronization result to all clients, such as client 1 to client 10 .
  • Schematically, client 1 sends a request to the server 140, the request being used to request that virtual object 1 perform an action of attacking virtual object 2; the server 140 determines whether virtual object 1 can attack virtual object 2 and, when virtual object 1 performs the attack action, calculates the remaining life value of virtual object 2 after the attack action.
  • The server 140 synchronizes the remaining life value of virtual object 2 to all clients, and all clients update local data and interface performance according to the remaining life value of virtual object 2.
  • the server 140 uses frame synchronization technology to synchronize among multiple clients.
  • the battle logic runs in each client.
  • the client sends a frame synchronization request to the server, and the frame synchronization request carries the local data changes of the client.
  • the server 140 forwards the frame synchronization request to all clients.
  • each client receives the frame synchronization request, it processes the frame synchronization request according to the local battle logic, and updates the local data and interface performance.
  • the second terminal 160 has an application program supporting a virtual environment installed and running.
  • the second terminal 160 is a terminal used by the second user, and the second user uses the second terminal 160 to control the second virtual object located in the virtual environment to perform activities, including but not limited to: adjusting body posture, walking, running, jumping, At least one of riding, aiming, picking up, using throwing props, and attacking other virtual objects.
  • the second virtual object is a second virtual character, such as a simulated character object or an anime character object.
  • first virtual object and the second virtual object are in the same virtual environment.
  • the first virtual object and the second virtual object may belong to the same team, the same organization, or the same camp, have a friend relationship, or have temporary communication rights; alternatively, the first virtual object and the second virtual object may belong to different camps, different teams, or different organizations, or have an adversarial relationship.
  • the applications installed on the first terminal 120 and the second terminal 160 are the same, or the applications installed on the two terminals are the same type of application on different operating system platforms (Android or iOS).
  • the first terminal 120 may generally refer to one of multiple terminals
  • the second terminal 160 may generally refer to one of multiple terminals.
  • This embodiment only takes the first terminal 120 and the second terminal 160 as examples for illustration.
  • the device types of the first terminal 120 and the second terminal 160 are the same or different, and the device types include: smart phones, tablet computers, e-book readers, MP3 players, MP4 players, laptop computers and desktop computers. at least one.
  • the following embodiments take the terminal including a smart phone as an example for illustration.
  • the number of the above-mentioned terminals may be more or less.
  • the above-mentioned terminal may be only one, or the above-mentioned terminal may be dozens or hundreds, or more.
  • the embodiments of the present application do not limit the number of terminals and device types.
  • FIG. 4 shows a flowchart of a method for displaying a virtual environment screen provided by an exemplary embodiment of the present application.
  • the method can be applied to a computer device; description is given by taking as an example that the computer device is implemented as the first terminal 120 or the second terminal 160 shown in FIG. 1, or another terminal in the computer system 100.
  • the method includes the following steps:
  • Step 401 displaying a virtual environment screen on which a first control and a second control are displayed, where the first control and the second control belong to different control types.
  • the first control is used to control the first virtual object to perform the first action
  • the second control is used to control the first virtual object to perform the second action.
  • the first action and the second action belong to different types of actions; or, the first action and the second action are different actions.
  • the first control is used to control the first virtual object to perform a first performance in the virtual environment; alternatively, the first control is used to control the first virtual object to use a first prop in the virtual environment; alternatively, the first control is used to control the first virtual object to trigger a first skill; or, the first control is used to control the first virtual object to be in a first motion state.
  • correspondingly, the second action corresponding to the second control includes a second performance, use of a second prop, triggering of a second skill, a second movement state, and the like.
  • the control functions of the first control and the second control are not limited in this embodiment.
  • the terminal used by the user runs an application that supports the virtual environment, such as a first-person shooter game.
  • when the application is run, a virtual environment picture is displayed; the virtual environment picture is a picture obtained by observing the virtual environment from the perspective of the first virtual object.
  • the virtual environment displayed on the virtual environment screen includes: at least one element of mountains, flats, rivers, lakes, oceans, deserts, sky, plants, buildings, and vehicles.
  • the control type of the UI control is mainly used to indicate the type of function triggered by the UI control.
  • UI controls are displayed on the virtual environment screen.
  • the UI controls include at least one of auxiliary-type UI controls, movement-type UI controls, aiming-type UI controls, and state-switching-type UI controls.
  • Auxiliary-type UI controls are used to assist virtual objects to perform activities.
  • Schematically, auxiliary-type UI controls are used to control virtual objects to use auxiliary-type virtual props to assist in activities, or to control virtual objects to trigger auxiliary-type skills to assist in activities; for example, the mirror-opening control belongs to the auxiliary-type UI controls and is used to control the virtual object to use a scope prop to aim at a target during shooting activities.
  • Movement-type UI controls are used to control the movement of the virtual object; schematically, the directional movement control belongs to the movement-type UI controls.
  • Aiming-type UI controls are the UI controls corresponding to the virtual object's use of virtual props; schematically, the aiming-type UI control corresponds to the virtual object using an attacking prop, and the shooting control belongs to the aiming-type UI controls: when the control is triggered, the virtual object shoots at the target.
  • State-switching-type UI controls are used to switch the posture of the virtual object in the virtual environment; schematically, the squat control belongs to the state-switching-type UI controls: when it is triggered, the virtual object switches from the standing state, or from another posture, to the squatting state.
  • Step 402 Receive a merge setting operation for the first control and the second control.
  • When the terminal is a device with a touch display screen, such as a smart phone or a tablet computer, the user implements the merge setting operation by triggering a UI control corresponding to the merge setting operation, or performs a gesture operation corresponding to the merge setting operation on the touch display screen, such as at least one of a single-click operation, a long-press operation, a double-click operation (including at least one of a single-finger double-click operation and a multi-finger double-click operation), a hover operation, a drag operation, and combinations thereof.
  • the merge setting operation can also be performed through an external input device.
  • the terminal is implemented as a notebook computer connected with a mouse, and the user moves the mouse pointer to the UI control corresponding to the merge setting operation, and performs the merge setting operation by clicking the mouse.
  • the user can also perform the merge setting operation by pressing the keyboard keys and clicking the mouse.
  • a UI control corresponding to the merge setting operation is displayed separately on the virtual environment screen, and the UI control is named as a merge UI control; in other embodiments, the virtual environment screen includes a setting page for setting the game application, The setting page includes UI controls corresponding to the merge setting operation.
  • Step 403 in response to the merge setting operation, merge the first control and the second control into a third control.
  • the third control is used to control the first virtual object to perform a first action (corresponding to the first control) and a second action (corresponding to the second control).
  • the third control is used to control the first virtual object to perform a third action independent of the first action and the second action.
  • Merging refers to synthesizing at least two controls into one control, and only the merged control is displayed on the virtual environment screen, and the merged control has the functions of the control before the merge.
  • the third control is displayed on the virtual environment screen, and the display of the first control and the second control is canceled.
  • the third control has both the function corresponding to the first control and the function corresponding to the second control, and the user can control the first virtual object to perform the first action and the second action by triggering the third control.
  • Schematically, when the user clicks the third control, the first virtual object performs the first action, and when the user long-presses the third control, the first virtual object performs the second action.
  • Alternatively, when the user clicks the third control, the first virtual object performs the first action; when the user clicks the third control again while the first action is in progress, the game application determines that the first virtual object is performing the first action and controls the first virtual object to perform the second action while performing the first action; or, when the user clicks the third control again after the first action has been performed, the game application controls the first virtual object to perform the second action.
  • the UI control corresponds to a control identifier in the application; when the merge setting operation is received, the control identifiers corresponding to the first control and the second control are determined, and the control identifier corresponding to the third control is determined based on them; when the interface is rendered, the third control is rendered according to its control identifier, and the rendering of the first control and the second control is canceled.
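  • The identifier-based merge described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; all type and function names (Control, mergeControls, renderControls) are assumptions for the example.

```typescript
// A minimal sketch of the identifier-based merge described above; the types
// and function names are illustrative assumptions, not the patent's implementation.
interface Control {
  id: string;                 // control identifier used at interface-rendering time
  type: "auxiliary" | "movement" | "aiming" | "stateSwitch" | "merged";
  actions: string[];          // actions this control can trigger
}

function mergeControls(first: Control, second: Control): Control {
  // Derive the third control's identifier from the source identifiers; an
  // implementation could instead reuse first.id, as in the example above.
  return {
    id: `${first.id}+${second.id}`,
    type: "merged",
    actions: [...first.actions, ...second.actions],
  };
}

function renderControls(all: Control[], first: Control, second: Control): Control[] {
  const third = mergeControls(first, second);
  // Cancel the rendering of the first and second controls and render the third instead.
  return [...all.filter(c => c.id !== first.id && c.id !== second.id), third];
}
```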
  • the user performs a merge setting operation to merge at least two controls.
  • the method provided in this embodiment merges different types of controls displayed on the virtual environment screen through the received merge setting operation, so that the user can, through independent settings, merge less commonly used UI controls into the same UI control, or merge UI controls that need to be used in combination into one UI control.
  • by changing the layout of UI controls on the virtual environment screen, this simplifies the process of users controlling virtual objects and improves the efficiency of human-computer interaction.
  • At least two controls are merged by performing a merge setting operation on the virtual environment screen, or one control is split into at least two controls by performing a split setting operation on the virtual environment screen.
  • The process of merging controls and the process of splitting controls will be described below in combination with the user interface in the game application.
  • FIG. 5 shows a flowchart of a method for displaying a virtual environment screen provided by another exemplary embodiment of the present application.
  • the method can be applied to a computer device implemented as the first terminal 120 or the second terminal 160 as shown in FIG. 1 , or other terminals in the computer system 100 .
  • the method includes the following steps:
  • Step 501 displaying a virtual environment picture, where a first control and a second control are displayed on the virtual environment picture, and the first control and the second control belong to different control types.
  • the first control is used to control the first virtual object to perform the first action
  • the second control is used to control the first virtual object to perform the second action.
  • a first control 11 and a second control 12 are displayed on the virtual environment screen.
  • the first control 11 is a squat control
  • the second control 12 is an aiming control
  • the first control 11 belongs to a state-switching type control, and the second control 12 belongs to an aiming type control.
  • the first control 11 and the second control 12 are located on the right side of the virtual environment screen.
  • Step 502 Receive a merge setting operation for the first control and the second control.
  • the merge setting operation is implemented as dragging the controls that need to be merged to the same place.
  • the user drags the first control 11 to the second control 12 , or the user drags the second control 12 to the first control 11 .
  • the merge setting operation is an operation enabled by the user in the setting interface; when the user sets the control 20 corresponding to the merge setting operation to the on state, the first control 11 and the second control 12 can be merged in the game application.
  • Step 503 Obtain a first control type of the first control and a second control type of the second control.
  • the game application acquires the control types of the first control and the second control according to the control selected by the user when dragging.
  • the control type of the controls to be merged is acquired according to the user's operation on the setting interface.
  • the user's operation on the setting interface is used to merge all controls of the same type, or to merge controls of preset types (for example, merging shooting-type controls with movement-type controls), or to merge preset controls (for example, merging the squat control and the lying-down control).
  • Step 504 in response to the first control type and the second control type satisfying the preset condition, merge the first control and the second control into a third control.
  • the preset conditions include at least one of the following conditions:
  • the first control type is an auxiliary type and the second control type is an aiming type; schematically, the first control is a mirror-opening control (a control for opening the scope of a firearm-type virtual prop) and the second control is a shooting control;
  • the first control type is an auxiliary type and the second control type is a movement type; schematically, the first control is a mirror-opening control and the second control is a directional movement control (including a move-left control, a move-right control, a move-forward control, and a move-back control);
  • the first control type is a movement type and the second control type is an aiming type; schematically, the first control is a directional movement control and the second control is a shooting control, or the first control is a directional movement control and the second control is a throwing control (a control for using throwing virtual props);
  • the first control type is a movement type and the second control type is a state-switching type; schematically, the first control is a directional movement control and the second control is a squat control;
  • the first control type is a state-switching type and the second control type is an auxiliary type; schematically, the first control is a lying-down control (a control for controlling the virtual object to lie down) and the second control is a mirror-opening control;
  • the first control type is a state-switching type and the second control type is an aiming type; schematically, the first control is a crouch control and the second control is a shooting control.
  • when the first control type and the second control type satisfy any one of the preset conditions, the game application merges the first control and the second control.
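  • The preset conditions above amount to a whitelist of mergeable type pairs. A minimal sketch, assuming the type names and the pair table are as in the examples above (the function name is an assumption):

```typescript
// Hypothetical whitelist check for the preset conditions listed above; the
// type names mirror the text, the function name is an assumption.
type ControlType = "auxiliary" | "movement" | "aiming" | "stateSwitch";

const MERGEABLE_PAIRS: [ControlType, ControlType][] = [
  ["auxiliary", "aiming"],      // e.g. mirror-opening control + shooting control
  ["auxiliary", "movement"],    // e.g. mirror-opening control + directional movement control
  ["movement", "aiming"],       // e.g. directional movement control + shooting/throwing control
  ["movement", "stateSwitch"],  // e.g. directional movement control + squat control
  ["stateSwitch", "auxiliary"], // e.g. lying-down control + mirror-opening control
  ["stateSwitch", "aiming"],    // e.g. crouch control + shooting control
];

function satisfiesPresetCondition(first: ControlType, second: ControlType): boolean {
  return MERGEABLE_PAIRS.some(([a, b]) => a === first && b === second);
}
```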
  • Step 505 updating and displaying a third control for replacing the first control and the second control.
  • the third control is updated and displayed on the virtual environment screen, and the updated virtual environment screen does not include the first control and the second control.
  • when the user performs the merge setting operation, the game application identifies the user account corresponding to the merge setting operation, and updates the virtual environment screen corresponding to that user account.
  • a third control 13 is displayed on the updated virtual environment screen.
  • Schematically, the control identifier of the first control 11 shown in FIG. 6 being used as the control identifier of the updated third control 13 is taken as an example; alternatively, the control identifier of the second control is used as the control identifier of the third control; or a new control identifier is generated for the third control, which is different from the control identifiers of the first control and the second control.
  • the action performed by the first virtual object is related to the operation received by the third control.
  • Step 507a in response to the first operation on the third control, controlling the first virtual object to perform the first action.
  • the first operation includes at least one of a single-click operation, a long-press operation, a sliding operation, a hovering operation, a drag operation, a double-click operation (including at least one of a single-finger double-click and a multi-finger double-click), and their combined operations .
  • Schematically, in response to receiving a long-press operation on the third control, the first virtual object is controlled to perform a running action.
  • Step 508a in response to the second operation on the third control, controlling the first virtual object to perform the second action.
  • the second operation includes at least one of single-click operation, long-press operation, slide operation, hover operation, drag operation, double-click operation (including at least one of single-finger double-click and multi-finger double-click), and their combined operations .
  • the first operation is different from the second operation.
  • Schematically, in response to receiving a double-click operation on the third control, the first virtual object is controlled to perform a mirror-opening action.
  • the third control generates a control instruction according to the received operation type, and controls the first virtual object to perform different actions.
  • It can be understood that step 507a may be performed prior to step 508a, or step 508a may be performed prior to step 507a.
  • Schematically, while the first virtual object performs action a, the third control receives an execution operation corresponding to action b, and the first virtual object performs action b while performing action a.
  • For example, in response to receiving a double-click operation on the third control, the first virtual object performs a running action; in response to then receiving a long-press operation on the third control, the first virtual object performs a mirror-opening action while running.
  • the first virtual object simultaneously executes the action corresponding to the third control.
  • Step 507b in response to the third operation on the third control, controlling the first virtual object to perform the first action and the second action simultaneously.
  • the third operation includes at least one of single-click operation, long-press operation, slide operation, hover operation, drag operation, double-click operation (including at least one of single-finger double-click and multi-finger double-click) and their combined operations .
  • Schematically, in response to the third control receiving a drag operation, the game application controls the first virtual object to perform a running action and a reloading action at the same time.
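  • Taken together, the branches above map the operation type received by the merged control to the first action, the second action, or both. A hedged sketch follows; the gesture-to-action bindings are only the examples from the text, and the function name is an assumption:

```typescript
// Sketch of dispatching on the operation type received by the merged control.
// The gesture-to-action bindings below are only the examples from the text.
type Operation = "click" | "longPress" | "doubleClick" | "drag";

function onThirdControlOperation(op: Operation, perform: (action: string) => void): void {
  switch (op) {
    case "longPress":   // first operation: perform the first action, e.g. run
      perform("run");
      break;
    case "doubleClick": // second operation: perform the second action, e.g. open the scope
      perform("openScope");
      break;
    case "drag":        // third operation: perform both actions simultaneously
      perform("run");
      perform("reload");
      break;
    default:
      break;
  }
}
```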
  • the first virtual object executes the action according to the priority of the action.
  • Step 507c in response to the fourth operation on the third control, obtain the priorities of the first action and the second action.
  • the fourth operation includes at least one of single-click operation, long-press operation, slide operation, hover operation, drag operation, double-click operation (including at least one of single-finger double-click and multi-finger double-click) and their combined operations .
  • Schematically, in response to receiving a long-press operation on the third control, the game application obtains the priorities of the actions corresponding to the third control.
  • Schematically, the priority ordering is: running action > shooting action (or throwing action) > squatting action (or lying-down action) > mirror-opening action (or reloading action).
  • Step 508c controlling the first virtual object to perform the first action and the second action in a preset order based on the priority.
  • Schematically, when the second action has a higher priority than the first action, the game application controls the first virtual object to perform the second action first and then the first action.
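  • The priority rule could look like the following sketch; the ordering follows the example above (run > shoot/throw > squat/lie down > open scope/reload), and the numeric weights and names are assumptions:

```typescript
// Priority-ordered execution; the ordering follows the example above and the
// numeric weights are assumptions.
const ACTION_PRIORITY: Record<string, number> = {
  run: 4,
  shoot: 3, throw: 3,
  squat: 2, lieDown: 2,
  openScope: 1, reload: 1,
};

function performByPriority(actions: string[], perform: (action: string) => void): void {
  // Higher-priority actions are performed first.
  [...actions]
    .sort((a, b) => (ACTION_PRIORITY[b] ?? 0) - (ACTION_PRIORITY[a] ?? 0))
    .forEach(perform);
}

// e.g. performByPriority(["openScope", "run"], act) runs first, then opens the scope.
```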
  • the method provided in this embodiment merges different types of controls displayed on the virtual environment screen through the received merge setting operation, so that the user can, through independent settings, merge less commonly used UI controls into the same UI control, or merge UI controls that need to be used in combination into one UI control.
  • by changing the layout of UI controls on the virtual environment screen, this simplifies the process of users controlling virtual objects and improves the efficiency of human-computer interaction.
  • by merging the first control and the second control into a third control, the user can combine different types of controls through the merge setting operation, so that the layout of UI controls on the virtual environment screen is more flexible.
  • the user can determine the types of UI controls that can be merged, and the UI controls can be flexibly merged.
  • the virtual environment screen is updated and displayed, and the updated virtual environment screen displays the merged third control, and the user can control the virtual object more intuitively through the updated virtual environment screen.
  • by controlling the virtual object to perform different actions according to different rules, the ways of controlling the virtual object become more flexible and diverse, which helps users to set a UI control layout that suits their own preferences or usage habits.
  • the process of splitting controls includes the following three situations, as shown in Figure 10:
  • the split controls belong to different types of controls.
  • the third control is split into the first control and the second control.
  • Step 511a in response to the first split setting operation for the third control, obtain the action type corresponding to the third control.
  • the split setting operation may be opposite to the implementation of the merge setting operation; schematically, if the merge setting operation is a leftward drag operation, the split setting operation is a rightward drag operation; if the merge setting operation is to drag the first control onto the second control to form the third control, the split setting operation starts from the third control 13, and dragging outward from the third control 13 produces the first control 11 or the second control 12 (the arrow indicates the drag direction, as shown in FIG. 11).
  • the action type corresponds to the control type of the control.
  • the control type of control 1 is an auxiliary type, and the virtual object performs an auxiliary type action when control 1 is triggered.
  • for example, control 1 is a mirror-opening control; when it is triggered, the virtual object performs the mirror-opening action, and the action type of the mirror-opening action is the auxiliary type.
  • since the third control is a merged control, the third control has the functions of at least two controls.
  • the game application acquires an action list corresponding to the third control, where the action list is used to provide the control composition of the third control.
  • the game application or the background server establishes an action list corresponding to the merged control, and binds the action list to the merged control.
  • the control identifier of the third control and the control identifiers of the at least two controls having a split relationship with the third control are stored in the action list, so that when the third control needs to be split, the rendering of the third control is canceled during the interface rendering process and replaced by the rendering of the at least two controls whose identifiers correspond to it in the list.
  • Step 512a splitting the third control into a first control and a second control based on the action type.
  • the action list corresponding to the third control includes a squatting action and a mirror-opening action, and the third control is split into a squatting control and a mirror-opening control.
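  • The action-list mechanism for splitting can be sketched as follows; the field and function names are illustrative assumptions, not from the patent:

```typescript
// Sketch of the action-list-based split: the merged control keeps the control
// identifiers of its source controls so the renderer can swap them back in.
// All field and function names are illustrative.
interface Control { id: string; action: string; }

interface MergedControl {
  id: string;
  // action list binding each action to the identifier of its source control
  actionList: { action: string; sourceControlId: string }[];
}

function splitControl(merged: MergedControl, registry: Map<string, Control>): Control[] {
  // Cancel the rendering of the merged control and render the controls whose
  // identifiers are stored in its action list instead.
  return merged.actionList
    .map(entry => registry.get(entry.sourceControlId))
    .filter((c): c is Control => c !== undefined);
}
```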
  • the split controls belong to the same type of controls.
  • the third control is split into a fourth control and a fifth control, where the fourth control and the fifth control belong to the same type of control.
  • Step 511b in response to the second split setting operation for the third control, obtain an association relationship between at least two actions corresponding to the third control.
  • the association relationship refers to the hierarchical relationship between actions performed by the virtual object when the third control is triggered.
  • the action corresponding to the third control is a posture switching action, and there is a hierarchical relationship between each posture switching action.
  • Schematically, when the third control is triggered, the virtual object is controlled to perform a full squat action (knees bent, legs close to the hips) and a half squat action (on one knee), and the hierarchical relationship between the full squat action and the half squat action is obtained.
  • Step 512b splitting the third control into a fourth control and a fifth control based on the association relationship.
  • Schematically, based on the association relationship, the third control is split into a full squat control and a half squat control; in another example, the game application splits the third control into a squat control and a prone control according to the association relationship; in yet another example, the game application splits the third control into a control for opening a high-magnification scope and a control for opening a low-magnification scope according to the association relationship.
  • the virtual environment picture is a picture obtained by observing the virtual environment from the perspective of the first virtual object.
  • a camera model is bound to the first virtual object, and the virtual environment is photographed through the camera model to obtain the virtual environment picture.
  • the third control is split based on the number of virtual objects in the virtual environment screen.
  • the third control is split based on the number of the second virtual objects.
  • Step 511c in response to the viewing angle of the first virtual object including at least two second virtual objects, obtain the number of the second virtual objects.
  • the first virtual object and the second virtual object are in the same virtual environment, and the first virtual object may see the second virtual object in the virtual environment during its activities.
  • in some embodiments, the second virtual object has an adversarial relationship with the first virtual object.
  • the viewing angle of the first virtual object includes three second virtual objects, and the game application binds the number of the second virtual objects to the virtual environment picture observed by the first virtual object.
  • Step 512c splitting the third control based on the number of the second virtual objects.
  • Schematically, according to the three second virtual objects, the third control is divided into three controls: control 1, control 2 and control 3, where control 1 is used to attack virtual object 1, control 2 is used to attack virtual object 2, and control 3 is used to attack virtual object 3.
  • in some embodiments, in response to the second virtual object using a virtual prop, the third control is split based on the second virtual object and the virtual prop used by the second virtual object.
  • Schematically, the virtual prop used by the second virtual object is a shield (used to reduce damage to the virtual object); the third control is split into two controls, one used to destroy the virtual prop (the shield) used by the second virtual object and the other used to attack the second virtual object.
  • In some embodiments, the control for destroying the virtual prop must be triggered for longer than the control for attacking the second virtual object.
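  • A sketch of the per-target split described in this case, including the longer trigger required to destroy a shield; all names and durations are assumptions for illustration:

```typescript
// Sketch of splitting an attack control per visible enemy; the shield rule
// (a longer trigger to destroy the prop) follows the example above, and the
// names and durations are assumptions.
interface EnemyView { id: string; hasShield: boolean; }
interface AttackControl { label: string; targetId: string; requiredPressMs: number; }

function splitByTargets(enemies: EnemyView[]): AttackControl[] {
  return enemies.flatMap(enemy =>
    enemy.hasShield
      ? [
          // destroying the prop requires a longer trigger than a plain attack
          { label: `destroy shield of ${enemy.id}`, targetId: enemy.id, requiredPressMs: 800 },
          { label: `attack ${enemy.id}`, targetId: enemy.id, requiredPressMs: 300 },
        ]
      : [{ label: `attack ${enemy.id}`, targetId: enemy.id, requiredPressMs: 300 }]
  );
}
```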
  • Step 513 update and display at least two split controls that are used to replace the third control after the third control is split.
  • At least two split controls corresponding to the split third control are updated and displayed, and the updated virtual environment picture does not include the third control.
  • when the user performs the split setting operation, the game application identifies the user account corresponding to the split setting operation, and updates the virtual environment screen corresponding to that user account.
  • the split fourth control 14 and the fifth control 15 are displayed on the updated virtual environment screen.
  • the fourth control 14 (full squat control) and the fifth control 15 (half squat control) are obtained by splitting the squat control 13.
  • the squatting control 13 is not displayed on the updated virtual environment screen (the figure is for illustration only).
  • in some embodiments, the control identifiers of the fourth control 14 and the fifth control 15 are newly generated control identifiers, and these two control identifiers are different from the control identifier of the third control 13; or the control identifier of the third control 13 is used as the control identifier of the fourth control 14, and the control identifier of the fifth control 15 is newly generated; or the control identifier of the third control 13 is used as the control identifier of the fifth control 15, and the control identifier of the fourth control 14 is newly generated.
  • the first virtual object performs at least two actions simultaneously.
  • Step 516a in response to the first operation on the first control, controlling the first virtual object to perform the first action.
  • the control type of the first control is different from the control type of the second control.
  • Step 517a in response to the second operation on the second control, controlling the first virtual object to perform the second action while performing the first action.
  • Schematically, the game application controls the first virtual object to throw virtual props during the squatting process; the user can control the first virtual object to accurately hit the target according to the squatting angle.
  • the first virtual object performs at least two actions in sequence.
  • Step 516b in response to the third operation on the fourth control, controlling the first virtual object to perform a fourth action.
  • Schematically, the fourth control is a lying-down control, and the game application controls the first virtual object to perform a lying-down action.
  • Step 517b in response to the fourth operation on the fifth control, controlling the first virtual object to perform a fifth action, where the fourth action and the fifth action have an associated relationship.
  • association relationship means that there is a certain hierarchical relationship between actions, and they belong to the same type of action.
  • Schematically, the fifth control is a crawl control (for controlling the first virtual object to crawl forward); in response to receiving a double-click operation on the crawl control, the game application controls the first virtual object to crawl in the virtual environment. It can be understood that step 516b may be performed prior to step 517b, or step 517b may be performed prior to step 516b.
  • To sum up, in the method provided in this embodiment, the third control is split into controls of the same action type or of different types, so that the user can split the controls in a targeted manner according to different battle situations; while conforming to the user's operating habits, this ensures that the virtual object controlled by the user improves battle efficiency.
  • the split controls can realize functions corresponding to different types of actions, and it is convenient for users to split the third control according to different functions.
  • by splitting the third control according to the association relationship between the at least two actions corresponding to the third control when it is triggered, the split controls can realize similar actions of different levels, which is convenient for users to perform actions of different levels according to different combat situations and is conducive to improving the combat efficiency of virtual objects.
  • the split control can attack different virtual objects, which is beneficial to improve the combat efficiency of the virtual objects.
  • the virtual environment screen is updated and displayed, the updated virtual environment screen displays the split controls, and the user can control the virtual object more intuitively through the updated virtual environment screen.
  • the virtual object can be controlled to perform different operations, which makes the ways of controlling virtual objects more flexible and diverse, and helps users to set a UI control layout that suits their own preferences or usage habits.
  • control merging includes the following steps, as shown in Figure 14:
  • Step 1402 whether the user performs a merge setting operation.
  • When the user performs the merge setting operation, go to step 1403a; when the user does not perform the merge setting operation, go to step 1403b.
  • Step 1403a enable the operation of long-pressing the squat control to trigger the lying-down action when the press duration exceeds the time threshold.
  • the game application merges the first control and the second control to obtain the third control, where the first control is the crouching (squat) control and the second control is the lying-down control.
  • the control identifier of the first control (the crouching control) is used as the control identifier of the third control, and the control identifier of the second control (the lying-down control) is hidden.
  • Schematically, the time threshold is 0.5 seconds, and the trigger operation received on the third control exceeds 0.5 seconds.
  • Step 1404a hide the lying-down control.
  • the execution condition of the lying-down action is satisfied, yet the original second control (the lying-down control) is not displayed on the virtual environment screen.
  • Step 1405a enable the prone state prompt of the squat control.
  • the merged third control supports both the squatting action and the lying-down action.
  • Schematically, the user is prompted in the form of a highlighted squat control, and the squat control at this time is used to control the first virtual object to perform the squat action or the lying-down action.
  • Step 1403b disable the operation of long-pressing the squat control to trigger the lying-down action.
  • the first control is the crouching (squat) control, and the second control is the lying-down control.
  • Step 1404b display the lie down control.
  • Step 1405b turning off the prone state prompt of the squatting control.
  • the squatting control is used to control the first virtual object to perform the squatting action, and the lying-down control is used to control the first virtual object to perform the lying-down action.
  • the functions of the two controls are not combined.
  • steps 1402 to 1405a can be performed in a loop until the end of the game.
  • the game application determines whether the user's operation is to control the first virtual object to perform a squatting action or a lying-down action, which includes the following steps, as shown in FIG. 15:
  • Step 1501 start.
  • Step 1502 whether to trigger the merged control.
  • the game application merges the first control and the second control to obtain the third control.
  • when the user triggers the third control, go to step 1503a; when the user does not trigger the third control, go to step 1503b.
  • Step 1503a the third control is triggered, and the game application records the initial time and the duration Δt corresponding to the triggering process.
  • Step 1503b the virtual environment picture does not change.
  • the third control is not triggered, and the virtual environment screen does not change.
  • Step 1505 whether the user has let go.
  • the game application determines whether the user has let go; if the user has let go, go to step 1506; if the user has not let go, go back to step 1504 (continue to calculate the duration).
  • Step 1506 determine whether Δt > 0.5 holds.
  • the game application program determines whether the trigger duration of the third control is greater than the time threshold, if the trigger duration of the third control is greater than the time threshold, go to step 1507a; if the trigger duration of the third control is not greater than the time threshold, go to step 1507b.
  • Step 1507a trigger the squat action.
  • the duration for which the third control is triggered is greater than the time threshold, and the game application controls the first virtual object to perform the squatting action.
  • Step 1507b triggering the lying down action.
  • the duration for which the third control is triggered is not greater than the time threshold, and the game application controls the first virtual object to perform the lying-down action.
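  • The duration-based branching of FIG. 15 can be sketched as follows, using the 0.5-second threshold from the text and the branch-to-action mapping as the step labels give it; the class and method names are assumptions:

```typescript
// Duration-based branching from FIG. 15, with the 0.5-second threshold from the
// text; the branch-to-action mapping follows the step labels as given, and the
// class and method names are assumptions.
const TIME_THRESHOLD_MS = 500;

class MergedSquatControl {
  private pressStart = 0;

  onPressDown(nowMs: number): void {
    this.pressStart = nowMs; // record the initial time of the trigger (step 1503a)
  }

  onPressUp(nowMs: number, perform: (action: "squat" | "lieDown") => void): void {
    const deltaT = nowMs - this.pressStart;
    // Δt > 0.5 s: step 1507a (squat); otherwise: step 1507b (lie down).
    perform(deltaT > TIME_THRESHOLD_MS ? "squat" : "lieDown");
  }
}
```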
  • the current option setting will be recorded in the memory, where the primary key value is SettingKeys.HideProneBtn, and the value content is the status of the current option (true or false).
  • This part of the information will be stored in the local data of the computer device corresponding to the user.
  • When the user logs in to the game again on that computer device or starts a new round, the value of this option is read from the local data to keep the user's operation settings consistent; a persistence sketch follows below.
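Below is a sketch of this persistence step. The key name SettingKeys.HideProneBtn and the true/false value come from the text; using localStorage as the device-local store is an assumption for illustration.

```typescript
// A sketch of persisting the option described above. The key name and the
// boolean value come from the text; the storage API is an assumption.
const SettingKeys = { HideProneBtn: "SettingKeys.HideProneBtn" } as const;

// Called each time the user modifies the option.
function saveHideProneBtn(enabled: boolean): void {
  localStorage.setItem(SettingKeys.HideProneBtn, String(enabled));
}

// Called on re-login or when a new round starts, to keep settings consistent.
function loadHideProneBtn(): boolean {
  return localStorage.getItem(SettingKeys.HideProneBtn) === "true";
}
```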
  • In one game round, steps 1502 to 1507a (or 1507b) can be performed in a loop until the round ends.
  • the foregoing embodiments describe the foregoing method based on an application scenario of a game, and the foregoing method is exemplarily described below with an application scenario of military simulation.
  • Simulation technology is a model technology that reflects system behavior or process by simulating real-world experiments using software and hardware.
  • A military simulation program is a program specially constructed for military applications using simulation technology, which quantitatively analyzes combat elements such as sea, land, and air forces, the performance of weapons and equipment, and combat operations, and thereby accurately simulates the battlefield environment, presents the battlefield situation, and provides evaluation of the combat system and decision-making assistance.
  • soldiers build a virtual battlefield at the terminal where the military simulation program is located, and compete in teams.
  • Soldiers control virtual objects in the battlefield virtual environment to perform at least one of actions such as standing, squatting, sitting, lying supine, lying prone, lying on the side, walking, running, climbing, driving, shooting, throwing, attacking, being wounded, reconnaissance, and close combat.
  • The battlefield virtual environment includes at least one natural form among plains, mountains, plateaus, basins, deserts, rivers, lakes, oceans, and vegetation, as well as site forms such as buildings, vehicles, ruins, and training grounds.
  • the virtual objects include: virtual characters, virtual animals, cartoon characters, etc. Each virtual object has its own shape and volume in the three-dimensional virtual environment, and occupies a part of the space in the three-dimensional virtual environment.
  • soldier A controls virtual object a
  • soldier B controls virtual object b
  • soldier C controls virtual object c
  • soldier A and soldier B are soldiers in the same team
  • soldier C does not belong to the same team as soldier A and soldier B.
  • When virtual object a and virtual object b are seen from the perspective of virtual object c, soldier C sets up a split operation in the military simulation program and splits the shooting control into two shooting controls, namely shooting control 1 and shooting control 2.
  • Shooting control 1 is used to attack virtual object 1, and shooting control 2 is used to attack virtual object 2; when shooting control 1 and shooting control 2 are triggered, the required press duration of shooting control 1 is longer than that of shooting control 2 (virtual object 1 wears a protective virtual prop, so it takes longer to attack).
  • When the above display method for the virtual environment picture is applied in a military simulation program, soldiers merge or split controls in combination with the tactical layout, so that the control layout better matches their own usage habits; this improves the soldiers' human-computer interaction efficiency and simulates the actual combat scene more realistically, so that the soldiers are better trained. A sketch of such a per-target split follows below.
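The following is a hypothetical sketch of the per-target split just described: one shooting control per visible target, with a longer required fire duration for a target wearing a protective prop. The Target and ShootingControl types and the duration values are assumptions; the text specifies only that the protected target requires a longer duration.

```typescript
// A hypothetical sketch of the per-target split: one shooting control per
// visible target, with a longer required fire duration for a protected
// target. Types and duration values are illustrative assumptions.
interface Target {
  id: string;
  hasProtectiveProp: boolean;
}

interface ShootingControl {
  label: string;
  targetId: string;
  requiredHoldSeconds: number;
}

function splitShootingControl(targets: Target[]): ShootingControl[] {
  return targets.map((target, i) => ({
    label: `shooting control ${i + 1}`,
    targetId: target.id,
    // A protected target takes longer to attack (illustrative values).
    requiredHoldSeconds: target.hasProtectiveProp ? 1.0 : 0.5,
  }));
}
```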
  • FIG. 16 shows a schematic structural diagram of a display device for a virtual environment screen provided by an exemplary embodiment of the present application.
  • the device can be implemented as all or a part of the terminal through software, hardware or a combination of the two, and the device includes:
  • the display module 1610 is used to display the virtual environment picture and the first control and the second control, the first control and the second control belong to different control types;
  • a receiving module 1620 configured to receive a merge setting operation for the first control and the second control
  • the processing module 1630 is configured to merge the first control and the second control into a third control in response to the merge setting operation.
  • the apparatus includes an obtaining module 1640;
  • the obtaining module 1640 is configured to obtain the first control type of the first control and the second control type of the second control in response to the merge setting operation; the processing module 1630 is configured to merge the first control and the second control into the third control in response to the first control type and the second control type satisfying the preset condition.
  • the preset condition includes at least one of the following conditions (a sketch of this type-pair check follows the list):
  • the first control type is an auxiliary type, and the second control type is an aiming type;
  • the first control type is an auxiliary type, and the second control type is a movement type;
  • the first control type is a movement type, and the second control type is an aiming type;
  • the first control type is a movement type, and the second control type is a state switching type;
  • the first control type is a state switching type, and the second control type is an auxiliary type;
  • the first control type is a state switching type, and the second control type is an aiming type.
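As a sketch, the six mergeable type pairs listed above can be encoded as a simple lookup. The type names mirror the text; the string encoding is an implementation assumption.

```typescript
// A sketch of the preset-condition check: the six mergeable type pairs
// listed above, encoded as a set lookup. The encoding is an assumption.
type ControlType = "auxiliary" | "movement" | "aiming" | "stateSwitching";

const MERGEABLE_PAIRS: ReadonlySet<string> = new Set([
  "auxiliary+aiming",
  "auxiliary+movement",
  "movement+aiming",
  "movement+stateSwitching",
  "stateSwitching+auxiliary",
  "stateSwitching+aiming",
]);

function satisfiesPresetCondition(first: ControlType, second: ControlType): boolean {
  return MERGEABLE_PAIRS.has(`${first}+${second}`);
}
```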
  • the display module 1610 is configured to update and display the third control for replacing the first control and the second control.
  • the first control is used to control the first virtual object to perform the first action
  • the second control is used to control the first virtual object to perform a second action
  • the third control is used to control the first virtual object to perform the first action and the second action.
  • the processing module 1630 is configured to: in response to a first operation on the third control, control the first virtual object to perform the first action, and in response to a second operation on the third control, control the first virtual object to perform the second action; or, in response to a third operation on the third control, control the first virtual object to perform the first action and the second action simultaneously; or, in response to a fourth operation on the third control, obtain the priorities of the first action and the second action and control the first virtual object to perform the first action and the second action in a preset order according to the priorities (a dispatch sketch follows below).
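A sketch of this dispatch logic follows. The text leaves open which gestures map to the four operations, so the Operation type and the "higher priority first" ordering are assumptions.

```typescript
// A sketch of dispatching the four operations on the merged (third) control.
// The Operation encoding and the priority ordering are assumptions.
type Operation = "first" | "second" | "third" | "fourth";

interface Action {
  name: string;
  priority: number;
}

function dispatch(
  op: Operation,
  first: Action,
  second: Action,
  perform: (action: Action) => void
): void {
  switch (op) {
    case "first": // perform only the first action
      perform(first);
      break;
    case "second": // perform only the second action
      perform(second);
      break;
    case "third": // perform both "simultaneously" (sequential calls stand in)
      perform(first);
      perform(second);
      break;
    case "fourth": // perform both in a preset order according to priority
      [first, second]
        .sort((a, b) => b.priority - a.priority)
        .forEach(perform);
      break;
  }
}
```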
  • the processing module 1630 is configured to split the third control into the first control and the second control in response to a first split setting operation on the third control; or, in response to a second split setting operation on the third control, split the third control into a fourth control and a fifth control, the fourth control and the fifth control belonging to the same control type.
  • the obtaining module 1640 is configured to obtain the action type corresponding to the third control in response to the first split setting operation on the third control; the processing module 1630 is configured to split the third control into the first control and the second control based on the action type.
  • the obtaining module 1640 is configured to obtain the association relationship between at least two actions corresponding to the third control in response to the second split setting operation for the third control;
  • the processing module 1630 is used to split the third control into a fourth control and a fifth control based on the association relationship.
  • the virtual environment picture is a picture obtained by observing the virtual environment from the perspective of the first virtual object
  • the obtaining module 1640 is configured to obtain the number of second virtual objects in response to the perspective of the first virtual object including at least two second virtual objects; the processing module 1630 is configured to split the third control based on the number of the second virtual objects (a count-based split sketch follows below).
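Below is a sketch of this count-based split: one split control per second virtual object in view, e.g. controls 1 to 3 attacking virtual objects 1 to 3. The structure and names are illustrative assumptions.

```typescript
// A sketch of the count-based split: the third control becomes one control
// per second virtual object in the first virtual object's view.
interface SplitControl {
  label: string;
  attackTargetId: string;
}

function splitByVisibleObjects(secondObjectIds: string[]): SplitControl[] {
  return secondObjectIds.map((id, i) => ({
    label: `control ${i + 1}`,
    attackTargetId: id,
  }));
}
```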
  • the display module 1610 is configured to update and display at least two split controls that are used to replace the third control after the third control is split.
  • the processing module 1630 is configured to: in response to a first operation on the first control, control the first virtual object to perform the first action, and in response to a second operation on the second control, control the first virtual object to perform the second action while performing the first action; or, in response to a third operation on the fourth control, control the first virtual object to perform a fourth action, and in response to a fourth operation on the fifth control, control the first virtual object to perform a fifth action, the fourth action and the fifth action having an association relationship.
  • The apparatus provided in this embodiment merges different types of controls displayed on the virtual environment picture according to the received merge setting operation, so that the user can, through independent settings, merge rarely used UI controls into one UI control, or merge UI controls that need to be used in combination into one UI control; by changing the layout of UI controls on the virtual environment picture, this simplifies the process of controlling virtual objects and improves human-computer interaction efficiency.
  • By determining that the first control type of the first control and the second control type of the second control satisfy the preset condition, the first control and the second control are merged into the third control, so that the user can merge different types of controls through the merge setting operation, making the layout of UI controls on the virtual environment picture more flexible.
  • By enumerating the UI control types that satisfy the preset condition, the user can determine which types of UI controls can be merged and merge UI controls flexibly.
  • the virtual environment screen is updated and displayed, and the updated virtual environment screen displays the merged third control, and the user can control the virtual object more intuitively through the updated virtual environment screen.
  • When different operations are received on the third control, the virtual object is controlled to perform different actions according to different rules, which makes the ways of controlling the virtual object more flexible and diverse and helps users set a UI control layout that suits their own preferences or usage habits.
  • When a split setting operation is received, the third control is split into controls of the same type or of different types, so that the user can split controls in a targeted manner according to different battle situations, which matches the user's operation habits while helping the user-controlled virtual object improve combat efficiency.
  • By splitting the third control based on the action type it corresponds to when triggered, the split controls can realize the functions corresponding to different types of actions, which makes it convenient for the user to split the third control according to different functions.
  • By splitting the third control according to the association relationship between the at least two actions it corresponds to when triggered, the split controls can realize different levels of the same kind of action, which makes it convenient for the user to perform actions of different levels according to different combat situations and helps improve the combat efficiency of the virtual object.
  • By splitting the third control according to the number of second virtual objects within the first virtual object's field of view, the split controls can attack different virtual objects, which helps improve the combat efficiency of the virtual object.
  • the virtual environment screen is updated and displayed, the updated virtual environment screen displays the split controls, and the user can control the virtual object more intuitively through the updated virtual environment screen.
  • When different operations are received on the split controls, the virtual object can be controlled to perform different operations, which makes the ways of controlling the virtual object more flexible and diverse and helps users set a UI control layout that suits their own preferences or usage habits.
  • The display apparatus for a virtual environment picture provided by the above embodiment is illustrated only by the division into the above functional modules; in practical applications, the above functions may be allocated to different functional modules as required, that is, the internal structure of the apparatus is divided into different functional modules to complete all or part of the functions described above.
  • the device for displaying a virtual environment screen provided by the above embodiments and the embodiment of the method for displaying a virtual environment screen belong to the same concept, and the specific implementation process thereof is detailed in the method embodiment, which will not be repeated here.
  • FIG. 17 shows a structural block diagram of a computer device 1700 provided by an exemplary embodiment of the present application.
  • The computer device 1700 can be a portable mobile terminal, such as a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, or an MP4 (Moving Picture Experts Group Audio Layer IV) player.
  • Computer device 1700 may also be referred to by other names such as user equipment, portable terminal, and the like.
  • computer device 1700 includes: processor 1701 and memory 1702 .
  • the processor 1701 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
  • Memory 1702 may include one or more computer-readable storage media, which may be tangible and non-transitory. Memory 1702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more disk storage devices, flash storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 1702 is used for storing at least one instruction, and the at least one instruction is used for being executed by the processor 1701 to realize the virtual environment screen provided in the embodiments of the present application display method.
  • the computer device 1700 may also optionally include: a peripheral device interface 1703 and at least one peripheral device.
  • the peripheral device includes: at least one of a radio frequency circuit 1704 , a touch display screen 1705 , a camera assembly 1706 , an audio circuit 1707 , a positioning assembly 1708 and a power supply 1709 .
  • the peripheral device interface 1703 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 1701 and the memory 1702 .
  • the radio frequency circuit 1704 is used for receiving and transmitting RF (Radio Frequency, radio frequency) signals, also called electromagnetic signals.
  • the radio frequency circuit 1704 communicates with communication networks and other communication devices via electromagnetic signals.
  • the radio frequency circuit 1704 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals.
  • the touch display screen 1705 is used to display UI (User Interface, user interface).
  • the camera assembly 1706 is used to capture images or video.
  • Audio circuitry 1707 is used to provide an audio interface between the user and computer device 1700 .
  • Audio circuitry 1707 may include a microphone and speakers.
  • the positioning component 1708 is used to locate the current geographic location of the computer device 1700 to implement navigation or LBS (Location Based Service).
  • Power supply 1709 is used to power various components in computer device 1700 .
  • computer device 1700 also includes one or more sensors 1710 .
  • the one or more sensors 1710 include, but are not limited to, an acceleration sensor 1711 , a gyro sensor 1712 , a pressure sensor 1713 , a fingerprint sensor 1714 , an optical sensor 1715 , and a proximity sensor 1716 .
  • Those skilled in the art can understand that the structure shown in FIG. 17 does not constitute a limitation on the computer device 1700, which may include more or fewer components than shown, combine some components, or adopt a different component arrangement.
  • An embodiment of the present application further provides a computer device, the computer device includes a processor and a memory, the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the method for displaying a virtual environment picture provided by the above method embodiments.
  • Embodiments of the present application further provide a computer-readable storage medium, in which at least one instruction, at least one program, a code set, or an instruction set is stored, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the method for displaying a virtual environment picture provided by the above method embodiments.
  • Embodiments of the present application further provide a computer program product or computer program, where the computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the method for displaying a virtual environment screen as described above.
  • references herein to "a plurality” means two or more.
  • "And/or" which describes the association relationship of the associated objects, means that there can be three kinds of relationships, for example, A and/or B, which can mean that A exists alone, A and B exist at the same time, and B exists alone.
  • the character “/” generally indicates that the associated objects are an "or" relationship.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method, apparatus, device, and storage medium for displaying a virtual environment picture, belonging to the field of human-computer interaction. The method includes: displaying a virtual environment picture on which a first control and a second control are displayed, the first control and the second control belonging to different control types (401), the first control being used to control a first virtual object to perform a first action, and the second control being used to control the first virtual object to perform a second action; receiving a merge setting operation (402); and merging the first control and the second control into a third control based on the merge setting operation (403), the third control being used to control the first virtual object to perform the first action and the second action. By changing the layout of UI controls on the virtual environment picture, the process of controlling a virtual object by a user is simplified, and human-computer interaction efficiency is improved.

Description

虚拟环境画面的显示方法、装置、设备及存储介质
本申请要求于2020年08月26日提交的申请号为202010870309.1、发明名称为“虚拟环境画面的显示方法、装置、设备及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及人机交互领域,特别涉及一种虚拟环境画面的显示方法、装置、设备及存储介质。
背景技术
在基于虚拟环境的应用程序中,如第一人称射击游戏,用户可以控制虚拟对象执行蹲下、趴下、射击、奔跑等动作。
游戏在运行时显示有虚拟环境画面,在虚拟环境画面上显示有多个用户界面控件(User Interface,UI控件),多个UI控件按照一定的布局分布在虚拟环境画面上,每个UI控件用于实现控制虚拟对象执行一个动作,比如,UI控件1控制虚拟对象执行下蹲动作,UI控件2控制虚拟对象执行趴下动作。
上述技术方案中,当UI控件的数量较多时,UI控件在虚拟环境画面上的布局通常较为复杂,人机交互效率较低。
发明内容
本申请实施例提供了一种虚拟环境画面的显示方法、装置、设备及存储介质,通过改变UI控件在虚拟环境画面上的布局,提高了人机交互效率。所述技术方案如下:
根据本申请的一个方面,提供了一种虚拟环境画面的显示方法,应用于计算机设备,所述方法包括:
显示虚拟环境画面,所述虚拟环境画面显示有第一控件和第二控件,所述第一控件和所述第二控件属于不同的控件类型;
接收针对所述第一控件和所述第二控件的合并设置操作;
基于所述合并设置操作将所述第一控件和所述第二控件合并为第三控件。
根据本申请的另一方面,提供了一种虚拟环境画面的显示装置,所述装置包括:
显示模块,用于显示虚拟环境画面,所述虚拟环境画面显示有第一控件和第二控件,所述第一控件和所述第二控件属于不同的控件类型;
接收模块,用于接收针对所述第一控件和所述第二控件的合并设置操作;
处理模块,用于基于所述合并设置操作将所述第一控件和所述第二控件合并为第三控件。
根据本申请的另一方面,提供了一种计算机设备,所述计算机设备包括:处理器和存储器,所述存储器中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由所述处理器加载并执行以实现如上方面所述的虚拟环境画面的显示方法。
根据本申请的另一方面,提供了一种计算机可读存储介质,所述存储介质中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由处理器加载并执行以实现如上方面所述的虚拟环境画面的显示方法。
根据本申请的另一方面,提供了一种计算机程序产品或计算机程序,所述计算机程序产品或计算机程序包括计算机指令,所述计算机指令存储在计算机可读存储介质中。计算机设备的处理器从所述计算机可读存储介质读取所述计算机指令,所述处理器执行所述计算机指令,使得所述计算机设备执行如上方面所述的虚拟环境画面的显示方法。
本申请实施例提供的技术方案带来的有益效果至少包括:
通过接收到的合并设置操作,将虚拟环境画面上显示的不同类型的控件进行合并,使得用户可以通过自主设置,将不常用的UI控件合并为同一UI控件,或者将需要结合使用的UI控件合并为同一UI控件,通过合并UI控件简化了UI控件在虚拟环境画面上的布局,从而简化了用户控制虚拟对象的过程,提高了人机交互效率。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请一个示例性实施例提供的计算机系统的框图;
图2是本申请一个示例性实施例提供的状态同步技术的示意图;
图3是本申请一个示例性实施例提供的帧同步技术的示意图;
图4是本申请一个示例性实施例提供的虚拟环境画面的显示方法的流程图;
图5是本申请另一个示例性实施例提供的虚拟环境画面的显示方法的流程图;
图6是本申请一个示例性实施例提供的合并控件之前虚拟环境画面的示意图;
图7是本申请一个示例性实施例提供的合并设置操作对应的设置界面的示意图;
图8是本申请一个示例性实施例提供的更新后的虚拟环境画面的示意图;
图9是本申请另一个示例性实施例提供的虚拟环境画面的显示方法的流程图;
图10是本申请另一个示例性实施例提供的虚拟环境画面的显示方法的流程图;
图11是本申请一个示例性实施例提供的拆分控件的示意图;
图12是本申请另一个示例性实施例提供的更新后的虚拟环境画面的示意图;
图13是本申请另一个示例性实施例提供的虚拟环境画面的显示方法的流程图;
图14是本申请一个示例性实施例提供的合并设置操作的方法的流程图;
图15是本申请一个示例性实施例提供的判断虚拟对象的状态的方法流程图;
图16是本申请一个示例性实施例提供的虚拟环境画面的显示装置的框图;
图17是本申请一个示例性实施例提供的计算机设备的装置结构示意图。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请实施方式作进一步地详细描述。
首先,对本申请实施例中涉及的名词进行介绍:
虚拟环境:是应用程序在终端上运行时显示(或提供)的虚拟环境。该虚拟环境可以是对真实世界的仿真环境,也可以是半仿真半虚构的环境,还可以是纯虚构的环境。虚拟环境可以是二维虚拟环境、2.5维虚拟环境和三维虚拟环境中的任意一种,本申请对此不加以限定。下述实施例以虚拟环境是三维虚拟环境来举例说明。
虚拟对象:是指虚拟环境中的可活动对象。该可活动对象可以是虚拟人物、虚拟动物、动漫人物等,比如:在三维虚拟环境中显示的人物、动物、植物、油桶、墙壁、石块等。可选地,虚拟对象是基于动画骨骼技术创建的三维立体模型。每个虚拟对象在三维虚拟环境中具有自身的形状和体积,占据三维虚拟环境中的一部分空间。虚拟对象泛指虚拟环境中的一个或多个虚拟对象。
用户界面(User Interface,UI)控件:是指在应用程序的用户界面上能够看见的任何可视控件或元素,比如,图片、输入框、文本框、按钮、标签等控件,其中一些UI控件响应用户的操作,比如,用户在输入框中能够输入文字,用户通过上述UI控件与用户界面进行信息交互。
本申请中提供的方法可以应用于虚拟现实应用程序、三维地图程序、军事仿真程序、第一人称射击游戏(First-Person Shooting Game,FPS)游戏、多人在线战术竞技游戏(Multiplayer Online Battle Arena Games,MOBA)、大逃杀类型的射击游戏、虚拟现实(Virtual Reality,VR)应用程序、增强现实(Augmented Reality,AR)等,下述实施例是以在游戏中的应用来举例说明。
基于虚拟环境的游戏由一个或多个游戏世界的地图构成,游戏中的虚拟环境模拟真实世界的场景,用户可以操控游戏中的虚拟对象在虚拟环境中进行行走、跑步、跳跃、射击、格斗、驾驶、受到其他虚拟对象的攻击、受到虚拟环境中的伤害、攻击其他虚拟对象、使用干扰型投掷类道具、救助同队的队友等动作,交互性较强,并且多个用户可以在线组队进行竞技游戏。在用户使用的终端上显示有游戏对应的虚拟环境画面,虚拟环境画面是以用户控制的虚拟对象为视角对虚拟环境进行观察得到的。在虚拟环境画面上显示有多个UI控件,形成用户界面,每个UI控件用于控制虚拟对象执行不同的动作。比如,用户触发UI控件1,控制虚拟对象向前奔跑。
图1示出了本申请一个示例性实施例提供的计算机系统的结构框图。该计算机系统100包括:第一终端120、服务器140和第二终端160。
第一终端120安装和运行有支持虚拟环境的应用程序。第一终端120是第一用户使用的终端,第一用户使用第一终端120控制位于虚拟环境中的第一虚拟对象进行活动,该活动包括但不限于:调整身体姿态、行走、奔跑、跳跃、骑行、瞄准、拾取、使用投掷类道具、攻击其他虚拟对象中的至少一种。示意性的,第一虚拟对象是第一虚拟人物,比如仿真人物对象或动漫人物对象。
第一终端120通过无线网络或有线网络与服务器140相连。
服务器140包括一台服务器、多台服务器、云计算平台和虚拟化中心中的至少一种。示意性的,服务器140包括处理器144和存储器142,存储器142又包括接收模块1421、控制模块1422和发送模块1423,接收模块1421用于接收客户端发送的请求,如组队请求;控制模块1422用于控制虚拟环境画面的渲染;发送模块1423用于向客户端发送响应,如向客户端发送组队成功的提示信息。服务器140用于为支持三维虚拟环境的应用程序提供后台服务。可选地,服务器140承担主要计算工作,第一终端120和第二终端160承担次要计算工作;或者,服务器140承担次要计算工作,第一终端120和第二终端160承担主要计算工作;或者,服务器140、第一终端120和第二终端160三者之间采用分布式计算架构进行协同计算。
服务器140可以采用同步技术使得多个客户端之间的画面表现一致。示例性的,服务器140采用的同步技术包括:状态同步技术或帧同步技术。
状态同步技术
在基于图1的可选实施例中,服务器140采用状态同步技术在多个客户端之间进行同步。如图2所示,战斗逻辑运行在服务器140中。当虚拟环境中的某个虚拟对象发生状态变化时,由服务器140向所有的客户端,比如客户端1至客户端10,发送状态同步结果。
在一个示例中,客户端1向服务器140发送请求,该请求用于请求虚拟对象1执行攻击虚拟对象2的动作,则服务器140判断虚拟对象1是否能够攻击虚拟对象2,以及当虚拟对象1执行攻击动作后虚拟对象2的剩余生命值。服务器140将虚拟对象2的剩余生命值同步给所有的客户端,所有的客户端根据虚拟对象2的剩余生命值更新本地数据以及界面表现。
帧同步技术
在基于图1的可选实施例中,服务器140采用帧同步技术在多个客户端之间进行同步。如图3所示,战斗逻辑运行在各个客户端中。客户端会向服务器发送帧同步请求,该帧同步请求中携带有客户端本地的数据变化。服务器140在接收到某个帧同步请求后,向所有的客户端转发该帧同步请求。每个客户端接收到帧同步请求后,按照本地的战斗逻辑对该帧同步请求进行处理,更新本地数据以及界面表现。
第二终端160安装和运行有支持虚拟环境的应用程序。第二终端160是第二用户使用的终端,第二用户使用第二终端160控制位于虚拟环境中的第二虚拟对象进行活动,该活动包括但不限于:调整身体姿态、行走、奔跑、跳跃、骑行、瞄准、拾取、使用投掷类道具、攻击其他虚拟对象中的至少一种。示意性的,第二虚拟对象是第二虚拟人物,比如仿真人物对象或动漫人物对象。
可选地,第一虚拟对象和第二虚拟对象处于同一虚拟环境中。可选地,第一虚拟对象和第二虚拟对象可以属于同一个队伍、同一个组织、同一个阵营、具有好友关系或具有临时性的通讯权限;或者,第一虚拟人物对象和第二虚拟人物对象也可以属于不同阵营、不同队伍、不同的组织或具有敌对关系。
可选地,第一终端120和第二终端160上安装的应用程序是相同的,或两个终端上安装的应用程序是不同操作系统平台(安卓或IOS)上的同一类型应用程序。第一终端120可以泛指多个终端中的一个,第二终端160可以泛指多个终端中的一个,本实施例仅以第一终端120和第二终端160来举例说明。第一终端120和第二终端160的设备类型相同或不同,该设备类型包括:智能手机、平板电脑、电子书阅读器、MP3播放器、MP4播放器、膝上型便携计算机和台式计算机中的至少一种。以下实施例以终端包括智能手机来举例说明。
本领域技术人员可以知晓,上述终端的数量可以更多或更少。比如上述终端可以仅为一个,或者上述终端为几十个或几百个,或者更多数量。本申请实施例对终端的数量和设备类型不加以限定。
图4示出了本申请一个示例性实施例提供的虚拟环境画面的显示方法的流程图,该方法可应用于计算机设备,以该计算机设备实现为如图1所示的第一终端120或第二终端160,或该计算机系统100中的其它终端为例进行说明。该方法包括如下步骤:
步骤401,显示虚拟环境画面以及第一控件和第二控件,第一控件和第二控件属于不同的控件类型。
在一些实施例中,第一控件用于控制第一虚拟对象执行第一动作,第二控件用于控制第一虚拟对象执行第二动作。第一动作和第二动作属于不同类型的动作;或者,第一动作和第二动作为不同动作。
示意性的,第一控件用于控制第一虚拟对象在虚拟环境中进行第一表现;或者,第一控件用于控制第一虚拟对象在虚拟环境中使用第一道具;或者,第一控件用于控制第一虚拟对象触发第一技能;或者,第一控件用于控制第一虚拟对象处于第一运动状态。同理,第二控件对应的第二动作包括第二表现、使用第二道具、触发第二技能、处于第二运动状态等,本实施例对第一控件和第二控件的控件作用不加以限定。
用户使用的终端上运行有支持虚拟环境的应用程序,如第一人称射击游戏。以游戏应用程序为例,在运行游戏应用程序时,显示有虚拟环境画面,该虚拟环境画面是以第一虚拟对象的视角对虚拟环境进行观察得到的画面。在一些实施例中,虚拟环境画面显示的虚拟环境中包括:山川、平地、河流、湖泊、海洋、沙漠、天空、植物、建筑、交通工具中的至少一种元素。
UI控件的控件类型主要用于指示UI控件对应触发的功能类型。在虚拟环境画面上显示有UI控件,示意性的,UI控件包括:辅助类型的UI控件、移动类型的UI控件、瞄准类型的UI控件和状态切换类型的UI控件中的至少一种。
辅助类型的UI控件用于辅助虚拟对象进行活动,在一些实施例中,辅助类型的UI控件用于控制虚拟对象使用辅助类型的虚拟道具进行活动的辅助,或者,辅助类型的UI控件用于控制虚拟对象触发辅助类型的技能进行活动的辅助,比如,开镜控件属于辅助类型的UI控件,用于控制虚拟对象在进行射击活动时使用瞄准镜道具瞄准目标;移动类型的UI控件用于控制虚拟对象在虚拟环境中移动,比如,方向移动控件属于移动类型的UI控件,当方向移动控件被触发时,虚拟对象在虚拟环境中向前、向后、向左、向右地移动;瞄准类型的UI控件是虚 拟对象在使用虚拟道具时对应的UI控件,在一些实施例中,瞄准类型的UI控件是虚拟对象在使用攻击道具时对应的UI控件,比如,射击控件属于瞄准类型的UI控件,当射击控件被触发时,虚拟对象向目标射击;状态切换类型的UI控件用于切换虚拟对象在虚拟环境中的姿态,比如,下蹲控件属于状态切换类型的UI控件,当下蹲控件被触发时,虚拟对象由站立状态切换为下蹲状态,或由其他姿态切换为下蹲状态。
步骤402,接收针对第一控件和第二控件的合并设置操作。
当用户使用具有触控显示屏的终端时,通常终端的触控显示屏上显示有UI控件,如智能手机或平板电脑等。示意性的,用户通过触发与合并设置操作对应的UI控件实施合并设置操作,或,用户在触控显示屏上实施与合并设置操作对应的手势操作,比如,单击操作、长按操作、双击操作(包括单指双击操作、多指双击操作中的至少一种)、悬停操作、拖动操作以及它们的组合操作等。
当用户使用的终端是连接有外部输入设备的终端时,还可以通过外部输入设备执行合并设置操作。示意性的,终端实现为连接有鼠标的笔记本电脑,用户将鼠标指针移动至与合并设置操作对应的UI控件上,通过点击鼠标实施合并设置操作。在一些实施例中,用户还可以通过按压键盘按键以及点击鼠标实施合并设置操作。
在一些实施例中,虚拟环境画面上单独显示有合并设置操作对应的UI控件,该UI控件被命名为合并UI控件;在另一些实施例中,虚拟环境画面包括设置游戏应用程序的设置页面,该设置页面包括合并设置操作对应的UI控件。
步骤403,响应于合并设置操作将第一控件和第二控件合并为第三控件。
可选地,第三控件用于控制第一虚拟对象执行第一动作(与第一控件对应)和第二动作(与第二控件对应)。或者,第三控件用于控制第一虚拟对象执行一个独立于第一动作和第二动作之外的第三动作。
合并是指将至少两个控件合成为一个控件,且在虚拟环境画面上只显示合并后的控件,合并后的控件兼具合并前的控件的功能。在接收到合并设置操作后,在虚拟环境画面上显示第三控件,并取消显示第一控件和第二控件。第三控件兼具第一控件对应的功能和第二控件对应的功能,用户通过触发第三控件可实现控制第一虚拟对象执行第一动作和第二动作。
示意性的,当用户点击第三控件时,第一虚拟对象执行第一动作;当用户长按第三控件时,第一虚拟对象执行第二动作;或者,当用户点击第三控件时,第一虚拟对象执行第一动作,当用户再次点击第三控件时,游戏应用程序判断第一虚拟对象正在执行第一动作,则控制第一虚拟对象在执行第一动作的同时,执行第二动作;或者,当用户点击第三控件时,第一虚拟对象执行第一动作,当用户再次点击第三控件时,游戏应用程序判断第一虚拟对象已执行第一动作(第一动作已执行完),则控制第一虚拟对象执行第二动作。
可选地,UI控件在应用程序中对应有控件标识,在接收到合并设置操作时,确定关于第一控件和第二控件对应的控件标识,并基于第一控件和第二控件对应的控件标识确定第三控件对应的控件标识,并在界面渲染时,将根据第三控件的控件标识对第三控件进行渲染,并取消对地第一控件和第二控件的渲染。
可以理解的是,用户实施一次合并设置操作对至少两个控件进行合并。
综上所述,本实施例提供的方法,通过接收到的合并设置操作,将虚拟环境画面上显示的不同类型的控件进行合并,使得用户可以通过自主设置,将不常用的UI控件合并为同一UI控件,或者将需要结合使用的UI控件合并为同一UI控件,通过改变UI控件在虚拟环境画面上的布局,简化了用户控制虚拟对象的过程,提高了人机交互效率。
可选地,通过在虚拟环境画面上实施合并设置操作将至少两个控件进行合并,或者,通过在虚拟环境画面上实施拆分设置操作将一个控件拆分成至少两个控件。以游戏应用程序为例,结合游戏应用程序中的用户界面对合并控件的过程和拆分控件的过程进行说明。
一、合并控件的过程。
图5示出了本申请另一个示例性实施例提供的虚拟环境画面的显示方法的流程图。该方法可应用于计算机设备中,该计算机设备实现为如图1所示的第一终端120或第二终端160,或该计算机系统100中的其它终端。该方法包括如下步骤:
步骤501,显示虚拟环境画面,虚拟环境画面显示有第一控件和第二控件,第一控件和第二控件属于不同的控件类型。
第一控件用于控制第一虚拟对象执行第一动作,第二控件用于控制第一虚拟对象执行第二动作。
如图6所示,在虚拟环境画面上显示有第一控件11和第二控件12,第一控件11是下蹲控件,第二控件12是瞄准控件,第一控件11属于状态切换类型的控件,第二控件12属于移动类型的控件,第一控件11和第二控件12位于虚拟环境画面的右侧。
步骤502,接收针对第一控件和第二控件的合并设置操作。
示意性的,合并设置操作实现为将需要合并的控件拖动至同一处。比如,用户将第一控件11拖动至第二控件12处,或,用户将第二控件12拖动至第一控件11处。
示意性的,如图7所示,合并设置操作是用户在设置界面中启用的操作,用户将合并设置操作对应的控件20置于打开的状态,则在游戏应用程序中能够将第一控件11和第二控件12进行合并。
步骤503,获取第一控件的第一控件类型和第二控件的第二控件类型。
示意性的,游戏应用程序根据用户拖动时选择的控件获取第一控件和第二控件的控件类型。
示意性的,根据用户在设置界面的操作获取待合并控件的控件类型。
在一些实施例中,用户在设置界面的操作用于将所有相同类型的控件进行合并,或,用于将预设类型的控件进行合并(比如,将射击类型的控件和移动类型的控件进行合并),或,用于将预设的控件进行合并(比如,将下蹲控件和趴下控件进行合并)。
步骤504,响应于第一控件类型和第二控件类型满足预设条件,将第一控件和第二控件合并为第三控件。
预设条件包括如下条件中的至少一种:
第一控件类型是辅助类型,第二控件类型是瞄准类型;示意性的,第一控件是开镜控件(用于打开枪械类虚拟道具的瞄准镜的控件),第二控件是射击控件;
第一控件类型是辅助类型,第二控件类型是移动类型;示意性的,第一控件是开镜控件,第二控件是方向移动控件(包括向左移动控件、向右移动控件、向前移动控件、向后移动控件);
第一控件类型是移动类型,第二控件类型是瞄准类型;示意性的,第一控件是方向移动控件,第二控件是射击控件;第一控件是方向移动控件,第二控件是投掷控件(用于使用投掷类虚拟道具的控件);
第一控件类型是移动类型,第二控件类型是状态切换类型;示意性的,第一控件是方向移动控件,第二控件是下蹲控件;
第一控件类型是状态切换类型,第二控件类型是辅助类型;示意性的,第一控件是趴下控件(用于控制虚拟对象呈趴下姿态的控件),第二控件是开镜控件;
第一控件类型是状态切换类型,第二控件类型是瞄准类型,示意性的,第一控件是下蹲控件,第二控件是射击控件。
当第一控件的类型和第二控件的类型是上述类型时,游戏应用程序将第一控件和第二控件进行合并。
步骤505,更新显示用于替换第一控件和第二控件的第三控件。
在虚拟环境画面上更新显示第三控件,更新后的虚拟环境画面不包括第一控件和第二控件。
在用户实施合并设置操作时,游戏应用程序识别合并设置操作对应的用户帐号,游戏应 用程序根据用户帐号更新用户帐号对应的虚拟环境画面。
如图8所示,在更新后的虚拟环境画面上显示有第三控件13,示意性的,图8中以如图6所示出的第一控件11的控件标识作为当前更新后第三控件13的控件标识为例进行示出;可选地,或者以第二控件的控件标识作为第三控件的控件标识;或新生成一个控件标识作为第三控件,该控件标识不同于第一控件的控件标识和第二控件的控件标识。
在使用第三控件控制第一虚拟对象执行动作时,包括如下情况中的至少一种,如图9所示:
1、第一虚拟对象执行的动作与第三控件接收到的操作有关。
步骤507a,响应于针对第三控件的第一操作,控制第一虚拟对象执行第一动作。
第一操作包括单击操作、长按操作、滑动操作、悬停操作、拖动操作、双击操作(包括单指双击和多指双击中的至少一种)以及它们的组合操作中的至少一种。
在一个示例中,响应于第三控件上接收到长按操作,控制第一虚拟对象执行奔跑动作。
步骤508a,响应于针对第三控件的第二操作,控制第一虚拟对象执行第二动作。
第二操作包括单击操作、长按操作、滑动操作、悬停操作、拖动操作、双击操作(包括单指双击和多指双击中的至少一种)以及它们的组合操作中的至少一种。第一操作与第二操作不同。
在一个示例中,响应于第三控件上接收到双击操作,控制第一虚拟对象执行开镜动作。
第三控件根据接收到的操作类型生成控制指令,控制第一虚拟对象执行不同的动作。步骤507a可先于步骤507b执行,或,步骤507b先于步骤507a执行。在一些实施例中,在第一虚拟对象执行动作a时,第三控件接收到动作b对应的执行操作,则第一虚拟对象在执行动作a的同时执行动作b。在一个示例中,响应于第三控件上接收到双击操作,第一虚拟对象执行奔跑动作,在第一虚拟对象奔跑的过程中,响应于第三控件上接收到长按动作,第一虚拟对象在奔跑的过程中执行开镜动作。
2、第一虚拟对象同时执行第三控件对应的动作。
步骤507b,响应于针对第三控件的第三操作,控制第一虚拟对象同时执行第一动作和第二动作。
第三操作包括单击操作、长按操作、滑动操作、悬停操作、拖动操作、双击操作(包括单指双击和多指双击中的至少一种)以及它们的组合操作中的至少一种。
在一个示例中,响应于第三控件接收到拖动操作,游戏应用程序控制第一虚拟对象同时执行奔跑动作和换弹动作。
3、第一虚拟对象根据动作的优先级执行动作。
步骤507c,响应于针对第三控件的第四操作,获取第一动作和第二动作的优先级。
第四操作包括单击操作、长按操作、滑动操作、悬停操作、拖动操作、双击操作(包括单指双击和多指双击中的至少一种)以及它们的组合操作中的至少一种。
在一个示例中,响应于接收到第三控件上的长按操作,游戏应用程序获取第三控件对应的动作的优先级。示意性的,优先级排序:奔跑动作>射击动作(或投掷动作)>下蹲动作(或趴下动作)>开镜动作(换弹动作)。
步骤508c,基于优先级控制第一虚拟对象按照预设顺序执行第一动作和第二动作。
比如,响应于针对第三控件的长按操作,获取第一动作的优先级低于第二动作的优先级,则游戏应用程序控制第一虚拟对象先执行第二动作,然后执行第一动作。
综上所述,本实施例提供的方法,通过接收到的合并设置操作,将虚拟环境画面上显示的不同类型的控件进行合并,使得用户可以通过自主设置,将不常用的UI控件合并为同一UI控件,或者将需要结合使用的UI控件合并为同一UI控件,通过改变UI控件在虚拟环境画面上的布局,简化了用户控制虚拟对象的过程,提高了人机交互效率。
通过判断第一控件的第一控件类型和第二控件的第二控件类型满足预设条件,将第一控 件和第二控件合并为第三控件,使得用户能够通过合并设置操作将不同类型的控件进行合并,从而使得UI控件在虚拟环境画面上的布局更加灵活。
通过将满足预设条件的UI控件类型进行列举,使得用户确定可合并的UI控件的类型,灵活合并UI控件。
当第一控件和第二控件合并后,更新显示虚拟环境画面,更新后的虚拟环境画面显示有合并后的第三控件,用户可以更加直观地通过更新后的虚拟环境画面控制虚拟对象。
当第三控件上接收到不同的操作时,控制虚拟对象按照不同的规则执行不同的动作,使得控制虚拟对象的方式更加灵活多样,有助于用户设置符合自己喜好或适合自己使用习惯的UI控件的布局。
可以理解的是,上述三种情况可以分别单独实施,也可以组合实施。
二、拆分控件的过程。
在拆分控件的过程中包括如下三种情况,如图10所示:
1、拆分后的控件属于不同类型的控件。
响应于针对第三控件的第一拆分设置操作,将第三控件拆分成第一控件和第二控件。
步骤511a,响应于针对第三控件的第一拆分设置操作,获取第三控件对应的动作类型。
为了区分于合并设置操作,拆分设置操作可以与合并设置操作的实施方式相反,比如,合并设置操作是向左拖动操作,拆分设置操作是向右拖动操作;又比如,合并设置操作是将第一控件拖动至第二控件处,以形成第三控件,则拆分设置操作是以第三控件13为起始点,从第三控件13向外拖动出控件作为第一控件11或第二控件12,箭头表示拖动方向,如图11所示。
动作类型与控件的控件类型对应。示意性的,控件1的控件类型为辅助类型,则控件1被触发时虚拟对象执行辅助类型的动作,如控件1是开镜控件,则当开镜控件被触发时,虚拟对象执行开镜动作,开镜动作的动作类型为辅助类型。
由于第三控件是合并后的控件,因此第三控件兼具至少两种控件的功能。在一些实施例中,响应于接收到虚拟环境画面上的拆分设置操作,游戏应用程序获取第三控件对应的动作列表,该动作列表用于提供第三控件的控件组成情况。比如,当虚拟环境画面中存在合并后的控件时,游戏应用程序或后台服务器建立合并后的控件对应的动作列表,并将该动作列表与合并后的控件进行绑定。可选地,动作列表中存储有第三控件的控件标识,以及与第三控件具有拆分关系的至少两个控件对应的控件标识,从而在第三控件需要拆分时,界面渲染过程中取消对第三控件的渲染,替换为对列表中与第三控件的控件标识对应的至少两个控件的渲染。
步骤512a,基于动作类型将第三控件拆分成第一控件和第二控件。
在一个示例中,第三控件对应的动作列表包括下蹲动作和开镜动作,则将第三控件拆分为下蹲控件和开镜控件。
2、拆分后的控件属于同一类型的控件。
响应于针对第三控件的第二拆分设置操作,将第三控件拆分成第四控件和第五控件,第四控件和第五控件属于同一类型的控件。
步骤511b,响应于针对第三控件的第二拆分设置操作,获取第三控件对应的至少两个动作之间的关联关系。
关联关系是指第三控件被触发时,虚拟对象执行的动作之间的层级关系。示意性的,第三控件对应的动作为姿态切换动作,各个姿态切换动作之间存在层级关系,比如,当第三控件被触发时,控制虚拟对象执行全蹲动作(双膝弯曲且双腿靠近臀部)和半蹲动作(单膝跪地状态),获取全蹲动作和半蹲动作之间的层级关系。
步骤512b,基于关联关系将第三控件拆分成第四控件和第五控件。
在一个示例中,基于关联关系将第三控件拆分为全蹲动作和半蹲动作;在另一个示例中, 游戏应用程序根据关联关系将第三控件拆分为下蹲控件和趴下控件;在另一个示例中,游戏应用程序根据关联关系将第三控件拆分为打开高倍瞄准镜动作和打开低倍瞄准镜动作。
3、基于虚拟对象的数量进行拆分。
示意性的,虚拟环境画面是以第一虚拟对象的视角对虚拟环境进行观察得到的画面,通常在第一虚拟对象身上绑定有摄像机模型,通过摄像机模型对虚拟环境进行拍摄,得到虚拟环境画面。基于虚拟环境画面中虚拟对象的数量对第三控件进行拆分。
示意性的,响应于第一虚拟对象的视角内包括至少两个第二虚拟对象,基于第二虚拟的数量对第三控件进行拆分。
步骤511c,响应于第一虚拟对象的视角内包括至少两个第二虚拟对象,获取第二虚拟对象的数量。
第一虚拟对象和第二虚拟对象在同一虚拟环境中,第一虚拟对象在虚拟环境活动的过程中,会看到虚拟环境中的第二虚拟对象。示意性的,第二虚拟对象与第一虚拟对象具有队友关系。在一个示例中,第一虚拟对象的视角中包括三个第二虚拟对象,游戏应用程序将第二虚拟对象的数量与第一虚拟对象观察的虚拟环境画面进行绑定。
步骤512c,基于第二虚拟的数量对第三控件进行拆分。
示意性的,根据三个虚拟对象将第三控件拆分为三个控件。比如,将第三控件拆分为控件1、控件2和控件3,控件1用于攻击虚拟对象1,控件2用于攻击虚拟对象2,控件3用于攻击虚拟对象3。
在一些实施例中,响应于第二虚拟对象使用虚拟道具,基于第二虚拟对象和第二虚拟对象使用的虚拟道具对第三控件进行拆分。比如,第二虚拟对象使用的虚拟道具是盾牌(用于减少对虚拟对象造成的伤害),将第三控件拆分为两个控件,一个控件用于摧毁第二虚拟对象使用的虚拟道具(盾牌),一个控件用于攻击第二虚拟对象。在一些实施例中,用于摧毁虚拟道具的控件持续的时长大于用于攻击第二虚拟对象的控件持续的时长。
可以理解的是,上述三种情况可以分别单独实施,也可以组合实施。
步骤513,更新显示第三控件拆分后用于替代第三控件的至少两个拆分控件。
示意性的,更新显示第三控件拆分后对应的至少两个拆分控件,更新后的虚拟环境画面不包括第三控件。
在用户实施拆分设置操作时,游戏应用程序识别拆分设置操作对应的用户帐号,游戏应用程序根据用户帐号更新用户帐号对应的虚拟环境画面。
如图12所示,在更新后的虚拟环境画面上显示有拆分后的第四控件14和第五控件15,第四控件14(全蹲控件)和第五控件15(半蹲控件)是由下蹲控件13拆分后的得到的。在更新后的虚拟环境画面上不显示下蹲控件13(图中仅为示意)。示意性的,第四控件14的控件标识和第五控件15的控件标识是新生成的控件标识,且这两个控件标识不同于第三控件13的控件标识;或以第三控件13的控件标识作为第四控件14的控件标识,第五控件15的控件标识是新生成的控件标识;或以第三控件13的控件标识作为第五控件15的控件标识,第四控件14的控件标识是新生成的控件标识。
在使用第三控件拆分后的控件控制第一虚拟对象执行动作时,包括如下两种情况,如图13所示:
1、第一虚拟对象同时执行至少两个动作。
步骤516a,响应于针对第一控件的第一操作,控制第一虚拟对象执行第一动作。
第一控件的控件类型和第二控件的控件类型不同。
步骤517a,响应于针对第二控件的第二操作,控制第二虚拟对象在执行第一动作的同时,执行第二动作。
示意性的,第一虚拟对象执行投掷动作的过程中(投掷动作还未结束),响应于接收到第二控件上的长按操作,游戏应用程序控制第一虚拟对象在下蹲的过程中投掷虚拟道具。用户 可根据下蹲的角度控制第一虚拟对象准确击中目标。
2、第一虚拟对象按照顺序执行至少两个动作。
步骤516b,响应于针对第四控件的第三操作,控制第一虚拟对象执行第四动作。
示意性的,第四控件为趴下控件,响应于趴下控件上接收到拖动操作,游戏应用程序控制第一虚拟对象执行趴下动作。
步骤517b,响应于针对第五控件的第四操作,控制第一虚拟对象执行第五动作,第四动作和第五动作具有关联关系。
可以理解的是,第三操作和第四操作相同或不同。
关联关系是指动作之间存在一定的层级关系,它们属于同一类型的动作。示意性的,第五控件为匍匐控件(用于控制第一虚拟对象匍匐前进),响应于匍匐控件上接收到双击操作,游戏应用程序控制第一虚拟对象在虚拟环境中匍匐移动。可以理解的是,步骤516b可先于步骤517b执行,步骤517b可先于步骤516b执行。
综上所述,本实施例提供的方法,当接收到拆分设置操作,将第三控件拆分成相同类型的动作或不同类型的控件,使得用户可针对不同的战况有针对性地拆分控件,在符合用户的操作习惯的同时,保证用户控制的虚拟对象提高对战效率。
通过基于第三控件在被触发时对应的动作类型对第三控件进行拆分,使得拆分后的控件可实现不同类型的动作对应的功能,方便用户根据不同的功能对第三控件进行拆分。
通过根据第三控件在被触发时对应的至少两个动作之间的关联关系对第三控件进行拆分,使得拆分后的控件可实现同类动作的不同层级,方便用户根据不同对战情况执行不同层级的动作,有利于提高虚拟对象的作战效率。
通过根据第一虚拟对象视野内的第二虚拟对象的数量来拆分第三控件,使得拆分后的控件可针对不同的虚拟对象进行攻击,有利于提高虚拟对象的作战效率。
当第三控件拆分后,更新显示虚拟环境画面,更新后的虚拟环境画面显示有拆分后的控件,用户可以更加直观地通过更新后的虚拟环境画面控制虚拟对象。
当拆分后的控件上接收到不同的操作时,可控制虚拟对象执行不同的操作,使得控制虚拟对象的方式更加灵活多样,有助于用户设置符合自己喜好或适合自己使用习惯的UI控件的布局。
可以理解的是,上述两种情况可以分别单独实施,也可以组合实施。
在一个示例中,控件合并包括如下步骤,如图14所示:
步骤1401,开始。
步骤1402,用户是否进行合并设置操作。
当用户进行合并设置操作时,进入步骤1403a;当用户不进行合并设置操作时,进入步骤1403b。
步骤1403a,触发趴下控件的时长超过时间阈值。
当用户进行合并设置操作时,游戏应用程序将第一控件和第二控件进行合并,得到第三控件。示意性的,第一控件为蹲下控件,第二控件为趴下控件。将第一控件(蹲下控件)的控件标识作为第三控件的控件标识,隐藏显示第二控件(趴下控件)的控件标识。
示意性的,时间阈值为0.5秒,第三控件上接收到的触发操作超过0.5秒。
步骤1404a,隐藏显示趴下控件。
此时满足趴下动作的执行条件,虚拟环境画面上未显示有原有的第二控件(趴下控件)。
步骤1405a,开启下蹲控件的趴下状态提示。
此时合并后的第三控件具有支持下蹲动作和趴下动作两种功能。示意性的,通过下蹲控件高亮的形式提示用户,此时的下蹲控件用于控制第一虚拟对象执行下蹲动作。当下蹲控件未呈高亮状态时,下蹲控件用于控制第一虚拟对象执行下蹲动作。
步骤1403b,关闭长按下蹲控件触发趴下操作。
当用户未进行合并设置操作时,此时第一控件(蹲下控件)和第二控件(趴下控件)未合并。
步骤1404b,显示趴下控件。
虚拟环境画面上单独显示有趴下控件。
步骤1405b,关闭下蹲控件的趴下状态提示。
下蹲控件用于控制第一虚拟对象在执行下蹲动作,趴下控件用于控制第一虚拟对象执行趴下动作。两种控件的功能并不合并。
在一局游戏中,上述步骤1402至步骤1405a(步骤1405b)可以循环实施,直到一局游戏结束。
在一个示例中,在对控件合并后,游戏应用程序判断用户的操作是控制第一虚拟对象执行趴下动作还是下蹲动作,包括如下步骤,如图15所示:
步骤1501,开始。
步骤1502,是否触发合并后的控件。
当用户设置合并操作后,游戏应用程序将第一控件和第二控件进行合并,得到第三控件。当用户触发第三控件时,进入步骤1503a,当用户未触发第三控件,进入步骤1503b。
步骤1503a,记录初始时间△t=0。
第三控件被触发,游戏应用程序记录初始时间,以及触发过程对应的时长T。
步骤1503b,虚拟环境画面未变化。
第三控件未被触发,虚拟环境画面不发生变化。
步骤1504,计算△t=△t+T。
计算触发的初始时间△t与触发时长T的总时长。
步骤1505,用户是否松手。
游戏应用程序判断用户是否松手,若用户松手进入步骤1506;若用户不松手,返回步骤1504(继续计算时长)。
步骤1506,判断△t>0.5是否成立。
游戏应用程序判断第三控件被触发的时长是否大于时间阈值,若第三控件被触发的时长大于时间阈值,进入步骤1507a;若第三控件被触发的时长不大于时间阈值,进入步骤1507b。
步骤1507a,触发下蹲动作。
第三控件被触发的时长大于时间阈值,游戏应用程序控制第一虚拟对象执行下蹲动作。
步骤1507b,触发趴下动作。
第三控件被触发的时长大于时间阈值,游戏应用程序控制第一虚拟对象执行下蹲动作。
需要说明的是,用户每次修改操作选项后都会在内存里记录当前的选项设置,其中主键值为SettingKeys.HideProneBtn,值内容即为当前选项的状态(true或者是false)。这部分信息会存储在用户对应的计算机设备的本地数据里,当用户使用该计算机设备重新登录游戏或者用户开启新对局时候,都会从本地数据里去获取该选项的取值,以此来保持用户操作设置的一致性。
在一局游戏中,上述步骤1502至步骤1507a(步骤1507b)可以循环实施,直到一局游戏结束。
上述实施例是基于游戏的应用场景对上述方法进行描述,下面以军事仿真的应用场景对上述方法进行示例性说明。
仿真技术是应用软件和硬件通过模拟真实世界的实验,反映系统行为或过程的模型技术。
军事仿真程序是利用仿真技术针对军事应用专门构建的程序,对海、陆、空等作战元素、武器装备性能以及作战行动等进行量化分析,进而精确模拟战场环境,呈现战场态势,实现作战体系的评估和决策的辅助。
在一个示例中,士兵在军事仿真程序所在的终端建立一个虚拟的战场,并以组队的形式进行对战。士兵控制战场虚拟环境中的虚拟对象在战场虚拟环境下进行站立、蹲下、坐下、仰卧、俯卧、侧卧、行走、奔跑、攀爬、驾驶、射击、投掷、攻击、受伤、侦查、近身格斗等动作中的至少一种操作。战场虚拟环境包括:平地、山川、高原、盆地、沙漠、河流、湖泊、海洋、植被中的至少一种自然形态,以及建筑物、交通工具、废墟、训练场等地点形态。虚拟对象包括:虚拟人物、虚拟动物、动漫人物等,每个虚拟对象在三维虚拟环境中具有自身的形状和体积,占据三维虚拟环境中的一部分空间。
基于上述情况,在一个示例中,士兵A控制虚拟对象a,士兵B控制虚拟对象b,士兵C控制虚拟对象c,士兵A和士兵B是同一支队伍中的士兵,士兵C与士兵A、士兵B不属于同一支队伍。以虚拟对象c的视角看到虚拟对象a和虚拟对象b,则士兵c在军事仿真程序中设置拆分操作,将射击控件拆分成两个射击控件,分别为射击控件1和射击控件2,射击控件1用于攻击虚拟对象1,射击控件2用于攻击虚拟对象2,其中,当射击控件1和射击控件2被触发时,射击控件1的持续时长大于射击控件2的持续时长(虚拟对象1佩戴有防护虚拟道具,因此需要更长的时间来攻击)。
综上所述,在本实施例中,将上述虚拟环境画面的显示方法应用在军事仿真程序中,士兵结合战术布局合并控件或拆分控件,使得控件的布局更符合自己的使用习惯,从而提高了士兵的人机交互效率,对实战现场进行了更为真实的仿真,使得士兵得到更好的训练。
以下为本申请的装置实施例,对于装置实施例中未详细描述的细节,可以结合参考上述方法实施例中相应的记载,本文不再赘述。
图16示出了本申请的一个示例性实施例提供的虚拟环境画面的显示装置的结构示意图。该装置可以通过软件、硬件或者两者的结合实现成为终端的全部或一部分,该装置包括:
显示模块1610,用于显示虚拟环境画面以及第一控件和第二控件,第一控件和第二控件属于不同的控件类型;
接收模块1620,用于接收针对所述第一控件和所述第二控件的合并设置操作;
处理模块1630,用于响应于合并设置操作将第一控件和第二控件合并为第三控件。
在一个可选的实施例中,该装置包括获取模块1640;
所述获取模块1640,用于响应于所述合并设置操作获取第一控件的第一控件类型和第二控件的第二控件类型;所述处理模块1630,用于响应于第一控件类型和第二控件类型满足预设条件,将第一控件和第二控件合并为第三控件。
在一个可选的实施例中,预设条件包括如下条件中的至少一种:
第一控件类型是辅助类型,第二控件类型是瞄准类型;
第一控件类型是辅助类型,第二控件类型是移动类型;
第一控件类型是移动类型,第二控件类型是瞄准类型;
第一控件类型是移动类型,第二控件类型是状态切换类型;
第一控件类型是状态切换类型,第二控件类型是辅助类型;
第一控件类型是状态切换类型,第二控件类型是瞄准类型。
在一个可选的实施例中,所述显示模块1610,用于更新显示用于替换所述第一控件和所述第二控件的所述第三控件。
在一个可选的实施例中,所述第一控件用于控制第一虚拟对象执行第一动作;
所述第二控件用于控制所述第一虚拟对象执行第二动作;
所述第三控件用于控制所述第一虚拟对象执行所述第一动作和所述第二动作。
在一个可选的实施例中,所述处理模块1630,用于响应于针对第三控件的第一操作,控制第一虚拟对象执行第一动作;响应于针对第三控件的第二操作,控制第一虚拟对象执行第二动作;或,响应于针对第三控件的第三操作,控制第一虚拟对象同时执行第一动作和第二动作;或,响应于针对第三控件的第四操作,获取第一动作和第二动作的优先级;根据优先 级控制第一虚拟对象按照预设顺序执行第一动作和第二动作。
在一个可选的实施例中,所述处理模块1630,用于响应于针对所述第三控件的第一拆分设置操作,将第三控件拆分成第一控件和第二控件;或,响应于针对所述第三控件的第二拆分设置操作,将第三控件拆分成第四控件和第五控件,第四控件和第五控件属于同一类型的控件。
在一个可选的实施例中,所述获取模块1640,用于响应于针对所述第三控件的第一拆分设置操作,获取第三控件对应的动作类型;所述处理模块1630,用于基于动作类型将第三控件拆分成第一控件和第二控件。
在一个可选的实施例中,所述获取模块1640,用于响应于针对所述第三控件的第二拆分设置操作,获取第三控件对应的至少两个动作之间的关联关系;所述处理模块1630,用于基于关联关系将第三控件拆分成第四控件和第五控件。
在一个可选的实施例中,虚拟环境画面是以第一虚拟对象的视角对虚拟环境进行观察得到的画面;
所述获取模块1640,用于响应于第一虚拟对象的视角内包括至少两个第二虚拟对象,获取第二虚拟对象的数量;所述处理模块1630,用于基于第二虚拟对象的数量对第三控件进行拆分。
在一个可选的实施例中,所述显示模块1610,用于更新显示所述第三控件拆分后用于替代所述第三控件的至少两个拆分控件。
在一个可选的实施例中,所述处理模块1630,用于响应于针对第一控件的第一操作,控制第一虚拟对象执行第一动作;响应于针对第二控件的第二操作,控制第一虚拟对象在执行第一动作的同时,执行第二动作;或,响应于针对第四控件的第三操作,控制第一虚拟对象执行第四动作;响应于针对第五控件的第四操作,控制第一虚拟对象执行第五动作,第四动作和第五动作具有关联关系。
综上所述,本实施例提供的装置,通过接收到的合并设置操作,将虚拟环境画面上显示的不同类型的控件进行合并,使得用户可以通过自主设置,将不常用的UI控件合并为同一UI控件,或者将需要结合使用的UI控件合并为同一UI控件,通过改变UI控件在虚拟环境画面上的布局,简化了用户控制虚拟对象的过程,提高了人机交互效率。
通过判断第一控件的第一控件类型和第二控件的第二控件类型是否满足预设条件,将第一控件和第二控件合并为第三控件,使得用户能够通过合并设置操作将不同类型的控件进行合并,从而使得UI控件在虚拟环境画面上的布局更加灵活。
通过将满足预设条件的UI控件的类型进行列举,使得用户确定可合并的UI控件的类型,灵活合并UI控件。
当第一控件和第二控件合并后,更新显示虚拟环境画面,更新后的虚拟环境画面显示有合并后的第三控件,用户可以更加直观地通过更新后的虚拟环境画面控制虚拟对象。
当第三控件上接收到不同的操作时,控制虚拟对象按照不同的规则执行不同的动作,使得控制虚拟对象的方式更加灵活多样,有助于用户设置符合自己喜好或适合自己使用习惯的UI控件的布局。
当接收到拆分设置操作,将第三控件拆分成相同类型的动作或不同类型的控件,使得用户可针对不同的战况有针对性地拆分控件,在符合用户的操作习惯的同时,保证用户控制的虚拟对象提高对战效率。
通过根据第三控件在被触发时对应的动作类型对第三控件进行拆分,使得拆分后的控件可实现不同类型的动作对应的功能,方便用户根据不同的功能对第三控件进行拆分。
通过根据第三控件在被触发时对应的至少两个动作之间的关联关系对第三控件进行拆分,使得拆分后的控件可实现同类动作的不同层级,方便用户根据不同对战情况执行不同层级的动作,有利于提高虚拟对象的作战效率。
通过根据第一虚拟对象视野内的第二虚拟对象的数量来拆分第三控件,使得拆分后的控 件可针对不同的虚拟对象进行攻击,有利于提高虚拟对象的作战效率。
当第三控件拆分后,更新显示虚拟环境画面,更新后的虚拟环境画面显示有拆分后的控件,用户可以更加直观地通过更新后的虚拟环境画面控制虚拟对象。
当拆分后的控件上接收到不同的操作时,可控制虚拟对象执行不同的操作,使得控制虚拟对象的方式更加灵活多样,有助于用户设置符合自己喜好或适合自己使用习惯的UI控件的布局。
需要说明的是:上述实施例提供的虚拟环境画面的显示装置,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的虚拟环境画面的显示装置与虚拟环境画面的显示方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
请参考图17,其示出了本申请一个示例性实施例提供的计算机设备1700的结构框图。该计算机设备1700可以是便携式移动终端,比如:智能手机、平板电脑、MP3播放器(Moving Picture Experts Group Audio Layer III,动态影像专家压缩标准音频层面3)、MP4(Moving Picture Experts Group Audio Layer IV,动态影像专家压缩标准音频层面4)播放器。计算机设备1700还可能被称为用户设备、便携式终端等其他名称。
通常,计算机设备1700包括有:处理器1701和存储器1702。
处理器1701可以包括一个或多个处理核心,比如4核心处理器、8核心处理器等。
存储器1702可以包括一个或多个计算机可读存储介质,该计算机可读存储介质可以是有形的和非暂态的。存储器1702还可包括高速随机存取存储器,以及非易失性存储器,比如一个或多个磁盘存储设备、闪存存储设备。在一些实施例中,存储器1702中的非暂态的计算机可读存储介质用于存储至少一个指令,该至少一个指令用于被处理器1701所执行以实现本申请实施例中提供的虚拟环境画面的显示方法。
在一些实施例中,计算机设备1700还可选包括有:外围设备接口1703和至少一个外围设备。具体地,外围设备包括:射频电路1704、触摸显示屏1705、摄像头组件1706、音频电路1707、定位组件1708和电源1709中的至少一种。
外围设备接口1703可被用于将I/O(Input/Output,输入/输出)相关的至少一个外围设备连接到处理器1701和存储器1702。
射频电路1704用于接收和发射RF(Radio Frequency,射频)信号,也称电磁信号。射频电路1704通过电磁信号与通信网络以及其他通信设备进行通信。射频电路1704将电信号转换为电磁信号进行发送,或者,将接收到的电磁信号转换为电信号。
触摸显示屏1705用于显示UI(User Interface,用户界面)。
摄像头组件1706用于采集图像或视频。音频电路1707用于提供用户和计算机设备1700之间的音频接口。音频电路1707可以包括麦克风和扬声器。
定位组件1708用于定位计算机设备1700的当前地理位置,以实现导航或LBS(Location Based Service,基于位置的服务)。
电源1709用于为计算机设备1700中的各个组件进行供电。
在一些实施例中,计算机设备1700还包括有一个或多个传感器1710。该一个或多个传感器1710包括但不限于:加速度传感器1711、陀螺仪传感器1712、压力传感器1713、指纹传感器1714、光学传感器1715以及接近传感器1716。
本领域技术人员可以理解,图17中示出的结构并不构成对计算机设备1700的限定,可以包括比图示更多或更少的组件,或者组合某些组件,或者采用不同的组件布置。
本申请实施例还提供一种计算机设备,该计算机设备包括处理器和存储器,该存储器中存储有至少一条指令、至少一段程序、代码集或指令集,该至少一条指令、该至少一段程序、 该代码集或指令集由该处理器加载并执行以实现如上述各方法实施例提供的虚拟环境画面的显示方法。
本申请实施例还提供一种计算机可读存储介质,该存储介质中存储有至少一条指令、至少一段程序、代码集或指令集,该至少一条指令、至少一段程序、代码集或指令集由处理器加载并执行以实现上述各方法实施例提供的虚拟环境画面的显示方法。
本申请实施例还提供一种计算机程序产品或计算机程序,所述计算机程序产品或计算机程序包括计算机指令,所述计算机指令存储在计算机可读存储介质中。计算机设备的处理器从所述计算机可读存储介质读取所述计算机指令,所述处理器执行所述计算机指令,使得所述计算机设备执行如上方面所述的虚拟环境画面的显示方法。
应当理解的是,在本文中提及的“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。字符“/”一般表示前后关联对象是一种“或”的关系。
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。
以上所述仅为本申请的可选实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (15)

  1. A method for displaying a virtual environment picture, applied to a computer device, the method comprising:
    displaying a virtual environment picture as well as a first control and a second control, the first control and the second control belonging to different control types;
    receiving a merge setting operation for the first control and the second control; and
    merging the first control and the second control into a third control in response to the merge setting operation.
  2. The method according to claim 1, wherein the merging the first control and the second control into a third control in response to the merge setting operation comprises:
    obtaining a first control type of the first control and a second control type of the second control in response to the merge setting operation; and
    merging the first control and the second control into the third control in response to the first control type and the second control type satisfying a preset condition.
  3. The method according to claim 2, wherein the preset condition comprises at least one of the following conditions:
    the first control type is an auxiliary type, and the second control type is an aiming type;
    the first control type is the auxiliary type, and the second control type is a movement type;
    the first control type is the movement type, and the second control type is the aiming type;
    the first control type is the movement type, and the second control type is a state switching type;
    the first control type is the state switching type, and the second control type is the auxiliary type;
    the first control type is the state switching type, and the second control type is the aiming type.
  4. The method according to any one of claims 1 to 3, further comprising, after the merging of the first control and the second control into the third control in response to the merge setting operation:
    updating and displaying the third control to replace the first control and the second control.
  5. The method according to any one of claims 1 to 3, wherein:
    the first control is used to control a first virtual object to perform a first action;
    the second control is used to control the first virtual object to perform a second action; and
    the third control is used to control the first virtual object to perform the first action and the second action.
  6. The method according to claim 5, further comprising:
    in response to a first operation on the third control, controlling the first virtual object to perform the first action, and in response to a second operation on the third control, controlling the first virtual object to perform the second action;
    or,
    in response to a third operation on the third control, controlling the first virtual object to perform the first action and the second action simultaneously;
    or,
    in response to a fourth operation on the third control, obtaining priorities of the first action and the second action, and controlling the first virtual object to perform the first action and the second action in a preset order according to the priorities.
  7. The method according to any one of claims 1 to 3, further comprising:
    in response to a first split setting operation on the third control, splitting the third control into the first control and the second control;
    or,
    in response to a second split setting operation on the third control, splitting the third control into a fourth control and a fifth control, the fourth control and the fifth control belonging to the same control type.
  8. The method according to claim 7, wherein the splitting the third control into the first control and the second control in response to the first split setting operation on the third control comprises:
    obtaining an action type corresponding to the third control in response to the first split setting operation on the third control; and
    splitting the third control into the first control and the second control based on the action type.
  9. The method according to claim 7, wherein the splitting the third control into the fourth control and the fifth control in response to the second split setting operation on the third control comprises:
    obtaining an association relationship between at least two actions corresponding to the third control in response to the second split setting operation on the third control; and
    splitting the third control into the fourth control and the fifth control based on the association relationship.
  10. The method according to claim 7, wherein the virtual environment picture is a picture obtained by observing the virtual environment from the perspective of a first virtual object; and
    the method further comprises:
    in response to the perspective of the first virtual object including at least two second virtual objects, splitting the third control based on the number of the second virtual objects.
  11. The method according to claim 7, further comprising:
    updating and displaying at least two split controls that replace the third control after the third control is split.
  12. The method according to claim 7, further comprising, after the splitting of the third control into the first control and the second control:
    in response to a first operation on the first control, controlling the first virtual object to perform a first action, and in response to a second operation on the second control, controlling the first virtual object to perform a second action while performing the first action;
    or, after the splitting of the third control into the fourth control and the fifth control:
    in response to a third operation on the fourth control, controlling the first virtual object to perform a fourth action, and in response to a fourth operation on the fifth control, controlling the first virtual object to perform a fifth action, the fourth action and the fifth action having an association relationship.
  13. An apparatus for displaying a virtual environment picture, the apparatus comprising:
    a display module, configured to display a virtual environment picture as well as a first control and a second control, the first control and the second control belonging to different control types;
    a receiving module, configured to receive a merge setting operation for the first control and the second control; and
    a processing module, configured to merge the first control and the second control into a third control in response to the merge setting operation.
  14. A computer device, comprising a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, the instruction, the program, the code set, or the instruction set being loaded and executed by the processor to implement the method for displaying a virtual environment picture according to any one of claims 1 to 11.
  15. A computer-readable storage medium, storing at least one instruction, at least one program, a code set, or an instruction set, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by a processor to implement the method for displaying a virtual environment picture according to any one of claims 1 to 11.
PCT/CN2021/113710 2020-08-26 2021-08-20 虚拟环境画面的显示方法、装置、设备及存储介质 WO2022042435A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
KR1020227040140A KR20230007392A (ko) 2020-08-26 2021-08-20 가상 환경 픽처를 디스플레이하기 위한 방법 및 장치, 디바이스, 및 저장 매체
JP2022560942A JP7477640B2 (ja) 2020-08-26 2021-08-20 仮想環境画面の表示方法、装置及びコンピュータプログラム
US17/883,323 US20220379214A1 (en) 2020-08-26 2022-08-08 Method and apparatus for a control interface in a virtual environment
JP2024067203A JP2024099643A (ja) 2020-08-26 2024-04-18 仮想環境画面の表示方法、装置及びコンピュータプログラム

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010870309.1 2020-08-26
CN202010870309.1A CN111921194A (zh) 2020-08-26 2020-08-26 虚拟环境画面的显示方法、装置、设备及存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/883,323 Continuation US20220379214A1 (en) 2020-08-26 2022-08-08 Method and apparatus for a control interface in a virtual environment

Publications (1)

Publication Number Publication Date
WO2022042435A1 true WO2022042435A1 (zh) 2022-03-03

Family

ID=73305521

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/113710 WO2022042435A1 (zh) 2020-08-26 2021-08-20 虚拟环境画面的显示方法、装置、设备及存储介质

Country Status (5)

Country Link
US (1) US20220379214A1 (zh)
JP (2) JP7477640B2 (zh)
KR (1) KR20230007392A (zh)
CN (1) CN111921194A (zh)
WO (1) WO2022042435A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111921194A (zh) * 2020-08-26 2020-11-13 腾讯科技(深圳)有限公司 虚拟环境画面的显示方法、装置、设备及存储介质
CN112402960B (zh) * 2020-11-19 2022-11-04 腾讯科技(深圳)有限公司 虚拟场景中状态切换方法、装置、设备及存储介质
CN113398564B (zh) * 2021-07-12 2024-02-13 网易(杭州)网络有限公司 虚拟角色控制方法、装置、存储介质及计算机设备
CN113476823B (zh) * 2021-07-13 2024-02-27 网易(杭州)网络有限公司 虚拟对象控制方法、装置、存储介质及电子设备
CN113926181A (zh) * 2021-10-21 2022-01-14 腾讯科技(深圳)有限公司 虚拟场景的对象控制方法、装置及电子设备
CN118767426A (zh) * 2023-04-21 2024-10-15 网易(杭州)网络有限公司 虚拟对象的控制方法、装置和电子设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105549888A (zh) * 2015-12-15 2016-05-04 芜湖美智空调设备有限公司 组合控件生成方法和装置
CN106126064A (zh) * 2016-06-24 2016-11-16 乐视控股(北京)有限公司 一种信息处理方法及设备
CN109078326A (zh) * 2018-08-22 2018-12-25 网易(杭州)网络有限公司 游戏的控制方法和装置
WO2019201047A1 (zh) * 2018-04-16 2019-10-24 腾讯科技(深圳)有限公司 在虚拟环境中进行视角调整的方法、装置及可读存储介质
CN111209000A (zh) * 2020-01-08 2020-05-29 网易(杭州)网络有限公司 自定义控件的处理方法、装置、电子设备及存储介质
CN111921194A (zh) * 2020-08-26 2020-11-13 腾讯科技(深圳)有限公司 虚拟环境画面的显示方法、装置、设备及存储介质

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7749089B1 (en) * 1999-02-26 2010-07-06 Creative Kingdoms, Llc Multi-media interactive play system
JP5137932B2 (ja) * 2009-11-17 2013-02-06 株式会社ソニー・コンピュータエンタテインメント 通信システム、端末装置、通信処理方法、通信処理プログラム、通信処理プログラムが記憶された記憶媒体、拡張機器
KR102109054B1 (ko) * 2013-04-26 2020-05-28 삼성전자주식회사 애니메이션 효과를 제공하는 사용자 단말 장치 및 그 디스플레이 방법
KR101866198B1 (ko) * 2016-07-06 2018-06-11 (주) 덱스인트게임즈 터치스크린 기반의 게임제공방법 및 프로그램
US20190282895A1 (en) * 2018-03-13 2019-09-19 Microsoft Technology Licensing, Llc Control sharing for interactive experience
CN109701274B (zh) * 2018-12-26 2019-11-08 网易(杭州)网络有限公司 信息处理方法及装置、存储介质、电子设备
US20200298110A1 (en) * 2019-03-20 2020-09-24 Eric Alan Koziel Universal Game Controller Remapping Device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105549888A (zh) * 2015-12-15 2016-05-04 芜湖美智空调设备有限公司 组合控件生成方法和装置
CN106126064A (zh) * 2016-06-24 2016-11-16 乐视控股(北京)有限公司 一种信息处理方法及设备
WO2019201047A1 (zh) * 2018-04-16 2019-10-24 腾讯科技(深圳)有限公司 在虚拟环境中进行视角调整的方法、装置及可读存储介质
CN109078326A (zh) * 2018-08-22 2018-12-25 网易(杭州)网络有限公司 游戏的控制方法和装置
CN111209000A (zh) * 2020-01-08 2020-05-29 网易(杭州)网络有限公司 自定义控件的处理方法、装置、电子设备及存储介质
CN111921194A (zh) * 2020-08-26 2020-11-13 腾讯科技(深圳)有限公司 虚拟环境画面的显示方法、装置、设备及存储介质

Also Published As

Publication number Publication date
JP7477640B2 (ja) 2024-05-01
US20220379214A1 (en) 2022-12-01
JP2024099643A (ja) 2024-07-25
CN111921194A (zh) 2020-11-13
KR20230007392A (ko) 2023-01-12
JP2023523157A (ja) 2023-06-02

Similar Documents

Publication Publication Date Title
WO2022042435A1 (zh) 虚拟环境画面的显示方法、装置、设备及存储介质
JP7476235B2 (ja) 仮想オブジェクトの制御方法、装置、デバイス及びコンピュータプログラム
WO2022151946A1 (zh) 虚拟角色的控制方法、装置、电子设备、计算机可读存储介质及计算机程序产品
WO2022057529A1 (zh) 虚拟场景中的信息提示方法、装置、电子设备及存储介质
WO2022134980A1 (zh) 虚拟对象的控制方法、装置、终端及存储介质
CN111399639B (zh) 虚拟环境中运动状态的控制方法、装置、设备及可读介质
CN112416196B (zh) 虚拟对象的控制方法、装置、设备及计算机可读存储介质
CN113440846B (zh) 游戏的显示控制方法、装置、存储介质及电子设备
WO2022105362A1 (zh) 虚拟对象的控制方法、装置、设备、存储介质及计算机程序产品
WO2021238870A1 (zh) 信息显示方法、装置、设备及存储介质
JP7492611B2 (ja) バーチャルシーンにおけるデータ処理方法、装置、コンピュータデバイス、及びコンピュータプログラム
WO2022052831A1 (zh) 应用程序内的控件位置调整方法、装置、设备及存储介质
CN114225372B (zh) 虚拟对象的控制方法、装置、终端、存储介质及程序产品
CN112402960A (zh) 虚拟场景中状态切换方法、装置、设备及存储介质
WO2022227958A1 (zh) 虚拟载具的显示方法、装置、设备以及存储介质
WO2023010690A1 (zh) 虚拟对象释放技能的方法、装置、设备、介质及程序产品
CN113546422B (zh) 虚拟资源的投放控制方法、装置、计算机设备及存储介质
WO2023134272A1 (zh) 视野画面的显示方法、装置及设备
KR20220098355A (ko) 가상 대상 상호작용 모드를 선택하기 위한 방법 및 장치, 디바이스, 매체, 및 제품
CN111249726B (zh) 虚拟环境中虚拟道具的操作方法、装置、设备及可读介质
KR20210144786A (ko) 가상 환경 픽처를 디스플레이하기 위한 방법 및 장치, 디바이스, 및 저장 매체
KR20230042517A (ko) 연락처 정보 디스플레이 방법, 장치 및 전자 디바이스, 컴퓨터-판독가능 저장 매체, 및 컴퓨터 프로그램 제품
CN111589129B (zh) 虚拟对象的控制方法、装置、设备及介质
WO2022170892A1 (zh) 虚拟对象的控制方法、装置、设备、存储介质及程序产品
CN115645923A (zh) 游戏交互方法、装置、终端设备及计算机可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21860272

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022560942

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 20227040140

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 04-07-2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21860272

Country of ref document: EP

Kind code of ref document: A1