CN113101634A - Virtual map display method and device, electronic equipment and storage medium - Google Patents

Virtual map display method and device, electronic equipment and storage medium

Info

Publication number
CN113101634A
CN113101634A (application CN202110420230.3A)
Authority
CN
China
Prior art keywords
virtual
virtual object
map
scene
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110420230.3A
Other languages
Chinese (zh)
Other versions
CN113101634B (en)
Inventor
李光
刘超
王翔宇
彭鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202110420230.3A priority Critical patent/CN113101634B/en
Publication of CN113101634A publication Critical patent/CN113101634A/en
Application granted granted Critical
Publication of CN113101634B publication Critical patent/CN113101634B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20: Input arrangements for video game devices
    • A63F13/21: Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/214: Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads
    • A63F13/2145: Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads, the surface being also a display device, e.g. touch screens
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40: Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42: Processing input control signals of video game devices by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/53: Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/537: Controlling the output signals based on the game progress involving additional visual information provided to the game scene using indicators, e.g. showing the condition of a game character on screen
    • A63F13/5378: Controlling the output signals based on the game progress involving additional visual information provided to the game scene using indicators, for displaying an additional top view, e.g. radar screens or maps
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/85: Providing additional services to players
    • A63F13/87: Communicating with other players during game play, e.g. by e-mail or chat

Abstract

The application provides a virtual map display method, a virtual map display device, an electronic device and a storage medium. The method comprises the following steps: in response to a touch operation on a function control, displaying a position marking interface in the graphical user interface, and displaying, in the position marking interface, the role identifier of at least one second virtual object and/or a first virtual object according to the position mark information reported by the at least one second virtual object and/or the first virtual object. Based on this scheme, the role identifier of each virtual object can be displayed according to the position mark information uploaded for the virtual object controlled by each player in the game scene. Because the information uploading process is quick, the position information of the virtual object controlled by each player in the game scene is transmitted and displayed in an efficient and clear manner, which assists players in reasoning, judging and discussing more quickly during the game discussion stage and effectively improves the players' game efficiency.

Description

Virtual map display method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of human-computer interaction, in particular to a virtual map display method and device, electronic equipment and a storage medium.
Background
With the continuous development of the game industry, game types continue to expand. Among them, reasoning games are popular with players due to their unique charm. Such a game requires a plurality of players to participate in interaction, and players belonging to different camps carry out reasoning and voting while completing specified tasks.
During the game discussion phase, a player needs to obtain basic information for reasoning, judgment and discussion; for example, the basic information may include: who initiated the discussion, who was killed, where the corpse is located, where each player is located, and so on. However, since many players are involved, it is difficult to remember the behavior description and position statement of each player in the game scene, and the positions of the players in the game scene are easily confused, which tends to make the players' game efficiency low.
Disclosure of Invention
In view of the above, an object of the present application is to provide a virtual map display method, an apparatus, an electronic device and a storage medium, in which the role identifier of each virtual object is displayed according to the uploaded position mark information of the virtual object manipulated by each player in a game scene. Because the process of uploading the information is fast, the position information of the virtual object manipulated by each player in the game scene is transmitted and displayed efficiently and clearly.
In a first aspect, an embodiment of the present application provides a virtual map display method, where a terminal device provides a graphical user interface, and at least a part of a virtual scene and a first virtual object are displayed on the graphical user interface, and the virtual map display method includes:
in response to the movement operation of the first virtual object, controlling the first virtual object to move in a first virtual scene, and controlling a range of the first virtual scene displayed in the graphical user interface to correspondingly change according to the movement of the first virtual object;
responding to a preset trigger event, and controlling the virtual scene displayed in the graphical user interface to be switched from the first virtual scene to a second virtual scene, wherein the second virtual scene comprises at least one second virtual object;
in response to a touch operation on the function control, displaying a position marking interface in the graphical user interface, and displaying, in the position marking interface, the role identifier of the at least one second virtual object and/or the first virtual object according to the position mark information reported by the at least one second virtual object and/or the first virtual object.
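The three responsive steps of the first aspect can be sketched as event handlers. The following is a minimal illustration only; all class, method and field names are assumptions, not taken from the patent:

```python
class Scene:
    """Minimal stand-in for a virtual scene whose displayed range can pan."""

    def __init__(self, name):
        self.name = name
        self.view_center = (0.0, 0.0)

    def center_on(self, pos):
        # The range of the scene shown in the GUI follows this center point.
        self.view_center = pos


class FirstVirtualObject:
    def __init__(self):
        self.position = (0.0, 0.0)


class VirtualMapController:
    def __init__(self, first_scene, second_scene):
        self.current_scene = first_scene
        self.second_scene = second_scene
        self.marks = {}  # object_id -> reported position mark information

    def on_move(self, obj, delta):
        """Move the first virtual object; the displayed scene range follows it."""
        obj.position = (obj.position[0] + delta[0], obj.position[1] + delta[1])
        self.current_scene.center_on(obj.position)

    def on_preset_trigger_event(self):
        """Switch the displayed scene from the first to the second virtual scene."""
        self.current_scene = self.second_scene

    def on_function_control_touch(self):
        """Open the position marking interface with all reported identifiers."""
        return {"interface": "position_marking", "identifiers": dict(self.marks)}
```

In this sketch the reported marks are simply collected in a dictionary; how they are laid out in the position marking interface is the subject of the grid and gap strategies described below in the patent text.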
Preferably, the step of displaying the role identifier of the at least one second virtual object and/or the first virtual object according to the position mark information reported by the at least one second virtual object and/or the first virtual object includes:
determining initial display positions of the role identifications according to the position mark information, and determining final display positions of the role identifications according to distances among the initial display positions;
and displaying the role identification of the at least one second virtual object and/or the first virtual object according to the final display position.
Preferably, a virtual map corresponding to the virtual scene is included in the position marking interface.
Preferably, the position mark information reported by the first virtual object is determined by the following method:
displaying a position reporting prompt identifier at a map position, in the virtual map, corresponding to the actual position of the first virtual object in the virtual scene;
and generating position marking information of the first virtual object determined according to the position reporting prompt identifier in response to a position reporting trigger operation aiming at the virtual map, wherein the position marking information comprises the map position of the position reporting prompt identifier in the virtual map.
Preferably, the step of generating, in response to a position reporting trigger operation for the virtual map, position mark information of the first virtual object determined according to the position reporting prompt identifier includes:
determining the actual position of the first virtual object in the virtual scene as position marking information of the first virtual object in response to the triggering operation of a position reporting control displayed on the virtual map;
alternatively, in response to a location selection operation performed on the virtual map, the selected location on the virtual map is determined as the location marker information of the first virtual object.
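The two reporting paths just described (report the object's actual position, or a position the player selects on the map) can be sketched roughly as follows. The scene-to-map projection, the scale parameter, and all names are assumptions for illustration:

```python
from dataclasses import dataclass


@dataclass
class PositionMark:
    """Position mark information (in virtual-map coordinates) for one object."""
    object_id: int
    x: float
    y: float


def scene_to_map(scene_x, scene_y, scale, origin=(0.0, 0.0)):
    """Project an actual scene position onto the 2-D virtual map."""
    return (origin[0] + scene_x * scale, origin[1] + scene_y * scale)


def mark_from_report_control(object_id, scene_pos, scale):
    """Position reporting control triggered: mark the object's actual position."""
    mx, my = scene_to_map(scene_pos[0], scene_pos[1], scale)
    return PositionMark(object_id, mx, my)


def mark_from_map_selection(object_id, selected_map_pos):
    """Player selected a location on the virtual map: mark that location."""
    return PositionMark(object_id, selected_map_pos[0], selected_map_pos[1])
```

Either path produces the same kind of mark record, so the display logic downstream does not need to know which reporting operation the player used.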
Preferably, the virtual map display method further includes: constructing a two-dimensional coordinate grid corresponding to the virtual map, wherein a coordinate position in the two-dimensional coordinate grid and a map position in the virtual map have a corresponding relation;
the step of adjusting the initial display positions according to the distance between the initial display positions to determine the final display position of the character identifier in the position marking interface includes at least one of the following steps:
determining a first target grid intersection in the two-dimensional coordinate grid which is nearest to the map position corresponding to the initial display position and is not occupied, and determining, according to the corresponding relation, the map position in the virtual map corresponding to the coordinate position of the first target grid intersection, to be used as a final display position; and/or
determining a second target grid intersection in the two-dimensional coordinate grid which is nearest to the map position corresponding to the initial display position and is not occupied, in the direction of the map position corresponding to the actual position, in the virtual scene, of the virtual object associated with the initial display position, and determining, according to the corresponding relation, the map position in the virtual map corresponding to the coordinate position of the second target grid intersection, to be used as a final display position.
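The first of these grid strategies (snap each identifier to the nearest unoccupied grid intersection) might be sketched as below. The grid dimensions, brute-force scan, and first-come tie-breaking are assumptions, not details from the patent:

```python
def nearest_free_intersection(grid_w, grid_h, occupied, pos):
    """Return the unoccupied grid intersection nearest to map position `pos`.

    `occupied` is a set of (col, row) intersections already claimed by other
    role identifiers; ties are broken by scan order (columns, then rows).
    Returns None only if every intersection is occupied.
    """
    px, py = pos
    best, best_d2 = None, float("inf")
    for col in range(grid_w + 1):
        for row in range(grid_h + 1):
            if (col, row) in occupied:
                continue
            d2 = (col - px) ** 2 + (row - py) ** 2
            if d2 < best_d2:
                best, best_d2 = (col, row), d2
    return best


def place_identifiers(initial_positions, grid_w, grid_h):
    """Snap each initial display position to the nearest free intersection."""
    occupied, final = set(), {}
    for oid, pos in initial_positions.items():
        cell = nearest_free_intersection(grid_w, grid_h, occupied, pos)
        occupied.add(cell)
        final[oid] = cell
    return final
```

Because each claimed intersection is added to `occupied` before the next object is placed, two identifiers reporting nearly identical positions are still pushed to distinct intersections, which is the effect the claim is after.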
Preferably, the character identifier occupies a corresponding display area in the position marking interface, and identity information indicating the identity of the virtual object is displayed in the character identifier.
Among the character identifiers displayed in the position marking interface, a preset gap is maintained between the identity information displayed in the character identifiers of any two adjacent virtual objects, and the display areas occupied by the character identifiers of two adjacent virtual objects either do not overlap or overlap only at the edges of the display areas.
Preferably, the step of adjusting the initial display positions according to the distance between the initial display positions to determine the final display position of the character identifier in the position marking interface includes:
in response to detecting that the distance between any two adjacent initial display positions is smaller than a preset gap, adjusting at least one of the two adjacent initial display positions to determine a final display position of the character identifier.
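One way to realize this adjustment is a simple pairwise relaxation that pushes any two too-close positions apart until the preset gap is respected. The bounded pass count and the choice to move both points symmetrically are assumptions for illustration:

```python
import math


def enforce_min_gap(positions, min_gap):
    """Push apart any pair of display positions closer than `min_gap`.

    `positions` maps object_id -> (x, y). Runs a bounded number of
    relaxation passes; each pass moves both members of a violating pair
    half the deficit apart along their separation axis.
    """
    pts = {k: list(v) for k, v in positions.items()}
    for _ in range(50):  # bounded number of relaxation passes
        moved = False
        keys = list(pts)
        for i in range(len(keys)):
            for j in range(i + 1, len(keys)):
                a, b = pts[keys[i]], pts[keys[j]]
                dx, dy = b[0] - a[0], b[1] - a[1]
                d = math.hypot(dx, dy)
                if d < min_gap:
                    if d == 0:
                        # coincident points: pick an arbitrary separation axis
                        dx, dy, d = 1.0, 0.0, 1.0
                    push = (min_gap - d) / 2
                    a[0] -= dx / d * push
                    a[1] -= dy / d * push
                    b[0] += dx / d * push
                    b[1] += dy / d * push
                    moved = True
        if not moved:
            break
    return {k: tuple(v) for k, v in pts.items()}
```

This matches the claim's weaker requirement too: adjusting "at least one" of the two positions would simply move one point the full deficit instead of splitting it.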
Preferably, the character identification is displayed in the position-marking interface by:
and displaying the role identification at the final display position corresponding to each role identification in the position marking interface, and connecting the role identification of each virtual object with the initial display position corresponding to each virtual object.
Preferably, the character identifier is displayed in the position mark interface by at least one of:
in response to a zoom-in display skill triggering operation for the position marking interface, zooming in and displaying a target area in the virtual map and/or a role identification of each virtual object in the target area;
reducing the display size of the character identifier of each virtual object in the position mark interface;
and changing the expression form of the identity information which is displayed in the role identification and used for indicating the identity of the virtual object.
Preferably, the character identification is displayed in the position-marking interface by:
displaying a display strategy control on the position marking interface;
responding to the trigger operation aiming at the display strategy control, and determining a display mode aiming at the role identification;
displaying the character identification of each virtual object in the position mark interface in the determined display mode.
Preferably, the display mode comprises at least one of the following items:
determining the coverage relation of the edges of the display areas occupied by the character identifications of two adjacent virtual objects according to the time sequence of reporting the position mark information by each virtual object;
and determining the coverage relation of the edges of the display areas occupied by the character identifications of the two adjacent virtual objects according to the identity type of each virtual object.
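These two display modes amount to choosing a draw order (identifiers drawn later cover the edges of those drawn earlier). A minimal sketch, in which the dictionary keys and the rank table are illustrative assumptions:

```python
def draw_order(marks, mode="by_time", identity_rank=None):
    """Decide the order in which role identifiers are drawn.

    Identifiers drawn later appear on top, so their edges cover adjacent
    ones. `marks` is a list of dicts with 'object_id', 'report_time' and
    'identity' keys (names are illustrative, not from the patent).
    """
    if mode == "by_time":
        # earlier reports drawn first: the most recent report ends up on top
        return sorted(marks, key=lambda m: m["report_time"])
    if mode == "by_identity":
        rank = identity_rank or {}
        # lower-ranked identities drawn first; higher ranks cover the rest
        return sorted(marks, key=lambda m: rank.get(m["identity"], 0))
    raise ValueError(f"unknown display mode: {mode}")
```

A display strategy control, as in the claim above, would simply switch the `mode` argument (and, for identity-based ordering, supply the rank table).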
In a second aspect, an embodiment of the present application provides a virtual map display apparatus, which provides a graphical user interface through a terminal device, and includes:
a first display control module for displaying at least part of a virtual scene and a first virtual object on the graphical user interface;
the movement control module is used for responding to the movement operation of the first virtual object, controlling the first virtual object to move in a first virtual scene, and controlling the range of the first virtual scene displayed in the graphical user interface to correspondingly change according to the movement of the first virtual object;
the second display control module is used for responding to a preset trigger event and controlling the virtual scene displayed in the graphical user interface to be switched from the first virtual scene to a second virtual scene, wherein the second virtual scene comprises at least one second virtual object;
and the third display control module is used for responding to touch operation aiming at a function control, displaying a position mark interface in the graphical user interface, and displaying the role identification of the at least one second virtual object and/or the first virtual object in the position mark interface according to the position mark information reported by the at least one second virtual object and/or the first virtual object.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor, a storage medium and a bus, wherein the storage medium stores machine-readable instructions executable by the processor, when the electronic device runs, the processor and the storage medium communicate through the bus, and the processor executes the machine-readable instructions to execute the steps of the virtual map display method.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps of the virtual map display method as described above.
The virtual map display method provided by the embodiment of the application comprises the following steps: responding to the movement operation of the first virtual object, controlling the first virtual object to move in the first virtual scene, and controlling the range of the first virtual scene displayed in the graphical user interface to correspondingly change according to the movement of the first virtual object; responding to a preset trigger event, and controlling a virtual scene displayed in a graphical user interface to be switched from a first virtual scene to a second virtual scene, wherein the second virtual scene comprises at least one second virtual object; and responding to the touch operation aiming at the function control, displaying a position marking interface in the graphical user interface, and displaying the role identification of at least one second virtual object and/or the first virtual object in the position marking interface according to the position marking information reported by the at least one second virtual object and/or the first virtual object.
According to the virtual map display method, the role identifier of each virtual object can be displayed according to the uploaded position mark information of the virtual object controlled by each player in the game scene. The information uploading process is rapid, so the position information of the virtual object controlled by each player in the game scene is transmitted and displayed in an efficient and clear manner. This reduces the information statements each player must make during the game discussion stage and relieves the players' memory burden, while also assisting the players in reasoning, judging and discussing more quickly during the game discussion stage, effectively improving the players' game efficiency.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting its scope. For those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a flowchart of a virtual map display method according to an embodiment of the present disclosure;
fig. 2 is a schematic interface diagram corresponding to a discussion phase provided in an embodiment of the present application;
fig. 3 is a schematic interface diagram of a first role identification display provided in an embodiment of the present application;
fig. 4 is a schematic interface diagram of a second role identification display provided in an embodiment of the present application;
fig. 5 is a schematic interface diagram of a third role identification display provided in the embodiment of the present application;
fig. 6 is a schematic interface diagram of a first virtual scene according to an embodiment of the present disclosure;
fig. 7 is one of schematic interface diagrams of a second virtual scene according to an embodiment of the present disclosure;
fig. 8 is a second schematic interface diagram of a first virtual scene according to an embodiment of the present disclosure;
fig. 9 is a third schematic interface diagram of a first virtual scene according to an embodiment of the present disclosure;
fig. 10 is a second schematic interface diagram of a second virtual scene according to an embodiment of the present disclosure;
fig. 11 is a schematic diagram illustrating movement of a virtual object according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a virtual map display apparatus according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. The components of the embodiments of the present application, as generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. Every other embodiment obtained by a person skilled in the art, based on the embodiments of the present application and without creative effort, falls within the protection scope of the present application.
With the continuous development of the game industry, game types continue to expand. Among them, inference-based games are enjoyed by more and more players due to their unique appeal. This type of game requires multiple players to participate in interaction, and players belonging to different camps carry out reasoning and voting while completing given tasks.
Virtual scene:
is a scene that an application program displays (or provides) when running on a terminal or server. Optionally, the virtual scene is a simulated environment of the real world, a semi-simulated and semi-fictional virtual environment, or a purely fictional virtual environment. The virtual scene may be a two-dimensional virtual scene or a three-dimensional virtual scene, and the virtual environment can be sky, land, sea and the like, wherein the land comprises environmental elements such as deserts and cities. The virtual scene is the scene in which the complete game logic runs, for example, the scene in which the user controls a virtual object.
Virtual object:
refers to a dynamic object that can be controlled in a virtual scene. Optionally, the dynamic object may be a virtual character, a virtual animal, an animation character, or the like. The virtual object is a character controlled by a player through an input device, an Artificial Intelligence (AI) character set up through training for a virtual environment match, or a Non-Player Character (NPC) set in a virtual scene match. Optionally, the virtual object is a virtual character playing a game in the virtual scene. Optionally, the number of virtual objects in a virtual scene match is preset, or is dynamically determined according to the number of clients participating in the match, which is not limited in the embodiments of the present application. In one possible implementation, the user can control the virtual object to move in the virtual scene, e.g., control the virtual object to run, jump or crawl, and can also control the virtual object to fight against other virtual objects using skills, virtual props and the like provided by the application.
The player character:
refers to a virtual object that can be manipulated by a player to move in a game environment; in some electronic games, it may also be called a main character or a hero character. The player character may be at least one of different forms such as a virtual character, a virtual animal, an animation character, and a virtual vehicle.
A game interface:
the interface is provided or displayed through a graphical user interface, and the interface comprises a UI interface and a game picture for a player to interact. In alternative embodiments, game controls (e.g., skill controls, movement controls, functionality controls, etc.), indicators (e.g., directional indicators, character indicators, etc.), information presentation areas (e.g., number of clicks, game play time, etc.), or game setting controls (e.g., system settings, stores, coins, etc.) may be included in the UI interface. In an optional embodiment, the game screen is a display screen corresponding to a virtual scene displayed by the terminal device, and the game screen may include virtual objects such as a game character, an NPC character, and an AI character that execute a game logic in the virtual scene.
Virtual item:
refers to a static object in a virtual scene, such as terrain, houses, bridges, or vegetation in a game scene, as distinguished from the controllable virtual objects defined above. Static objects are usually not directly controlled by the player, but may respond to the interaction behavior (e.g., attacking, tearing down) of the virtual objects in the scene; for example, a building may be demolished, picked up, dragged, or built upon by a virtual object. Alternatively, a virtual item may not respond to the interaction behavior of a virtual object; for example, the virtual item may be a building, a door, a window, or a plant in the game scene with which the virtual object cannot interact, such as a window that the virtual object cannot destroy or remove.
The application discloses that the virtual map display method in one embodiment can be operated on a terminal device or a server. The terminal device may be a local terminal device. When the virtual map display method is operated on a server, the virtual map display method can be implemented and executed based on a cloud interactive system, wherein the cloud interactive system comprises the server and a client device.
In an optional embodiment, various cloud applications may be run under the cloud interaction system, for example, cloud games. Taking a cloud game as an example, a cloud game refers to a game mode based on cloud computing. In the running mode of a cloud game, the running main body of the game program is separated from the main body that presents the game picture: the storage and running of the information processing method are completed on a cloud game server, while the client device is used for receiving and sending data and presenting the game picture. That is, the terminal device that performs the information processing is the cloud game server in the cloud. When a game is played, the player operates the client device to send an operation instruction to the cloud game server; the cloud game server runs the game according to the operation instruction, encodes and compresses data such as game pictures, and returns the data to the client device through the network; finally, the data is decoded by the client device and the game picture is output.
In an alternative embodiment, the terminal device may be a local terminal device. Taking a game as an example, the local terminal device stores a game program and is used for presenting a game screen. The local terminal device is used for interacting with the player through a graphical user interface, namely, a game program is downloaded and installed and operated through an electronic device conventionally. The manner in which the local terminal device provides the graphical user interface to the player may include a variety of ways, for example, it may be rendered for display on a display screen of the terminal or provided to the player through holographic projection. For example, the local terminal device may include a display screen for presenting a graphical user interface including a game screen and a processor for running the game, generating the graphical user interface, and controlling display of the graphical user interface on the display screen.
By way of example, one such reasoning game (whose title appears only as an image, Figure BDA0003027605570000101, in the original text) is popular among players. In an inference-type game, a plurality of players participating in the game join the same game match. After the game match is entered, different character attributes, such as identity attributes, are allocated to the virtual objects of the different players; different camps are determined by allocating these different character attributes, and the players win the game competition by executing the tasks allocated at the different stages of the game match. For example, a plurality of virtual objects having an A character attribute may win a game tournament by "culling" the virtual objects having a B character attribute during a game tournament stage.
Taking the same game (Figure BDA0003027605570000102) as an example, 10 players are usually required to participate in the same game match. At the beginning of the match, the identity information (character attribute) of each virtual object in the match is determined; for example, the identity information includes a citizen identity and a werewolf identity. A virtual object having the citizen identity wins the match by completing the assigned designated tasks during the match-up stage, or by eliminating the virtual objects having the werewolf identity in the current game match; a virtual object having the werewolf identity carries out attack actions on other virtual objects having non-werewolf identities during the game stage, so as to eliminate them and win the game.
In the game-play stage in inference-type games, there are generally two game stages: an action phase and a discussion phase.
In the action phase, each virtual object is typically assigned one or more game tasks, and the player controls the corresponding virtual object to move in the game scene and execute those tasks to complete the game match. In an alternative embodiment, a common game task is determined for the virtual objects having the same character attribute in the current match. During the action phase, the virtual objects participating in the current match can move freely to different areas of the action-phase virtual scene to complete their assigned tasks. The virtual objects in the current match include virtual objects having a first character attribute and virtual objects having a second character attribute; in an alternative implementation, when a virtual object having the second character attribute moves within a preset range of a virtual object having the first character attribute in the virtual scene, it can, in response to an attack instruction, attack the virtual object having the first character attribute to eliminate it.
In the discussion phase, a discussion function is provided for the virtual objects representing the players; through the discussion function, the behavior of each virtual object in the action phase is presented, so as to decide whether a specific virtual object should be eliminated from the current game match.
Taking the same inference game as an example, a game match includes two phases: an action phase and a discussion phase. In the action phase, the multiple virtual objects in the match move freely in the virtual scene, and each player sees the other virtual objects that appear within a preset range in the game screen displayed from the perspective of the player's own virtual object. Virtual objects with the citizen identity move through the virtual scene to complete their assigned game tasks; virtual objects with the werewolf identity sabotage the tasks completed by the citizen virtual objects, or execute specific tasks assigned to them, and may also attack citizen virtual objects during the action phase to eliminate them. When the match enters the discussion phase from the action phase, the players discuss through their corresponding virtual objects, trying to identify the virtual objects with the werewolf identity based on the behavior observed during the action phase. A discussion result is determined by voting; if the result indicates a virtual object to be eliminated, that virtual object is eliminated accordingly; otherwise, no virtual object is eliminated in the current discussion phase. The discussion may be conducted by voice, text, or other means.
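The vote-driven elimination decision described above can be sketched as follows. This is a minimal illustration, not part of the patent; the tie-breaking rule (a tie means no elimination) is an assumption.

```python
from collections import Counter

def resolve_discussion(votes):
    """Tally discussion-phase votes and decide whether a virtual object
    is eliminated. `votes` maps a voter's id to the id it voted for,
    or to None for an abstention."""
    tally = Counter(v for v in votes.values() if v is not None)
    if not tally:
        return None                      # no votes cast: nobody is eliminated
    ranked = tally.most_common()
    # A tie for the most votes means no consensus, so no elimination
    # (an assumed rule; the patent does not specify tie handling).
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return None
    return ranked[0][0]
```

A match would call this once per discussion phase and remove the returned virtual object, if any, from the set of voters for subsequent rounds.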
During the discussion phase, players need basic information for reasoning, judgment, and discussion, which may include but is not limited to: who initiated the discussion, who was killed, where the body is located, where each player is located, and so on. However, because many players are involved, it is difficult to remember the behavior descriptions and position statements of every player in the game scene, and the players' positions may be confused, resulting in low game efficiency.
Based on this, an embodiment of the present application provides a virtual map display method that displays the character identifier of each virtual object according to position mark information uploaded for the virtual object operated by each player in the game scene. Because the upload process is quick, the position information of each player's virtual object in the game scene is transmitted and displayed efficiently and clearly. This helps reduce the information each player must state during the discussion phase, reduces the players' memory burden, assists players in making faster reasoning judgments and discussions during the discussion phase, and effectively improves the players' game efficiency.
One embodiment of the present application provides an implementation environment, which may include: a first terminal device, a game server, and a second terminal device. The first terminal device and the second terminal device each communicate with the server to implement data communication. In this embodiment, the first terminal device and the second terminal device each run a client for executing the virtual map display method provided by the present application, and the game server is the server side for executing that method. The first terminal device and the second terminal device each communicate with the game server through their respective clients.
Taking the first terminal device as an example, the first terminal device establishes communication with the game server by running the client. In an alternative embodiment, the server establishes the game pair based on the game request from the client. The parameters of the game play can be determined according to the parameters in the received game request, for example, the parameters of the game play can include the number of people participating in the game play, the level of characters participating in the game play, and the like. And when the first terminal equipment receives the response of the server, displaying the virtual scene corresponding to the game play through the graphical user interface of the first terminal equipment. In an optional implementation manner, the server determines a target game play for the client from a plurality of established game plays according to a game request of the client, and when the first terminal device receives a response of the server, displays a virtual scene corresponding to the game play through a graphical user interface of the first terminal device. The first terminal device is controlled by a first user, the virtual object displayed in the graphical user interface of the first terminal device is a player character controlled by the first user, and the first user inputs an operation instruction through the graphical user interface so as to control the player character to execute corresponding operation in a virtual scene.
Taking the second terminal device as an example, the second terminal device establishes communication with the game server by operating the client. In an alternative embodiment, the server establishes the game pair based on the game request from the client. The parameters of the game play can be determined according to the parameters in the received game request, for example, the parameters of the game play can include the number of people participating in the game play, the level of characters participating in the game play, and the like. And when the second terminal equipment receives the response of the server, displaying the virtual scene corresponding to the game play through the graphical user interface of the second terminal equipment. In an optional implementation manner, the server determines a target game play for the client from a plurality of established game plays according to a game request of the client, and when the second terminal device receives a response from the server, displays a virtual scene corresponding to the game play through a graphical user interface of the second terminal device. The second terminal device is controlled by a second user, the virtual object displayed in the graphical user interface of the second terminal device is a player character controlled by the second user, and the second user inputs an operation instruction through the graphical user interface so as to control the player character to execute corresponding operation in the virtual scene.
The server performs data calculation according to game data reported by the first terminal device and the second terminal device, and synchronizes the calculated game data to the first terminal device and the second terminal device, so that the first terminal device and the second terminal device control rendering of a corresponding virtual scene and/or a corresponding virtual object in a graphical user interface according to the synchronization data issued by the server.
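The report-compute-synchronize loop above can be sketched as follows. This is an illustrative assumption about the data flow, not the patent's implementation; the payload shape and function names are invented for the example.

```python
import json

def synchronize(reported, connections):
    """Merge the game data reported by each terminal device and push
    the merged state back, so every client renders the same virtual
    scene and virtual objects. `reported` maps object id -> state dict;
    `connections` maps object id -> a send(payload) callable."""
    world_state = {obj_id: state for obj_id, state in reported.items()}
    payload = json.dumps({"type": "sync", "state": world_state}, sort_keys=True)
    for send in connections.values():
        send(payload)                    # every terminal receives the same payload
    return world_state
```

Each terminal would then render its graphical user interface from the synchronized state rather than from its own reports alone.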
In the present embodiment, the virtual object controlled by the first terminal device and the virtual object controlled by the second terminal device are virtual objects in the same game play. The virtual object controlled by the first terminal device and the virtual object controlled by the second terminal device may have the same role attribute or different role attributes.
It should be noted that the virtual object in the current game play may include two or more virtual characters, and different virtual characters may correspond to different terminal devices, that is, in the current game play, there are two or more terminal devices that respectively perform game data transmission and synchronization with the game server.
Referring to fig. 1, fig. 1 is a flowchart illustrating a virtual map display method according to an embodiment of the present disclosure. As shown in fig. 1, in the embodiment of the present application, a terminal device provides a graphical user interface, where at least a part of a virtual scene and a first virtual object are displayed on the graphical user interface, and a virtual map display method includes:
S110: in response to a movement operation on the first virtual object, controlling the first virtual object to move in the first virtual scene, and controlling the range of the first virtual scene displayed in the graphical user interface to change correspondingly with the movement of the first virtual object.
S120: in response to a preset trigger event, controlling the virtual scene displayed in the graphical user interface to switch from the first virtual scene to a second virtual scene, where the second virtual scene includes at least one second virtual object.
S130: in response to a touch operation on a function control, displaying a position marking interface in the graphical user interface, and displaying, in the position marking interface, the character identifier of the at least one second virtual object and/or the first virtual object according to the position mark information reported by the at least one second virtual object and/or the first virtual object.
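The three steps can be sketched as a small controller. This is a minimal illustration under assumed data structures; none of the names come from the patent.

```python
class VirtualMapController:
    """Minimal sketch of steps S110-S130: move the first virtual object
    (S110), switch scenes on a trigger event (S120), and surface the
    reported position marks when the map control is touched (S130)."""

    def __init__(self):
        self.scene = "first"
        self.position = (0, 0)       # first virtual object's position
        self.marks = {}              # character identifier -> reported position

    def on_move(self, dx, dy):       # S110: move and update the displayed range
        x, y = self.position
        self.position = (x + dx, y + dy)

    def on_trigger_event(self):      # S120: first -> second virtual scene
        self.scene = "second"

    def on_map_control_touch(self, reported):  # S130: show the marking interface
        self.marks.update(reported)
        return dict(self.marks)      # data the position marking interface displays
```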
The terminal device related to the embodiment of the present application mainly refers to an intelligent device that is used for providing a graphical user interface and can control and operate a virtual object, and the terminal device may include, but is not limited to, any one of the following devices: smart phones, tablet computers, laptop computers, desktop computers, digital televisions, game consoles, and the like. The terminal device has installed and operated therein an application program supporting a game, such as an application program supporting a three-dimensional game or a two-dimensional game. In the embodiment of the present application, an application program is introduced as a game application, and optionally, the application program may be a network online version game application program or a stand-alone version game application program.
The graphic user interface is an interface display format for human-computer communication, which allows a user to manipulate icons or menu options on a screen using an input device such as a mouse, a keyboard, a joystick, etc., and also allows a user to manipulate icons or menu options on a screen by performing a touch operation on a touch screen of a terminal device to select a command, start a program, perform some other task, etc.
After responding to the player's opening operation, the game client on the terminal device displays at least part of the virtual scene and the first virtual object located in the virtual scene in the graphical user interface. The opening operation may include clicking the application with a mouse on a computer, clicking or sliding the game app on a touch screen of a mobile terminal, or opening the game client through voice input.
In the embodiment of the present application, the virtual scene may include the above-mentioned virtual scene corresponding to the action phase of the inference class game, and the virtual object manipulated by each player may move in the virtual scene in the action phase, for example, the movement of the virtual object in the virtual scene may include, but is not limited to, at least one of the following: walking, running, jumping, climbing, lying down, attacking, skill releasing, prop picking up, message sending. Here, the virtual objects active in the virtual scene may include other non-player-manipulated virtual objects in addition to the virtual objects manipulated by the respective players. In addition, the virtual scene may also include the above-mentioned virtual scene corresponding to the discussion phase of the inference class game, in which inference voting can be performed for each player.
The first virtual object may refer to a virtual object in a game, which is logged into an account of a game client on the terminal device, that is, a virtual object manipulated by a player corresponding to the account, but does not exclude the possibility that the first virtual object is controlled by another application or an artificial intelligence module.
The above exemplary steps provided by the embodiments of the present application are described below by taking the application of the above method to a terminal device as an example.
In step S110, in response to a movement operation of the game player for the first virtual object, the first virtual object is controlled to move in the first virtual scene, and the range of the first virtual scene displayed in the graphical user interface is controlled to change correspondingly according to the movement of the first virtual object.
The moving operation in the embodiment of the application is issued to the terminal device by a game player, and the moving operation is used for controlling the first virtual object to move in the first virtual scene of the graphical user interface. The terminal device responds to the moving operation, can control the first virtual object to move in the first virtual scene, and along with the movement of the first virtual object, the position of the first virtual object in the first virtual scene changes correspondingly, that is, the terminal device responds to the moving operation, and can also control the range of the first virtual scene displayed in the graphical user interface to change correspondingly according to the movement of the first virtual object.
For example, the game screen displayed in the graphical user interface may be a screen obtained by observing the first virtual scene with the first virtual object as the observation center, and when the first virtual object in the first virtual scene is manipulated to move, the game screen moves along with the movement, that is, the observation center of the game screen is bound to the position of the first virtual object, so that the observation center moves along with the movement of the position of the first virtual object. However, the present invention is not limited to this, and other observation positions in the virtual scene may be used as the observation centers, as long as the first virtual object is included in the displayed first virtual scene and the range of the displayed first virtual scene changes in accordance with the movement of the first virtual object.
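The observation-center binding described above amounts to computing a viewport from the first virtual object's position. The clamping at the scene boundary is an illustrative assumption, not taken from the patent.

```python
def viewport(center, view_w, view_h, scene_w, scene_h):
    """Compute the rectangle of the first virtual scene to display,
    keeping the first virtual object at the observation center and
    clamping at the scene boundary."""
    cx, cy = center
    x = min(max(cx - view_w / 2, 0), scene_w - view_w)
    y = min(max(cy - view_h / 2, 0), scene_h - view_h)
    return (x, y, view_w, view_h)
```

As the first virtual object moves, re-evaluating this function yields the correspondingly changed range of the first virtual scene to render.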
For example, the process of controlling the movement of the first virtual object in the first virtual scene may include: receiving a selection operation of a game player for a first virtual object, and controlling the first virtual object to move in a first virtual scene in response to a dragging operation for the selected first virtual object; alternatively, a selection operation of the first virtual object by the game player may be received, and the first virtual object may be controlled to move to the selected position in response to a position selection operation performed in the first virtual scene. By way of example, the move operation may include, but is not limited to: on the computer end, clicking the first virtual object through a left mouse button without loosening, and dragging the mouse to change the position of the first virtual object in the first virtual scene; or on the mobile terminal, the first virtual object is not loosened by long-pressing with a finger, and the position of the first virtual object in the first virtual scene is changed by sliding the finger on the graphical user interface.
Furthermore, as the range of the first virtual scene changes with the movement of the first virtual object, the changed first virtual scene may include a second virtual object (the first virtual scene before the change may also include the second virtual object), where the second virtual object is a virtual object controlled by another player in the current game match. Similarly, the terminal devices of the other players, in response to the movement operations issued to them, control the second virtual objects to move in the first virtual scene, and the positions of the second virtual objects in the first virtual scene change correspondingly with their movement.
In the embodiment of the application, the first virtual object can execute the task specified by the system in the action phase so as to achieve the purpose of completing the task winning; the same is true for the second virtual object in the action phase. If the first virtual object is a virtual object with a second role attribute and the second virtual object is a virtual object with a first role attribute, the first virtual object can be confused when the second virtual object executes a task, or the second virtual object is eliminated, or a task formulated for the virtual object with the second role attribute is completed in the process that the second virtual object executes the task; if the first virtual object is a virtual object with a first role attribute and the second virtual object is a virtual object with a second role attribute, the same process as above can be executed; if the first virtual object and the second virtual object are both virtual objects with the first role attribute, the first virtual object and the second virtual object can execute tasks together or respectively; if the first virtual object and the second virtual object are both virtual objects with the second role attribute, the first virtual object and the second virtual object can jointly or respectively search for the virtual object with the first role attribute so as to interfere the virtual object with the first role attribute to execute a task, kill the virtual object with the first role attribute, or complete a task formulated for the virtual object with the second role attribute.
In step S120, the trigger event is an event that triggers a virtual scene change. In an alternative embodiment, the trigger event is the first virtual object being controlled to move within a preset range of a second virtual object in a specific state in the virtual scene; for example, when a second virtual object in the "dead" state exists in the virtual scene, the first virtual object is controlled to move near that second virtual object. In another alternative embodiment, the trigger event is a switching operation that triggers switching from the first virtual scene to the second virtual scene; the switching operation may include, but is not limited to, operating a return option displayed on the graphical user interface to exit the first virtual scene and return to the second virtual scene, or operating a start option displayed on the graphical user interface to switch from the first virtual scene to the second virtual scene. In response to the preset trigger event, the graphical user interface switches from the first virtual scene to the second virtual scene, where the second virtual scene includes the first virtual object and at least one second virtual object, and may also include a third virtual object.
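The proximity-based trigger event can be sketched as a distance check. This is an illustrative check under an assumed data layout, not the patent's implementation.

```python
import math

def near_dead_object(first_pos, second_objects, radius):
    """Check one trigger event described above: the first virtual object
    has moved within a preset range (`radius`) of a second virtual
    object in the 'dead' state."""
    return any(
        obj["state"] == "dead" and math.dist(first_pos, obj["pos"]) <= radius
        for obj in second_objects
    )
```

A game loop would evaluate this after each movement operation and, on a True result, perform the switch from the first virtual scene to the second virtual scene.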
For example, the virtual object displayed in the second virtual scene may refer to a character model of the second virtual object, or may be a character icon of the second virtual object, and similarly, the character model of the first virtual object and/or the third virtual object may also be displayed in the second virtual scene, or the character icon of the first virtual object and/or the third virtual object may also be displayed. The display mode for the virtual object in the first virtual scene is similar to the display mode for the virtual object in the second virtual scene, which is not described herein again.
In this embodiment, the second virtual scene may be the virtual scene corresponding to the discussion phase mentioned above. The terminal device enters the discussion phase in response to an operation ending the action phase or an operation starting the discussion phase. In the discussion phase, the virtual objects with the first character attribute and the second character attribute that remained alive during the action phase, as well as those already eliminated, may all be displayed in the second virtual scene; however, the eliminated virtual objects cannot vote in this phase.
In an optional implementation manner, the terminal device obtains position mark information reported by the first virtual object and/or the at least one second virtual object in the local game where the first virtual object is located.
In an optional implementation manner, in response to a touch operation for the function control, the terminal device obtains position mark information reported by the first virtual object and/or the at least one second virtual object in the local game where the first virtual object is located.
It should be understood that the step of obtaining the position mark information reported by the first virtual object and/or the at least one second virtual object in the game where the first virtual object is located may be performed before step S130; or after step S130; or after the step of "responding to the touch operation for the function control" in step S130.
In an optional implementation manner, the position mark information may be actively reported by the first virtual object and/or the at least one second virtual object when a certain condition is reached, or the terminal device may respond to a certain condition, so as to actively acquire the position mark information of the first virtual object and/or the at least one second virtual object.
For example, the position mark information reported by the first virtual object and/or the at least one second virtual object in the local game where the first virtual object is located may be obtained in response to any of the following conditions: (1) detecting entry into a second virtual scene; (2) detecting that the living state of any virtual object in the virtual environment changes; (3) detecting a position reporting request initiated by any virtual object in the game; (4) touch operation directed to a function control displayed on a graphical user interface is detected.
For condition (1), the terminal device obtains the position mark information reported by the first virtual object and/or the at least one second virtual object in the current game match in response to switching from the first virtual scene to the second virtual scene. That is, the terminal device acquires the position mark information of each virtual object in response to entering the voting stage. For example, when a game player suddenly initiates a discussion and the match enters the discussion phase (the second virtual scene), the discussion must refer to each player's position in the first virtual scene, so each player may report, upon entering the second virtual scene, the position mark information they had in the first virtual scene. Players are not forced to report their position mark information, nor are they required to report truthful position information.
For the condition (2), the terminal device acquires the position mark information reported by the first virtual object and/or the at least one second virtual object in the current game match in response to detecting that the survival state of any virtual object in the virtual environment changes. For example, if a virtual object controlled by a certain player dies in a first virtual scene, the position mark information of the dead player at the dead position is automatically reported to the game server, the game server shares the position mark information of the dead virtual object at the dead position with other players in the current game play pair, so that the other players can see the dead position of the dead player in the first virtual scene, and at this time, the other players can report their position mark information based on the position sharing, so as to assist the reasoning discussion of each player in the discussion phase. Similarly, it is not mandatory that each player report its own position marker information based on the above position sharing.
For the condition (3), in response to a position report request initiated for any virtual object in the game of this game, position information reported by the first virtual object and/or the at least one second virtual object in the game of this game where the first virtual object is located is obtained. For example, taking an example that a first virtual object initiates a position reporting request, at this time, a terminal device may respond to the position reporting request initiated by the first virtual object to obtain position mark information reported by the first virtual object, and at the same time, the terminal device may further send the position reporting request to a game server, and the game server shares the position reporting request to other players in the current game session to obtain position mark information reported by a second virtual object operated by the other players.
For the condition (4), in response to a touch operation for a function control displayed on a graphical user interface, position mark information reported by a first virtual object and/or at least one second virtual object in the local game where the first virtual object is located is obtained.
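The four conditions above can be sketched as a single collection routine gated on trigger events. The event names and the opt-out flag are illustrative assumptions; the patent emphasizes that reporting remains voluntary.

```python
# The four trigger conditions under which position mark information is
# collected (event names are illustrative, not from the patent).
REPORT_TRIGGERS = {"scene_switch", "state_change", "report_request", "control_touch"}

def collect_position_marks(event, virtual_objects):
    """Gather position mark information when one of the four trigger
    conditions fires. Reporting stays voluntary: objects that opted
    out are skipped."""
    if event not in REPORT_TRIGGERS:
        return {}
    return {
        obj["id"]: obj["pos"]
        for obj in virtual_objects
        if obj.get("reports", True)      # respect each player's choice
    }
```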
Here, the functional control refers to a control for reporting a position of a virtual object in the first virtual scene, and may be a control having a word of "report" or "position report" displayed on the graphical user interface, or may be a control having a word of "map" displayed, for example. For example, the functionality control may be a circular control, a square control, or an irregular graphical control, and the functionality control may be disposed on the graphical user interface proximate to the boundary. At any time, the player can perform position reporting operation through the function control.
Illustratively, after a player reports position mark information, a reminding mark is arranged on the functional control, and the reminding mark is used for reminding the player of reporting the position mark information in the game process, for example, the reminding mark can be a small red dot; after seeing the reminding mark, other players can respond to the touch operation of the functional control, so that a graphical user interface after the position mark information reported by the players is displayed is opened. At this time, the player may view the position information of the player in the virtual scene, which has reported the position mark information before, may report the position information of the player in the virtual scene at the same time, or may simply view the position information of other players without reporting the position information of the player. The touch operation may be a sliding operation or a clicking operation, that is, in response to the sliding operation or the clicking operation for the function control, the position mark information reported by the at least one second virtual object and/or the first virtual object may be obtained.
In step S130, in response to the touch operation on the function control, a position marking interface is displayed in the graphical user interface, and the character identifier of the at least one second virtual object and/or the first virtual object is displayed in the position marking interface according to the position mark information reported by the at least one second virtual object and/or the first virtual object. That is, after the touch operation on the function control, a position marking interface may be displayed in the graphical user interface, and the corresponding character identifiers may be displayed in it according to the position mark information of each virtual object.
Here, the location marker interface refers to an interface capable of displaying a role identifier corresponding to location marker information reported by each virtual object. In one embodiment, a virtual map corresponding to the virtual scene is included in the location tagging interface. The virtual map can selectively represent the graphics or images on the virtual scene in the game on a plane or a spherical surface in a two-dimensional or multi-dimensional form and means, and reflect the distribution characteristics and the mutual relations in the virtual scene in a certain proportion.
A character identifier is a displayed element indicating the identity information of a virtual object. Specifically, character identifiers can be displayed in two ways. The first way screens and reduces the original identity information of the virtual object, displaying only the part that can represent the identity, such as the key part of a display name. The second way represents the identity information with a label, character, or color, such as a number (1, 2, 3, ...), a letter (a, b, c, ...), or a color (red, yellow, blue, ...). When labels, characters, or colors are used, an identity cross-reference list may be displayed on one side of the graphical user interface, indicating which virtual object corresponds to each number, letter, or color; this cross-reference list helps a player quickly identify, on the position marking interface, the marked positions of the virtual objects controlled by other players.
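The second display mode, numeric labels plus a cross-reference list, can be sketched as follows. A minimal illustration under assumed inputs; the patent does not prescribe this data structure.

```python
def build_cross_reference(virtual_objects):
    """Assign short numeric identifiers (1, 2, 3, ...) to virtual objects
    and build the identity cross-reference list displayed beside the
    interface. Returns (labels, cross_reference): labels maps an
    identity to its displayed number, cross_reference maps back."""
    labels = {}
    cross_reference = {}
    for number, identity in enumerate(virtual_objects, start=1):
        labels[identity] = str(number)
        cross_reference[str(number)] = identity
    return labels, cross_reference
```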
For example, as shown in fig. 2, in the second virtual scene 210 (discussion phase), a function control 220 (a "map" control as shown in the figure) and a first virtual object 230, a second virtual object 240 and a third virtual object 250 for preparing to discuss voting are displayed, wherein the first virtual object 230 is a virtual object in a game logged in to an account of a game client on the terminal device, the second virtual object 240 is a virtual object in a live state except for the first virtual object 230, and the third virtual object 250 is a virtual object in a dead state except for the first virtual object 230. Specifically, in response to a touch operation (e.g., a click operation) with respect to the functionality control 220, a position-marking interface may be displayed in the second virtual scene 210. Specifically, as shown in fig. 3, a flower nursery 311, a library 312, a health care room 313, a dormitory 314, a main hall 315, and the like are displayed in the location mark interface 310, and a character identifier 320 corresponding to a virtual object to which location mark information has been uploaded is provided in the location mark interface 310, where the character identifiers indicate identity information by numbers 1, 2, 3, 4, and 5, where the virtual object No. 1 is located in the library 312, the virtual object No. 2 is located in the health care room 313, the virtual object No. 3 is located in the main hall 315, the virtual object No. 4 is located in the flower nursery 311, the virtual object No. 5 is located in the dormitory 314, and the character identifier occupies a corresponding display area in the location mark interface by a circle. It should be understood that, in the embodiment of the present application, the position marking interface may also be displayed in the first virtual scene.
According to the embodiment of the application, the position mark information of each player in the game is uploaded, and the information uploading process is rapid, so that the position information of each player in the game is transmitted and displayed in an efficient and clear manner, the information statement of the player in the game discussion stage is reduced, the memory burden of the player is effectively relieved, the player is assisted to carry out faster reasoning judgment and discussion in the game discussion stage, and the game efficiency of the player is effectively improved.
However, in some cases, the position mark information reported by multiple players may fall within the same area. As shown in fig. 4, multiple character identifiers 320 are then displayed simultaneously within one area of the position marking interface 310, so that different character identifiers block one another and the player cannot tell which identifier is which, where each character identifier carries the label of the corresponding player, such as 1, 2, 3 ….
Based on this, the embodiment of the application can also adjust the display position of each role identifier displayed in the position marking interface, so as to avoid each role identifier being blocked.
That is to say, in an optional embodiment of the present application, the role identifier of each virtual object may be displayed in the position marking interface such that a first distance between the role identifiers of two adjacent virtual objects is greater than a preset distance; that is, a preset gap is provided between the role identifiers of two adjacent virtual objects. The role identifiers are thereby dispersed so that they do not occlude one another (i.e., do not overlap), or occlude only part of the display area each occupies in the position marking interface, and the player can still clearly see the identity of the virtual object indicated by each role identifier. The first distance between character identifiers refers to the distance between their center points; a character identifier occupies a certain area, for example a circle of a given radius, and the occlusion between character identifiers is adjusted by adjusting the distance between their center points, for example so that a gap is formed between any two adjacent character identifiers, or so that any two adjacent character identifiers overlap only at their edges.
Here, the character identifier occupies a corresponding display area in the position mark interface, and identity information indicating an identity of the virtual object is displayed in the character identifier. In a preferred embodiment, in the role identifiers displayed in the position mark interface, the fact that the role identifiers of two adjacent virtual objects are not occluded from each other may mean that a preset gap exists between the identity information displayed in the role identifiers of two adjacent virtual objects, and the display areas occupied by the role identifiers of two adjacent virtual objects may not be overlapped or the edges of the display areas may be covered.
Illustratively, each character identifier occupies a display area, such as the circles shown in fig. 3: the circle is the display area, and 1, 2, 3, etc. are the identity information indicating the identity of a virtual object. The preset gap means there is a gap between the pieces of identity information, i.e., between the numbers, while the display areas themselves may still overlap, for example at their covered edges. If the identity information is represented by color, it may instead be required that the overlap ratio of the display areas occupied by the character identifiers of two adjacent virtual objects is smaller than a preset ratio, where the overlap ratio is the ratio of the overlapping area of the two display areas to each display area. The preset ratio may be set to 30% or 40%, and the specific value may be adjusted according to the actual situation.
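The overlap-ratio criterion described above can be sketched as a simple geometric check. This is a minimal illustration assuming equal-radius circular identifiers; the function names and the `preset_ratio` default are assumptions for illustration, not details from the patent:

```python
import math

def overlap_ratio(c1, c2, radius):
    """Overlap ratio of two equal-radius circular identifiers:
    lens (intersection) area divided by one circle's area."""
    d = math.dist(c1, c2)
    if d >= 2 * radius:
        return 0.0
    if d == 0:
        return 1.0
    r = radius
    # Standard area of intersection of two circles of equal radius r at distance d
    lens = 2 * r * r * math.acos(d / (2 * r)) - (d / 2) * math.sqrt(4 * r * r - d * d)
    return lens / (math.pi * r * r)

def edges_only_touch(c1, c2, radius, preset_ratio=0.3):
    """True if two adjacent identifiers overlap less than the preset ratio."""
    return overlap_ratio(c1, c2, radius) < preset_ratio
```

With a preset ratio of 30%, two circles whose centers are almost two radii apart overlap only at the edges and pass the check, while heavily stacked circles fail it.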
In specific implementation, the initial display positions of the character identifications are determined according to the position mark information, and the final display positions of the character identifications are determined according to the distance between the initial display positions; and displaying at least one second virtual object and/or the character identification of the first virtual object according to the final display position. Namely: and determining initial display positions of the character identifications according to the position marking information, adjusting the initial display positions according to the distance between the initial display positions to determine final display positions of the character identifications in the position marking interface, and displaying the character identifications of at least one second virtual object and/or the first virtual object according to the final display positions.
Here, the initial display position refers to a display position of a virtual object in a first virtual scene determined according to position mark information reported by a player, and the initial display position is not adjusted by a system, for example, the initial display position is a display position which is not adjusted after the initial display position is position information of a virtual character reported by a first terminal device or a second terminal device received by a game server; the final display position refers to a display position after the initial display positions are adjusted according to the distance between the initial display positions, and the final display position is adjusted by the system. For example, the game server adjusts each initial display position according to the distance between the initial display positions to determine a final display position, and sends the final display position to each terminal device, and the terminal device controls and displays the character identifier according to the final display position, so that a preset gap is formed between the character identifiers of two virtual objects, and mutual shielding between the character identifiers is avoided.
In actual adjustment, in response to detecting that the distance between any two adjacent initial display positions is smaller than the preset gap, at least one of those two initial display positions is adjusted to determine the final display position of the character identifier. Here, the virtual objects may report their position mark information at the same time, or at different times, i.e., with a time difference between reports. In the latter case, it may be detected whether the distance between the initial display position corresponding to the currently reported position mark information and the display position of each character identifier already present in the position marking interface is smaller than the preset gap; if so, either the initial display position corresponding to the currently reported position mark information or the display positions of the existing character identifiers may be adjusted.
Here, when it is detected that the distance between any two adjacent initial display positions is smaller than the preset gap, the initial display position of any one of the character identifiers may be adjusted, or the initial display positions of two character identifiers may be adjusted at the same time, so as to determine the final display position of the character identifier.
Specifically, if the distance between any two adjacent initial display positions is not less than the preset gap, the initial display position of the character identifier does not need to be adjusted, that is, the corresponding character identifier is directly displayed at the initial display position.
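The adjustment from initial to final display positions can be sketched as an iterative relaxation that pushes apart any pair of markers closer than the preset gap and leaves already-separated markers untouched. The patent does not prescribe a particular algorithm; this is one minimal sketch under assumed names:

```python
def resolve_overlaps(positions, preset_gap, max_iters=50):
    """Nudge marker positions apart until every pair is at least
    `preset_gap` apart (or max_iters is reached)."""
    pts = [list(p) for p in positions]
    for _ in range(max_iters):
        moved = False
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                dx = pts[j][0] - pts[i][0]
                dy = pts[j][1] - pts[i][1]
                d = (dx * dx + dy * dy) ** 0.5
                if d < preset_gap:
                    if d == 0:
                        # Coincident markers: separate along the x axis
                        dx, dy, d = 1.0, 0.0, 1.0
                    # Push both markers apart along their connecting line
                    shift = (preset_gap - d) / 2 / d
                    pts[i][0] -= dx * shift; pts[i][1] -= dy * shift
                    pts[j][0] += dx * shift; pts[j][1] += dy * shift
                    moved = True
        if not moved:
            break
    return [tuple(p) for p in pts]
```

A marker whose neighbours are all at least `preset_gap` away keeps its initial position, matching the rule that positions are only adjusted when the distance falls below the preset gap.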
When this position adjustment is applied to a deduction game, it is implemented as follows: after receiving the position mark information uploaded by players through their terminal devices, the game server determines the minimum distance between the character identifiers corresponding to any two pieces of position mark information (at this minimum distance, excessive overlap cannot occur); it then replans the position of each character identifier in the virtual map according to the calculated minimum distance and displays them in the virtual map. It should be noted that the virtual map displays the virtual scene corresponding to the current game, and it is not required that a player can only see the positions reported by others after reporting a position himself; the positions reported by others can also be seen when the player has not reported his own.
The following description will take the reporting of the location marker information by the first virtual object as an example. In addition, since the way in which the second virtual object reports the position mark information is the same as the way in which the first virtual object reports the position mark information, the way in which the second virtual object reports the position mark information is not described again.
The position mark information reported by the first virtual object is determined by the following method:
displaying a position reporting prompt identifier at a map position in the virtual map corresponding to the actual position of the first virtual object in the virtual scene; and generating position mark information of the first virtual object, determined according to the position reporting prompt identifier, in response to a position reporting trigger operation on the virtual map. Here, the position mark information includes the map position of the position reporting prompt identifier in the virtual map.
Here, the location reporting prompt identifier is used to prompt the player of the current location in the virtual scene, for example, to prompt the real map location of the virtual object in the virtual map. The location reporting triggering operation refers to an operation for reporting location marker information, and the triggering operation may be a click operation or a slide operation.
Illustratively, a position reporting control is displayed on a virtual map, in response to a click or slide operation on the position reporting control, a position reporting prompt identifier is popped up at a position on the virtual map corresponding to a real map position where a virtual object is currently located in the virtual map, the position reporting prompt identifier is used to indicate position information of the virtual object on a first virtual scene, in response to a position confirmation operation, a map position corresponding to the position reporting prompt identifier can be reported to a game server as position mark information, and then, on the virtual map, a character identifier corresponding to the virtual object is generated at the map position corresponding to the position mark information, so that other players know the position information of the player reporting the position mark information in the first virtual scene through the character identifier. In addition, if the player does not want other players to know the position of the player in the first virtual scene, the position mark information may not be reported or the position information of the first virtual object indicated by the position mark information on the first virtual scene may not be changed.
Specifically, in response to a trigger operation for a position reporting control displayed on the virtual map, an actual position of the first virtual object in the virtual scene is determined as position mark information of the first virtual object.
The position reporting control refers to a control for reporting position mark information of the virtual object to the game server, where the position mark information may be determined based on the position reporting prompt identifier, and the position reporting control may be, for example, a button. For example, in response to a trigger operation for the position reporting control, a position reporting prompt identifier is popped up at a position on the virtual map corresponding to a real map position where the virtual object is currently located in the virtual map, and in response to a position confirmation operation, an actual position of the virtual object in the virtual scene is reported as position mark information, so that the actual position of the virtual object is displayed on the virtual map.
In addition to this, it is also possible to determine a position selected on the virtual map as the position mark information of the first virtual object in response to a position selection operation performed on the virtual map.
The position selection operation refers to an operation of reselecting a position for the first virtual object in the virtual scene, where the reselected position is no longer the real position of the first virtual object. The position selection operation may consist of first selecting the position reporting prompt identifier and then dragging it on the graphical user interface, in which case the map position to which it is dragged is determined as the position mark information of the virtual object. Alternatively, the position selection operation may be a click on a target map position on the virtual map, whereupon the position reporting prompt identifier is displayed at the target map position and the target map position is determined as the position mark information of the virtual object.
In this case, the position mark information reported by the player is not the player's real position in the virtual environment but false position information: the displayed position reporting prompt identifier can be dragged to another position in the virtual map before being reported, and the reported position mark information is then the position the player dragged it to.
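The report-and-drag flow above — the prompt identifier starts at the virtual object's real map position, may optionally be dragged, and whatever position it holds at confirmation time is what gets reported — can be sketched as a tiny state holder. The class and method names are assumptions for illustration:

```python
class PositionMarker:
    """Sketch of the position reporting prompt identifier's lifecycle."""

    def __init__(self, real_map_position):
        # The prompt identifier initially shows the object's real map position
        self.position = real_map_position

    def drag_to(self, map_position):
        # The player may drag the prompt identifier to a false position
        self.position = map_position

    def confirm(self):
        # The position at confirmation time becomes the reported mark info
        return self.position
```

A player who never calls `drag_to` reports the real position; a player who drags first reports the dragged (false) position instead.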
In this embodiment, during the discussion phase the player uploads to the game server the position that the virtual object he or she controls occupied during the action phase, for example the position at which the player actively initiated a vote, the position at which the player found a corpse, or the position of the corpse itself. The uploaded positions are synchronized to all players, helping each player select a voting target according to the positions uploaded to the game server.
Before each player uploads his own position mark information to the server, the player's terminal device first shows, in the game's virtual map, the player's position at the time the vote was initiated. The player may then drag the position reporting prompt identifier indicating his position information in order to change the displayed position and mislead other players. After dragging the position reporting prompt identifier, the player can click a confirmation button, or an upload button with a similar function, to share the dragged position mark information with all other players, who will then see the shared position, i.e., the dragged position, in their own virtual maps.
In this way, each player can see the position shared by every player, whether dragged or not. During the game, the display of positions does not wait until all players have shared; a player who shares first has his position shared with the others first. Meanwhile, other players may also choose not to share their own positions.
For example, as shown in fig. 5, in the position marking interface 310, there is no identity information block between the final display positions of the character identifiers 320, so that the player can clearly view the positions of other players on the virtual map, and the player is assisted in making faster reasoning judgment and discussion in the game discussion phase.
The adjustment method for adjusting the initial display positions according to the distance between the initial display positions to determine the final display position of the character identifier in the position mark interface provided by the embodiment of the application mainly includes the following two methods:
the first adjustment mode is as follows: and constructing a two-dimensional coordinate grid corresponding to the virtual map, and finely adjusting the position of each initial display position based on the constructed two-dimensional coordinate grid. Here, there is a correspondence relationship between the coordinate position in the two-dimensional coordinate grid and the map position in the virtual map.
In one example, a first target grid intersection which is nearest to a map position corresponding to the initial display position and is unoccupied in the two-dimensional coordinate grid is determined, and a map position corresponding to the coordinate position of the first target grid intersection in the virtual map is determined according to the corresponding relation to be used as a final display position.
In another example, a second target mesh intersection point which is nearest to the map position corresponding to the initial display position and is unoccupied in a direction corresponding to the map position in the virtual scene where the virtual object corresponding to the initial display position is located in the two-dimensional coordinate mesh is determined, and a map position corresponding to the coordinate position of the second target mesh intersection point in the virtual map is determined as the final display position according to the correspondence. That is, the adjustment direction for the initial display position is moved toward the direction in which the actual position of the virtual object in the virtual scene corresponds to the map position.
Here, the two-dimensional coordinate grid refers to a rectangular plane coordinate system formed by two mutually perpendicular axes with a common origin on the same plane, proportional to the virtual map; a map position in the virtual map corresponds to a coordinate position in this rectangular coordinate system. Specifically, when constructing the two-dimensional coordinate grid, it can be built directly on the virtual map according to its correspondence with the virtual map, or according to the correspondence with the actual virtual scene that the virtual map represents.
During actual adjustment, the mesh intersection points which are closest to the coordinate position corresponding to the initial display position in the two-dimensional coordinate mesh and are not occupied can be determined as the adjusted final display position; the mesh intersection points that are closest to the coordinate position indicated by the initial display position and that are unoccupied in the direction corresponding to the actual position of the at least one virtual object in the virtual environment may also be determined as the adjusted final display position.
Specifically, after each player uploads the position information, the system can mark the uploaded position information and judge which grid intersections are occupied, so that the system can avoid the occupied grid intersections when adjusting the position of the character identifier, thereby saving system resources and reducing program processing time.
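The first adjustment mode — snapping each initial display position to the nearest unoccupied grid intersection and marking it occupied — can be sketched as follows. This is a minimal illustration assuming a uniform cell size and a bounded search window; the names are illustrative:

```python
import math

def snap_to_free_intersection(pos, cell, occupied, search_range=10):
    """Return the nearest unoccupied grid intersection to `pos` and
    mark it occupied. Intersections lie at integer multiples of `cell`."""
    bx, by = round(pos[0] / cell), round(pos[1] / cell)
    # Enumerate intersections in a window around the nearest one,
    # skipping those already claimed by earlier markers
    candidates = [((bx + dx) * cell, (by + dy) * cell)
                  for dx in range(-search_range, search_range + 1)
                  for dy in range(-search_range, search_range + 1)
                  if ((bx + dx) * cell, (by + dy) * cell) not in occupied]
    best = min(candidates, key=lambda c: math.dist(pos, c))
    occupied.add(best)
    return best
```

Because each chosen intersection is added to `occupied`, a later marker that would land on the same intersection is automatically pushed to a neighbouring one, which is the occupancy check the paragraph above describes.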
The second adjustment mode is as follows: displaying the role identification at the final display position corresponding to each role identification in the position marking interface, and connecting the role identification of each virtual object with the initial display position corresponding to each virtual object.
The specific implementation is as follows: adjust the initial display position of the character identifier in the position marking interface, determine the adjusted position as the final display position, and connect the initial display position before adjustment with the adjusted final display position by a line. The final display position and the connecting line are displayed in the position marking interface; the end of the connecting line away from the final display position is the initial display position of the character identifier, but the initial display position itself is not displayed, so that the initial display position can still be indicated through the final display position of the character identifier and the connecting line.
Preferably, when adjusting the initial display position of the character identifier in the position mark interface, the following conditions may be defined: firstly, connecting lines between a final display position and an initial display position corresponding to any role identification are not crossed; and secondly, final display positions corresponding to any character identifiers are not overlapped.
Here, it should be understood that when each initial display position is adjusted based on the constructed two-dimensional coordinate grid, the adjustment for the position is fine, and in the second adjustment manner, the adjustment for the distance between the adjusted final display position and the corresponding initial display position can be made larger, so as to avoid the intersection of the connecting lines and the overlap between the final display positions.
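The first constraint of the second adjustment mode — connecting lines between final and initial display positions must not cross — can be checked with a standard segment-orientation test when evaluating candidate final positions. A sketch; degenerate collinear and endpoint-touching cases are deliberately treated as non-crossing here:

```python
def segments_cross(p1, p2, p3, p4):
    """Proper-intersection test for two connector segments p1-p2 and p3-p4."""
    def orient(a, b, c):
        # Sign of the cross product (b-a) x (c-a): +1 left turn, -1 right turn
        v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (v > 0) - (v < 0)
    # Segments properly cross iff each straddles the line through the other
    return (orient(p1, p2, p3) != orient(p1, p2, p4) and
            orient(p3, p4, p1) != orient(p3, p4, p2))
```

When planning final positions, a candidate whose connector would cross an existing connector can be rejected and another candidate tried, satisfying the non-crossing condition.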
It should be understood that in the above exemplary adjustment methods, the initial display positions are adjusted so that two adjacent character identifiers do not block each other. However, the present application is not limited thereto: the display mode of the character identifiers may instead be changed so that two adjacent character identifiers do not block each other, or the two approaches may be combined, i.e., the display mode is changed while the initial display positions are also adjusted.
In the embodiment of the present application, the display mode for the role identifier may be changed in the position mark interface in any one of the following modes:
the first mode is as follows: and responding to the zooming-in display skill triggering operation aiming at the position marking interface, zooming in and displaying the target area in the virtual map and/or the character identification of each virtual object in the target area.
Here, the enlargement display skill may enlarge only the target area in the virtual map, may enlarge only the character identifier of the virtual object, or may enlarge both the target area in the virtual map and the character identifier of the virtual object.
The second mode is as follows: the character of each virtual object is reduced in size for display in the location-tagging interface.
Here, in response to the reduction display skill triggering operation for the character identifier of each virtual object, the character identifier of each virtual object is reduced and displayed so that the player can clearly see the identity information indicated by the character identifier in the position mark interface.
The third mode is as follows: and changing the expression form of the identity information which is displayed in the character identification and used for indicating the identity of the virtual object.
Here, the virtual object identity may be indicated with preset characters, wherein the preset characters may be numbers 1, 2, 3 …, letters a, b, c …; the virtual object identity may also be indicated with a predetermined fill color, wherein an identity information cross-list may be added when the virtual object identity is indicated with the fill color.
In a preferred embodiment, the character identification is also displayed in the location-marking interface by: displaying a display strategy control on a position marking interface; responding to the trigger operation aiming at the display strategy control, and determining a display mode aiming at the role identification; and displaying the character identification of each virtual object in the determined display mode in the position marking interface.
Here, the display policy control is configured to display the display modes of the role identifier, and the display modes of the role identifier displayed in the display policy control may be opened by responding to a trigger operation for the display policy control, so that one mode may be arbitrarily selected from the display modes.
Here, the display manner may include at least one of:
one display mode is as follows: and determining the coverage relation of the edges of the display areas occupied by the character identifications of the two adjacent virtual objects according to the time sequence of reporting the position mark information by each virtual object.
Here, the role identifiers are displayed according to the chronological order of reporting the position mark information, wherein the role identifier of the first reported virtual object is displayed first, the role identifier of the second reported virtual object is displayed later, and the display area occupied by the role identifier of the second reported virtual object is displayed below the display area occupied by the role identifier of the first reported virtual object.
The other display mode is as follows: and determining the coverage relation of the edges of the display areas occupied by the character identifications of the two adjacent virtual objects according to the identity type of each virtual object.
Here, the identity type may refer to a role type corresponding to the virtual object model, and the display priority of the role type corresponding to the virtual object model is predefined in advance, so as to display each role identifier according to the display priority. Therefore, when displaying, the role identifiers of the virtual objects can be displayed according to the predefined display priority, so that the coverage relation of the edges of the display areas occupied by the role identifiers of the two adjacent virtual objects is determined.
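The two edge-coverage policies above — ordering by the time each virtual object reported its position mark information, or by a predefined display priority of role types — can be sketched as producing a back-to-front draw order. The field names and priority mapping are assumptions for illustration:

```python
def draw_order(markers, priority=None):
    """Return markers sorted back-to-front for rendering.

    Without `priority`: later reports are drawn first (beneath), so the
    first-reported identifier's edge covers its neighbour's.
    With `priority`: role types with a smaller rank are drawn beneath.
    """
    if priority is None:
        return sorted(markers, key=lambda m: m["report_time"], reverse=True)
    return sorted(markers, key=lambda m: priority[m["role_type"]])
```

Drawing the list in order then yields exactly the coverage relation described: whichever identifier is drawn last sits on top where display-area edges overlap.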
According to the virtual map display method provided by the embodiment of the application, the position mark information of each player in the game is uploaded, and the information uploading process is quick, so that the position information of each player in the game is transmitted and displayed in an efficient and clear manner; meanwhile, the automatic adjustment of the position marking information can be realized, so that the character identifications displayed in the virtual map are prevented from being shielded, the information statement of a player in a game discussion stage is reduced, the memory burden of the player is effectively relieved, the positions of other players on the virtual map can be clearly checked by the player, the player is assisted to carry out faster reasoning judgment and discussion in the game discussion stage, and the game efficiency of the player is effectively improved.
The functions available during the action phase typically include the following first to eighth functions, while the first, second, and seventh functions are available during the discussion phase.
First, the present embodiment provides a display function of a virtual map. Responding to the movement operation of the first virtual object, controlling the first virtual object to move in the first virtual scene, and controlling the range of the first virtual scene displayed in the graphical user interface to correspondingly change according to the movement of the first virtual object; and responding to a preset trigger event, and controlling the virtual scene displayed in the graphical user interface to be switched from a first virtual scene to a second virtual scene, wherein the second virtual scene comprises at least one second virtual object.
In the present embodiment, the description is from the perspective of the first virtual object having the target identity. A first virtual scene is first provided in the graphical user interface, as shown in fig. 6, in which virtual objects may move, may also perform game tasks or perform other interactive operations. The user issues a moving operation for the first virtual object to control the first virtual object to move in the first virtual scene, and in most cases, the first virtual object is located at a position in the relative center of the range of the first virtual scene displayed in the graphical user interface. The virtual camera in the first virtual scene moves along with the movement of the first virtual object, and accordingly the range of the first virtual scene displayed in the graphical user interface changes correspondingly according to the movement of the first virtual object.
The virtual objects participating in the local game are in the same first virtual scene, so that in the moving process of the first virtual object, if the first virtual object is closer to other virtual objects, other virtual objects may enter the range of the first virtual scene displayed in the graphical user interface, and the virtual objects are virtual characters controlled by other players. As shown in fig. 6, two second virtual objects nearby are displayed in the first virtual scene range. In addition, a movement control for controlling the movement of the first virtual object, a plurality of attack controls and a discussion control are displayed in the graphical user interface, wherein the discussion control can be used for controlling the virtual object to enter the second virtual scene.
When the player controls the first virtual object to move in the first virtual scene, a target virtual object can be determined from among a plurality of second virtual objects in the survival state, which can be understood as the virtual objects, other than the first virtual object, that are still alive in the current match. Specifically, the user may determine the target virtual object according to the position, behavior, and so on of each second virtual object, for example selecting as the target a virtual object that is relatively isolated and not easily noticed by other virtual objects during an attack. After the target virtual object is determined, the first virtual object can be controlled to move to the position of the target virtual object in the first virtual scene and to perform a specified operation on the target virtual object, whereupon the target virtual object enters a target state.
After the preset trigger event is triggered, the second virtual scene is displayed in the graphical user interface. For example, the trigger event may be a specific trigger operation, which may be performed by any virtual object in the survival state; for instance, in fig. 6, triggering the discussion control causes the second virtual scene to be displayed in the graphical user interface, so that the displayed scene is switched from the first virtual scene to the second virtual scene and all virtual objects in the current match are moved from the first virtual scene into the second virtual scene. The second virtual scene includes, in addition to the first virtual object (or its character icon), the character model or character icon of at least one second virtual object, where a character icon may be the avatar, name, and so on of the virtual object.
In the second virtual scene, virtual objects in the survival state have the right to speak, discuss, and vote; however, because the target virtual object has entered the target state, at least part of the interaction modes configured for it in the second virtual scene are restricted from use. The interaction modes may include speech-discussion interaction, voting interaction, and the like. Being restricted from use may mean that a given interaction mode cannot be used at all, cannot be used within a certain period of time, or can be used only a specified number of times.
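The three restriction variants above (fully blocked, time-limited, count-limited) can be sketched as a small permission policy. The class and method names are illustrative assumptions, not from the patent.

```python
import time

class InteractionPolicy:
    """Tracks which interaction modes (speech discussion, voting, ...) a
    virtual object in the target state may still use in the second scene."""

    def __init__(self):
        self.blocked = set()      # modes that may not be used at all
        self.blocked_until = {}   # mode -> timestamp before which it is blocked
        self.remaining_uses = {}  # mode -> remaining permitted use count

    def restrict(self, mode, until=None, max_uses=None):
        # Exactly one of the three restriction variants is recorded.
        if until is not None:
            self.blocked_until[mode] = until
        elif max_uses is not None:
            self.remaining_uses[mode] = max_uses
        else:
            self.blocked.add(mode)

    def can_use(self, mode, now=None):
        now = time.time() if now is None else now
        if mode in self.blocked:
            return False
        if mode in self.blocked_until and now < self.blocked_until[mode]:
            return False
        if self.remaining_uses.get(mode, 1) <= 0:
            return False
        return True

    def use(self, mode, now=None):
        # Attempt to use the mode; decrement the count if it is count-limited.
        if not self.can_use(mode, now):
            return False
        if mode in self.remaining_uses:
            self.remaining_uses[mode] -= 1
        return True
```

For example, restricting `"vote"` with no arguments blocks it entirely, while `restrict("speech", max_uses=1)` allows exactly one further speech interaction.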
As shown in fig. 7, the second virtual scene contains a plurality of virtual objects in the survival state, including the first virtual object. The first virtual object may send discussion information by clicking the input control and the voice translation control, and discussion information sent by virtual objects may be displayed on the discussion information panel; the discussion information may include who initiated the discussion, who was attacked, the location of the attacked virtual object, the location of each virtual object when the discussion was initiated, and the like.
The user may click a virtual object in the second virtual scene, causing a voting button for that object to be displayed nearby, and then vote for it; alternatively, the user may click the vote-abandoning button to give up the voting right for this round.
In response to a touch operation on the function control, a position marking interface is displayed in the graphical user interface, and the character identification of the at least one second virtual object and/or the first virtual object is displayed in the position marking interface according to the position marking information reported by the at least one second virtual object and/or the first virtual object. The specific implementation of this process can be seen in the above embodiments.
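Building the position marking interface from reported marks can be sketched as follows; the report format `(role_id, x, y)` and the rule that a later report from the same object replaces its earlier one are illustrative assumptions.

```python
def build_marker_layer(reports):
    """Collect position-mark information reported by virtual objects into
    (role_id, x, y) entries for the position marking interface; a later
    report from the same object replaces its earlier one."""
    latest = {}
    for role_id, x, y in reports:
        latest[role_id] = (x, y)
    return [(role_id, x, y) for role_id, (x, y) in latest.items()]
```

For instance, two reports from object "A" yield a single marker at the most recently reported position.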
Second, the present embodiment provides an information display function of a virtual object. A first virtual scene and a first virtual object located in the first virtual scene are displayed in the graphical user interface; in response to a movement operation on the first virtual object, the first virtual object is controlled to move in the first virtual scene, and the range of the first virtual scene displayed in the graphical user interface is controlled to change correspondingly according to the movement of the first virtual object.
In the present embodiment, the description is given from the perspective of a first virtual object having the target identity. A first virtual scene is first provided in the graphical user interface, as shown in fig. 6, in which virtual objects may move, perform game tasks, or carry out other interactive operations. The user issues a movement operation for the first virtual object to control it to move in the first virtual scene; in most cases, the first virtual object is located approximately at the center of the range of the first virtual scene displayed in the graphical user interface. The virtual camera in the first virtual scene moves along with the first virtual object, so the range of the first virtual scene displayed in the graphical user interface changes correspondingly with that movement.
All virtual objects participating in the current game match are in the same first virtual scene, so while the first virtual object moves, if it comes close to other virtual objects, those objects may enter the range of the first virtual scene displayed in the graphical user interface; these are characters controlled by other players or virtual characters controlled by non-players. As shown in fig. 6, two nearby second virtual objects are displayed within the first virtual scene range. In addition, a movement control for controlling the movement of the first virtual object, a plurality of attack controls, and a discussion control are displayed in the graphical user interface, where the discussion control can be used to control the virtual object to enter the second virtual scene.
When the user controls the first virtual object to move in the first virtual scene, a target virtual object can be determined from among at least one second virtual object in the survival state and/or at least one third virtual object in the death state, where the second virtual objects in the survival state can be understood as the virtual objects, other than the first virtual object, that are still alive in the current match. Specifically, the user may determine the target virtual object according to the position, behavior, and so on of each second virtual object, for example selecting as the target a virtual object that is relatively isolated and not easily noticed by other virtual objects during an attack, or a virtual object whose identity information, inferred from its position and behavior, appears suspicious. After the target virtual object is determined, the first virtual object can be controlled to move to the position of the target virtual object in the first virtual scene, or the target virtual object can be selected and the specified operation executed on it, whereupon the target virtual object enters the target state.
For example, in response to a remark-adding operation, remark prompt information for at least one second virtual object can be displayed in the graphical user interface, and in response to a trigger operation on the remark prompt information, remark information is added to a target virtual object among the displayed second virtual objects. The remark information may then be displayed around the target virtual object in the first virtual scene; that is, while the first virtual object moves according to the movement operation and the displayed range of the first virtual scene changes correspondingly, if the target virtual object appears within the preset range of the first virtual object, the player can see the target virtual object and its remark information through the first virtual scene presented in the graphical user interface.
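The visibility rule above (a remark is drawn only when its annotated target lies within the preset range of the first virtual object) can be sketched as below. The function name, the `(position, text)` remark format, and the circular range are illustrative assumptions.

```python
import math

def visible_remarks(first_obj_pos, remarks, preset_range):
    """Return the remark texts whose annotated target virtual object lies
    within the preset range of the first virtual object, so the remark can
    be drawn around that target in the displayed scene."""
    fx, fy = first_obj_pos
    out = []
    for (tx, ty), text in remarks:
        if math.hypot(tx - fx, ty - fy) <= preset_range:
            out.append(text)
    return out
```

With the first virtual object at the origin and a preset range of 10, a remark on a target at (3, 4) is shown while one at (30, 40) is not.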
After the preset trigger event is triggered, the second virtual scene is displayed in the graphical user interface. For example, the trigger event may be a specific trigger operation, which may be performed by any virtual object in the survival state; for instance, in fig. 6, triggering the discussion control causes the second virtual scene to be displayed in the graphical user interface, so that the displayed scene is switched from the first virtual scene to the second virtual scene and all virtual objects in the current match are moved from the first virtual scene into the second virtual scene. The second virtual scene includes, in addition to the first virtual object (or its character model and character icon), at least one second virtual object (or its character model and character icon), where a character icon may be the avatar, name, and so on of the virtual object.
In the second virtual scene, virtual objects in the survival state have the right to speak, discuss, and vote, and if the target virtual object has entered the target state (for example, has had remark information added), the current player can see the target virtual object and its remark information through the second virtual scene presented in the graphical user interface. In addition, interaction modes are configured in the second virtual scene, which may include speech-discussion interaction, voting interaction, remark interaction, and the like. Being restricted from use may mean that a given interaction mode cannot be used at all, cannot be used within a certain period of time, or can be used only a specified number of times. Illustratively, a virtual character in the death state is restricted from using voting interaction, and for a virtual character that is in the death state and whose identity is known, remark interaction is restricted.
As shown in fig. 7, the second virtual scene contains a plurality of virtual objects in the survival state, including the first virtual object. The first virtual object may send discussion information by clicking the input control and the voice translation control, and discussion information sent by virtual objects may be displayed on the discussion information panel; the discussion information may include who initiated the discussion, who was attacked, the location of the attacked virtual object, the location of each virtual object when the discussion was initiated, and the like.
The user may click a virtual object in the second virtual scene, causing a voting button for that object to be displayed nearby, and then vote for it; alternatively, the user may click the vote-abandoning button to give up the voting right for this round. In addition, a remark control can be displayed alongside the voting button, so that remark information can be added to the clicked virtual object through a touch operation on the remark control.
In addition, a remark list can be displayed in the second virtual scene, with remark prompt information shown in the list, so that remark information is added to the displayed target virtual object in response to a trigger operation on the remark prompt information. The specific implementation of this process can be seen in the above embodiments.
Third, the present embodiment provides a control function for the game process. In the action phase, at least part of the first virtual scene and the first virtual object located in it are displayed on the graphical user interface. Skill configuration parameters of the first virtual object are acquired to determine the additional skills added to the first virtual object on top of the role's default skills, where a default skill is a skill assigned according to the identity attribute of the first virtual object. When the completion progress of the virtual tasks in the game phase is determined to reach a progress threshold, the first virtual object is controlled to unlock the additional skills, and an additional-skill control for triggering the additional skills is provided in the graphical user interface alongside the default-skill control for triggering the default skills. In response to a preset trigger event, the graphical user interface is controlled to display a second virtual scene corresponding to the discussion phase; the second virtual scene includes at least one of: the second virtual object, the character icon of the second virtual object, the first virtual object, and the character icon of the first virtual object. The discussion phase is configured to determine the game state of the at least one second virtual object or the first virtual object based on the result of the discussion phase. Specific implementations of this process can be seen in the following examples.
In this embodiment of the present application, the description is given from the perspective of a first virtual object having a first character attribute. A first virtual scene is first provided in the graphical user interface, as shown in fig. 6, in which the first virtual object can move, perform virtual tasks, or carry out other interactive operations. The user issues a movement operation for the first virtual object to control it to move in the first virtual scene; in most cases, the first virtual object is located approximately at the center of the range of the first virtual scene displayed in the graphical user interface. The virtual camera in the first virtual scene moves along with the first virtual object, so the range of the first virtual scene displayed in the graphical user interface changes correspondingly with that movement.
While the user controls the first virtual object to move in the first virtual scene, the additional skills newly added to the first virtual object on top of the role's default skills are determined according to the skill parameters of the first virtual object; the additional skills may include at least one of an identity-to-game skill, an identity verification skill, a guidance skill, and a task-doubling skill. At the same time, the progress of the virtual tasks jointly completed by the other virtual objects having the same character attribute (the first character attribute) as the first virtual object in the current game phase is determined and shown on a displayed progress bar. When the completion progress of the virtual tasks in the game phase reaches the progress threshold, the first virtual object is controlled to unlock the additional skills and can then play using them. For example, the guidance skill can be used to locate, in the first virtual scene of the action phase, a virtual object that is in the target state (such as death) and within a preset distance threshold of the first virtual object, to control the first virtual object to move to that object's position, and to initiate a discussion immediately.
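The unlock rule above (additional skills become available only once the shared task progress reaches the threshold) can be sketched as below. The function name and the 0.8 threshold value are illustrative assumptions; the patent specifies only that a progress threshold exists.

```python
def unlocked_additional_skills(completed, total, additional_skills,
                               progress_threshold=0.8):
    """Return the additional skills unlocked for the first virtual object
    once the jointly-completed virtual-task progress in the current match
    reaches the progress threshold; until then, none are available and
    only the role's default skills can be triggered."""
    if total <= 0:
        return []
    progress = completed / total
    return list(additional_skills) if progress >= progress_threshold else []
```

For instance, with 8 of 10 tasks complete the guidance skill unlocks, while at 7 of 10 it remains locked.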
After the preset trigger event is triggered, the second virtual scene is displayed in the graphical user interface. For example, the trigger event may be a specific trigger operation, which may be performed by any virtual object in the survival state; for instance, as shown in fig. 6, triggering the discussion control causes the second virtual scene to be displayed, so that the displayed scene is switched from the first virtual scene to the second virtual scene and all virtual objects in the current match are moved into the second virtual scene. The second virtual scene includes, in addition to the first virtual object and its object icon, at least one second virtual object or the object icon of the second virtual object, where an object icon may be the avatar, name, and so on of the virtual object.
In the second virtual scene, virtual objects in the survival state have the right to speak, discuss, and vote. As shown in fig. 7, the second virtual scene contains a plurality of virtual objects in the survival state, including the first virtual object. The first virtual object may send discussion information by clicking the input control and the voice translation control, and discussion information sent by virtual objects may be displayed on the discussion information panel; the discussion information may include who initiated the discussion, who was attacked, the location of the attacked virtual object, the location of each virtual object when the discussion was initiated, and the like.
The user can click a virtual object in the second virtual scene, causing a voting button for that object to be displayed nearby, and then vote for it. Before voting, the user can control the first virtual object to use the corresponding unlocked additional skills to check the virtual object in question; for example, the first virtual object can use the identity verification skill to check that object's identity and decide, according to the result, whether to vote for it, thereby improving the accuracy of voting. Of course, the user may instead click the vote-abandoning button to give up the voting right for this round.
Fourth, the present embodiment provides another virtual map display function. In response to a movement operation, a virtual character is controlled to move in the virtual scene, and the virtual scene to which the virtual character has currently moved is displayed in the graphical user interface.
In the present embodiment, the description is given from the perspective of a virtual object controlled by a player. A virtual scene (e.g., the first virtual scene shown in fig. 6) is provided in the graphical user interface, in which a virtual character controlled by a player (e.g., the first virtual character and/or the second virtual character shown in fig. 6) can move, perform game tasks, or carry out other interactive operations. In response to a movement operation issued by the player, the virtual object is controlled to move in the virtual scene; in most cases, the virtual object is located approximately at the center of the range of the virtual scene displayed in the graphical user interface. The virtual camera in the virtual scene moves along with the virtual object, so the displayed virtual scene changes correspondingly with that movement and the graphical user interface shows the virtual scene to which the virtual character has currently moved.
All virtual objects participating in the current game match are in the same virtual scene, so while a virtual object moves, if it comes close to other virtual objects, those objects may enter the range of the virtual scene displayed in the graphical user interface; these are characters controlled by other players. As shown in fig. 7, a plurality of virtual objects are displayed within the virtual scene range. In addition, a movement control for controlling the movement of the virtual object, a plurality of attack controls, and a discussion control are displayed in the graphical user interface, where the discussion control can be used to control the virtual object to enter the second virtual scene shown in fig. 7.
In response to a map display operation issued by the user, a first virtual map is displayed superimposed on the virtual scene shown in the graphical user interface. For example, the player performs a touch operation on a scene thumbnail (the scene map shown in fig. 6), and the first virtual map is displayed superimposed on the virtual scene; as another example, the first virtual map is displayed superimposed on the virtual scene in response to a control operation that makes the virtual character perform a second specific action. Here, the first virtual map includes at least the current position of the first virtual character, the positions of the first virtual areas in the virtual scene, the positions of the connected areas, and so on. When a map switching condition is triggered, the first virtual map superimposed on the virtual scene in the graphical user interface is switched to a second virtual map corresponding to the virtual scene, where the transparency of at least part of the map area of the second virtual map is higher than that of the corresponding area of the first virtual map, so that after the switch the virtual map occludes less information in the virtual scene than before. The map switching condition may, for example, be a specific trigger operation performed by a virtual object in the survival state: after a control operation that makes the virtual object perform a first specific action, the first virtual map superimposed on the virtual scene is switched to the second virtual map; or, by triggering a map switching key, the first virtual map superimposed on the virtual scene is switched to the second virtual map corresponding to the virtual scene.
When the map switching condition is triggered, the first virtual map can be switched to the second virtual map in a specific switching mode. For example, the first virtual map superimposed on the virtual scene is directly replaced with the second virtual map corresponding to the virtual scene; or the first virtual map is first adjusted to an invisible state in the current virtual scene according to a first transparency change threshold and then replaced with the second virtual map; or the first virtual map superimposed on the virtual scene is cleared and the second virtual map is superimposed on the virtual scene according to a second transparency change threshold; or the transparency of the first virtual map is adjusted according to a third transparency change threshold while the second virtual map is superimposed on the virtual scene according to a fourth transparency change threshold, until the first virtual map is invisible in the current virtual scene.
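The last switching mode, where the first map fades out while the second fades in, can be sketched as a cross-fade. The function name, the normalised time parameter, and the alpha values are illustrative assumptions; the patent only specifies that separate transparency change thresholds govern the two maps.

```python
def crossfade_maps(t, first_alpha_start=1.0, second_alpha_end=0.6):
    """Over normalised time t in [0, 1], lower the first virtual map's
    opacity toward invisibility while fading the second, more transparent
    map in up to its target opacity (kept below 1.0 so the switched-in map
    occludes less of the virtual scene)."""
    t = max(0.0, min(1.0, t))
    first_alpha = first_alpha_start * (1.0 - t)  # fades out to invisible
    second_alpha = second_alpha_end * t          # fades in, stays semi-transparent
    return first_alpha, second_alpha
```

At t = 0 only the first map is visible; at t = 1 the first map is fully invisible and the second map sits at its semi-transparent target opacity.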
Fifth, the present embodiment provides a target attack function in a game. In response to a movement operation on the first virtual object, the first virtual object is controlled to move in the first virtual scene, and the range of the first virtual scene displayed in the graphical user interface is controlled to change correspondingly according to the movement of the first virtual object. A temporary virtual object is controlled to move from an initial position to the position of a target virtual object in the first virtual scene and to execute a specified operation on the target virtual object, so that the target virtual object enters a target state. Here, the temporary virtual object is a virtual object controlled by the first virtual object having the target identity; the target identity is an identity attribute assigned at the beginning of game matching; the target virtual object is a virtual object determined from among a plurality of second virtual objects in the survival state; the target state is a state in which at least part of the interaction modes configured for the target virtual object in the second virtual scene are restricted from use; the second virtual scene is a virtual scene displayed in the graphical user interface in response to a preset trigger event; and the second virtual scene includes at least one second virtual object or an object icon of the second virtual object.
In the present embodiment, the description is given from the perspective of a first virtual object having the target identity. A first virtual scene is first provided in the graphical user interface, as shown in fig. 6, in which virtual objects may move, perform game tasks, or carry out other interactive operations. The user issues a movement operation for the first virtual object to control it to move in the first virtual scene; in most cases, the first virtual object is located approximately at the center of the range of the first virtual scene displayed in the graphical user interface. The virtual camera in the first virtual scene moves along with the first virtual object, so the range of the first virtual scene displayed in the graphical user interface changes correspondingly with that movement.
All virtual objects participating in the current game match are in the same first virtual scene, so while the first virtual object moves, if it comes close to other virtual objects, those objects may enter the range of the first virtual scene displayed in the graphical user interface; these are characters controlled by other players. As shown in fig. 6, two nearby second virtual objects are displayed within the first virtual scene range. In addition, a movement control for controlling the movement of the first virtual object, a plurality of attack controls, and a discussion control are displayed in the graphical user interface, where the discussion control can be used to control the virtual object to enter the second virtual scene.
The temporary virtual object is a virtual object controlled by the first virtual object having the target identity; the target identity is an identity attribute assigned at the beginning of game matching; the target virtual object is a virtual object determined from among a plurality of second virtual objects in the survival state; the target state is a state in which at least part of the interaction modes configured for the target virtual object in the second virtual scene are restricted from use; and the second virtual scene is a virtual scene displayed in the graphical user interface in response to a preset trigger event, including at least one second virtual object or a character icon of the second virtual object.
In the initial state, the temporary virtual object is not controlled by the user, but under certain specific conditions the first virtual object having the target identity, or the user corresponding to that first virtual object, has the right to control the temporary virtual object. Specifically, the temporary virtual object may be controlled to move from the initial position to the position of the target virtual object in the first virtual scene and to perform the specified operation on the target virtual object. The initial position may be the position at which the temporary virtual object rests while uncontrolled, and the specified operation may be an attack operation; after the specified operation is performed on the target virtual object, a specific effect is exerted on it, namely the target virtual object is brought into the target state.
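The dispatch described above (moving the temporary virtual object from its initial position to the target's position, where the specified operation is then applied) can be sketched as below. Straight-line movement, the fixed step size, and the function name are all simplifying assumptions.

```python
import math

def dispatch_temporary_object(temp_pos, target_pos, step=1.0):
    """Move the temporary virtual object from its initial position to the
    target virtual object's position in fixed-size steps, returning the
    path travelled; the final waypoint is the target's position, where the
    specified (attack) operation would then be applied."""
    path = [temp_pos]
    x, y = temp_pos
    tx, ty = target_pos
    while math.hypot(tx - x, ty - y) > step:
        d = math.hypot(tx - x, ty - y)
        x += (tx - x) / d * step
        y += (ty - y) / d * step
        path.append((x, y))
    path.append(target_pos)
    return path
```

For a target three units away with a unit step, the path contains the initial position, two intermediate waypoints, and the target position.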
When the user controls the first virtual object to move in the first virtual scene, the target virtual object can be determined from among a plurality of second virtual objects in the survival state, which can be understood as the virtual objects, other than the first virtual object, that are still alive in the current match. Specifically, the user may determine the target virtual object according to the position, behavior, and so on of each second virtual object, for example selecting as the target a virtual object that is relatively isolated and not easily noticed by other virtual objects during an attack. After the target virtual object is determined, the temporary virtual object can be controlled to move from the initial position to the position of the target virtual object in the first virtual scene and to perform the specified operation on it, whereupon the target virtual object enters the target state.
After the preset trigger event is triggered, the second virtual scene is displayed in the graphical user interface. For example, the trigger event may be a specific trigger operation, which may be performed by any virtual object in the survival state; for instance, in fig. 6, triggering the discussion control causes the second virtual scene to be displayed, so that the displayed scene is switched from the first virtual scene to the second virtual scene and all virtual objects in the current match are moved into the second virtual scene. The second virtual scene includes, in addition to the first virtual object (or its object icon), at least one second virtual object or the object icon of the second virtual object, where an object icon may be the avatar, name, and so on of the virtual object.
In the second virtual scene, virtual objects in the survival state have the right to speak, discuss, and vote; however, because the target virtual object has entered the target state, at least part of the interaction modes configured for it in the second virtual scene are restricted from use. The interaction modes may include speech-discussion interaction, voting interaction, and the like. Being restricted from use may mean that a given interaction mode cannot be used at all, cannot be used within a certain period of time, or can be used only a specified number of times.
As shown in fig. 7, the second virtual scene contains a plurality of virtual objects in the survival state, including the first virtual object. The first virtual object may send discussion information by clicking the input control and the voice translation control, and discussion information sent by virtual objects may be displayed on the discussion information panel; the discussion information may include who initiated the discussion, who was attacked, the location of the attacked virtual object, the location of each virtual object when the discussion was initiated, and the like.
The user may click a virtual object in the second virtual scene, causing a voting button for that object to be displayed nearby, and then vote for it; alternatively, the user may click the vote-abandoning button to give up the voting right for this round.
With the in-game target attack method described above, in the first virtual scene the first virtual object having the target identity can control the temporary virtual object to execute the specified operation on the target virtual object, without the first virtual object itself having to be controlled to execute the specified operation directly.
Sixthly, the present embodiment provides an interactive data processing function in a game: controlling a first virtual object to move in a virtual scene in response to a touch operation on a movement control area, and controlling the virtual scene range displayed on the graphical user interface to change according to the movement of the first virtual object; determining that the first virtual object has moved to the response area of a target virtual object in the virtual scene, where the target virtual object is an interactive virtual object arranged in the virtual scene; and, in response to a control instruction triggered by a touch operation, switching the display state of the first virtual object to a stealth state and displaying, in the area of the target virtual object, a mark indicating the first virtual object.
The movement control area is used to control the movement of the virtual object in the virtual scene. It may be a virtual joystick, through which both the movement direction and the movement speed of the virtual object can be controlled.
The virtual scene displayed in the graphical user interface is mainly obtained by the virtual camera capturing an image of the virtual scene range corresponding to the position of the virtual object. The virtual camera is generally set to follow the virtual object as it moves, so the virtual scene range captured by the camera also moves with the virtual object.
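The joystick-to-movement mapping and the following camera can be sketched as below. This is an illustrative sketch under assumed conventions (2D coordinates, camera centered on the object); the function names are not from the patent.

```python
import math


def joystick_to_velocity(dx, dy, radius, max_speed):
    """Map a thumb offset (dx, dy) inside a virtual joystick of the given
    radius to a velocity whose direction follows the drag and whose speed
    is proportional to the drag distance, clamped at the joystick rim."""
    dist = math.hypot(dx, dy)
    if dist == 0:
        return (0.0, 0.0)
    mag = min(dist, radius) / radius  # 0..1, full speed at the rim
    return (dx / dist * mag * max_speed, dy / dist * mag * max_speed)


def follow_camera(obj_pos, view_w, view_h):
    """Return the top-left corner of the scene range the virtual camera
    captures, keeping the controlled object at the center of the view."""
    return (obj_pos[0] - view_w / 2, obj_pos[1] - view_h / 2)
```

Each frame, the joystick offset yields a velocity for the object, and the camera origin is recomputed from the object's new position, so the displayed scene range moves with the object.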
Interactive virtual objects may be arranged in the virtual scene; a player-controlled virtual object can interact with them, and the interaction can be triggered when the player's virtual object is located within the response area of an interactive virtual object. The virtual scene may include at least one interactive virtual object, and the target virtual object is any one of the at least one interactive virtual object.
The range of an interactive virtual object's response area may be preset: it may be set according to the size of the virtual object, or according to its type, and may be configured as actually needed. For example, the response area of a vehicle-type virtual object may be set larger than the area occupied by the object itself, while the response area of a prank-type item may be set equal to the area the item occupies.
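The type-dependent response area and the containment check it implies can be sketched as follows. The scale factors and type names are illustrative assumptions, not values from the patent; circular areas are assumed for simplicity.

```python
# Hypothetical per-type scale factors: vehicles respond beyond their own
# footprint, prank items respond only within it.
RESPONSE_SCALE = {"vehicle": 1.5, "prank_item": 1.0}


def response_radius(obj_type, obj_radius):
    """Radius of the response area for an interactive object of this type."""
    return obj_radius * RESPONSE_SCALE.get(obj_type, 1.0)


def in_response_area(player_pos, obj_pos, obj_type, obj_radius):
    """True when the player's position falls inside the response area,
    i.e. when the interaction may be triggered."""
    dx = player_pos[0] - obj_pos[0]
    dy = player_pos[1] - obj_pos[1]
    return dx * dx + dy * dy <= response_radius(obj_type, obj_radius) ** 2
```

A position just outside a prank item's footprint would still be inside a vehicle's response area of the same base size, matching the example in the text.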
The control instruction triggered by the touch operation may be a specific operation on a designated area or on a designated object. For example, the control instruction may be triggered by a double-click operation on the target virtual object; alternatively, an interaction control may be provided in the graphical user interface, and the control instruction may be triggered by a click operation on that control. The interaction control may be provided after it is determined that the first virtual object has moved to the response area of the target virtual object in the virtual scene. Based on this, the method may further comprise: controlling the graphical user interface to display an interaction control of the target virtual object; the control instruction triggered by the touch operation then includes a control instruction triggered by touching the interaction control.
With this embodiment of the invention, after the player triggers an interaction with an interactive virtual object, the display state of the player's virtual object can be switched to stealth display. The switching of display state and operation does not affect the game process, increases interaction between players, and improves both the fun of the game and the user experience.
In some embodiments, the target virtual object may be a virtual vehicle, and the virtual vehicle may be configured with a preset threshold indicating its maximum carrying number, that is, the maximum number of virtual objects that can hide on the virtual vehicle. Based on this, when the virtual vehicle is determined to be fully loaded, a subsequent player who attempts a stealth switch may be notified that stealth has failed.
In some embodiments, an inference-based game may include two phases, which can be divided into an action phase and a voting phase. In the action phase, all virtual objects in the alive state (the players in the game) can act, for example performing tasks or creating confusion. In the voting phase, players can gather to discuss and vote on inference results, for example inferring the identity of each virtual object, where the tasks corresponding to different virtual object identities may differ. In this type of game, skills may also be released in the area of the target virtual object, for example to perform tasks or to create confusion. Based on this, after it is determined that the first virtual object has moved to the response area of the target virtual object in the virtual scene, the method may further comprise: in response to a skill release instruction triggered by a touch operation, taking at least one virtual object hidden in the area of the target virtual object as a candidate virtual object; and randomly determining one of the at least one candidate virtual object as the acting object of the skill release instruction.
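The random selection of the skill's acting object among the hidden candidates can be sketched as follows; the function name is illustrative.

```python
import random


def pick_skill_target(hidden_objects, rng=random):
    """Return one of the virtual objects hidden in the target object's area,
    chosen uniformly at random, or None when nobody is hidden there."""
    if not hidden_objects:
        return None
    return rng.choice(hidden_objects)
```

Passing a seeded `random.Random` instance as `rng` makes the choice reproducible, which is useful for testing or server-side replay.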
The virtual object that triggers the skill release instruction through the touch operation may be a stealthed virtual object or a non-stealthed virtual object.
Seventh, the present embodiment provides a scene recording function in a game: displaying a game interface on the graphical user interface, where the game interface includes at least a part of a first virtual scene in a first game task stage and a first virtual object located in the first virtual scene; in response to a movement operation on the first virtual object, controlling the virtual scene range displayed in the game interface to change according to the movement operation; in response to a recording instruction triggered in the first game task stage, capturing an image of a preset range of the current game interface; storing the image; and displaying the image in response to a viewing instruction triggered in a second game task stage, where the second game task stage and the first game task stage are different task stages of the game in which the first virtual object currently participates.
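The record-then-view flow above can be sketched as a small store keyed by task stage. The class and stage names are assumptions for illustration; the constraint that viewing happens in a different stage than recording follows the text.

```python
class SceneRecorder:
    """Stores interface captures per task stage for later viewing."""

    def __init__(self):
        self._store = {}  # stage name -> list of captured images

    def record(self, stage, image):
        """Store an image captured during the given task stage."""
        self._store.setdefault(stage, []).append(image)

    def view(self, recorded_stage, current_stage):
        """Return images recorded in `recorded_stage`, but only when the
        viewing stage differs from the recording stage."""
        if current_stage == recorded_stage:
            return []
        return self._store.get(recorded_stage, [])
```

In the inference game described later, this would let a player capture the action-phase scene and bring it up as evidence during the discussion phase.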
In the present embodiment, the description is given from the perspective of the first virtual object having the target identity. A first virtual scene is first provided in the graphical user interface, as shown in figs. 8-9; in the first virtual scene, virtual objects can move, perform game tasks, or perform other interactive operations. The user issues a movement operation for the first virtual object to control it to move in the first virtual scene; in most cases, the first virtual object is located near the center of the first virtual scene range displayed in the graphical user interface. The virtual camera in the first virtual scene follows the movement of the first virtual object, and accordingly the range of the first virtual scene displayed in the graphical user interface changes with that movement.
The virtual objects participating in the current game match are all in the same first virtual scene, so while the first virtual object moves closer to other virtual objects, those objects, which are characters controlled by other players, may enter the range of the first virtual scene displayed in the graphical user interface. As shown in figs. 8-9, two nearby second virtual objects are displayed within the first virtual scene range. In addition, a movement control for controlling the movement of the first virtual object, a plurality of attack controls, and a discussion control are displayed in the graphical user interface, where the discussion control can be used to control the virtual objects to enter the second virtual scene.
When controlling the first virtual object to move in the first virtual scene, the user can determine a target virtual object from the plurality of second virtual objects in the alive state, that is, the alive virtual objects in the current game match other than the first virtual object. Specifically, the user may choose the target virtual object according to the position, behavior, and the like of each second virtual object, for example selecting a relatively isolated virtual object that is not easily discovered by other virtual objects during the attack. After the target virtual object is determined, the first virtual object can be controlled to move from its initial position to the position of the target virtual object in the first virtual scene and perform the specified operation on the target virtual object, after which the target virtual object enters the target state.
After the preset trigger event is triggered, the second virtual scene is displayed in the graphical user interface. For example, the trigger event may be a specific trigger operation, which any virtual object in the alive state may perform; for example, in figs. 8-9, triggering the discussion control displays the second virtual scene in the graphical user interface, so that the virtual scene is switched from the first virtual scene to the second virtual scene and all virtual objects in the current game match are moved from the first virtual scene to the second virtual scene. In addition to the first virtual object or its object icon, the second virtual scene includes at least one second virtual object or the object icon of the second virtual object, where the object icon may be the head portrait, name, etc. of the virtual object.
In the second virtual scene, the virtual objects in the alive state have the right to speak, discuss and vote. However, because the target virtual object has entered the target state, at least some of the interaction modes configured for the target virtual object in the second virtual scene are restricted from use. The interaction modes may include speech and discussion interaction, voting interaction, and the like. Being restricted from use may mean that a certain interaction mode cannot be used at all, cannot be used within a certain period of time, or is limited to a specified number of uses.
As shown in fig. 10, the second virtual scene includes a plurality of virtual objects in the alive state, including the first virtual object. The first virtual object can send discussion information by clicking the input box and the voice control, and the discussion information sent by each virtual object can be displayed on the discussion information panel. The discussion information may include who initiated the discussion, who was attacked, the position of the attacked virtual object, the position of each virtual object when the discussion was initiated, and the like.
By clicking a virtual object in the second virtual scene, a voting button for that virtual object is displayed near it, and the user may vote for the virtual object by clicking the button. Alternatively, the user may click the abstain button to give up the voting right for this round.
In response to the touch operation on the function control, a position marking interface is displayed in the graphical user interface, and the role identifier of the at least one second virtual object and/or the first virtual object is displayed in the position marking interface according to the position mark information reported by the at least one second virtual object and/or the first virtual object.
Eighth, the present embodiment provides a game operation function. A graphical user interface is provided through a terminal; the graphical user interface includes a virtual scene and a virtual object, the virtual scene includes a plurality of transfer areas, and the plurality of transfer areas include a first transfer area and at least one second transfer area at a scene position different from that of the first transfer area. In response to a touch operation on the movement control area, the virtual object is controlled to move in the virtual scene; when it is determined that the virtual object has moved to the first transfer area, a first set of direction controls corresponding to the at least one second transfer area is displayed in the movement control area; and, in response to a trigger instruction on a target direction control in the first set of direction controls, the virtual scene displayed in the graphical user interface is controlled to change from the scene including the first transfer area to the scene including the second transfer area corresponding to the target direction control.
In this embodiment, the graphical user interface includes at least a partial virtual scene and a virtual object, the virtual scene includes a plurality of transfer areas, and the plurality of transfer areas include a first transfer area and at least one second transfer area at a scene position different from that of the first transfer area. The first transfer area may be the entrance area of a hidden area (for example a tunnel, which this application takes as the example), and the second transfer area may be an exit area of the hidden area.
The graphical user interface can include a movement control area, whose position on the graphical user interface can be customized according to actual requirements; for example, it can be placed in areas easily reached by the player's thumb, such as the lower left or lower right of the graphical user interface.
As shown in fig. 11, the user inputs a touch operation on the movement control area to control the virtual object to move in the virtual scene. If it is determined that the virtual object has moved to the first transfer area, a first set of direction controls (direction control 1 and direction control 2) corresponding to the at least one second transfer area is displayed in the movement control area, where the first set of direction controls indicates the directions of the corresponding tunnel exits.
When the user inputs a trigger instruction on a target direction control (direction control 1) in the first set of direction controls, the virtual scene range displayed in the graphical user interface is controlled to change from the range including the first transfer area to the range including the second transfer area corresponding to the target direction control; that is, the trigger instruction causes the graphical user interface to display the virtual scene range of the second transfer area corresponding to direction control 1. The specific implementation of this process can be seen in the above embodiments.
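The mapping from direction controls to tunnel exits can be sketched as below. Control identifiers and exit positions are hypothetical; the point is that entering the entrance area determines which controls appear, and triggering one determines the new scene position to display.

```python
class TunnelTransfer:
    """Maps direction controls shown at a tunnel entrance to exit areas."""

    def __init__(self, exits):
        # direction control id -> scene position of the corresponding exit area
        self.exits = dict(exits)

    def controls_to_show(self):
        """Direction controls displayed in the movement control area when the
        virtual object moves into the first transfer (entrance) area."""
        return sorted(self.exits)

    def trigger(self, control_id):
        """Return the scene position of the second transfer area whose range
        the graphical user interface should switch to displaying."""
        return self.exits[control_id]
```

A tunnel with two exits would thus show two direction controls, and triggering either yields the exit position used to re-aim the virtual camera.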
Based on the same inventive concept, an embodiment of the present application further provides a virtual map display apparatus corresponding to the virtual map display method. Because the principle by which the apparatus solves the problem is similar to that of the virtual map display method in the embodiment of the present application, the implementation of the apparatus can refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 12, fig. 12 is a schematic structural diagram of a virtual map display apparatus according to an embodiment of the present application, as shown in fig. 12, the virtual map display apparatus according to the embodiment of the present application provides a graphical user interface through a terminal device, and the virtual map display apparatus 800 includes:
a first display control module 810 for displaying at least a portion of the virtual scene and the first virtual object on the graphical user interface.
And the movement control module 820 is configured to, in response to a movement operation for the first virtual object, control the first virtual object to move in the first virtual scene, and control a range of the first virtual scene displayed in the graphical user interface to correspondingly change according to the movement of the first virtual object.
The second display control module 830 is configured to, in response to a preset trigger event, control the virtual scene displayed in the graphical user interface to be switched from the first virtual scene to a second virtual scene, where the second virtual scene includes at least one second virtual object.
The third display control module 840 is configured to, in response to a touch operation for the function control, display a position mark interface in the graphical user interface, and in the position mark interface, display a role identifier of at least one second virtual object and/or the first virtual object according to position mark information reported by the at least one second virtual object and/or the first virtual object.
Preferably, the third display control module 840 is configured to: determining initial display positions of the role identifications according to the position mark information, and determining final display positions of the role identifications according to the distance between the initial display positions;
and displaying the role identification of the at least one second virtual object and/or the first virtual object according to the final display position.
Preferably, a virtual map corresponding to the virtual scene is included in the position marking interface.
Preferably, the position mark information reported by the first virtual object is determined by the following method: displaying a position reporting prompt identifier at the map position, in the virtual map, corresponding to the actual position of the first virtual object in the virtual scene; and, in response to a position reporting trigger operation on the virtual map, generating the position mark information of the first virtual object according to the position reporting prompt identifier, where the position mark information includes the map position of the position reporting prompt identifier in the virtual map.
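The conversion from the object's actual scene position to the map position of the prompt identifier can be sketched as a linear mapping. The coordinate conventions (axis-aligned scene and map, origin at a shared corner) and all sizes are assumptions for illustration.

```python
def scene_to_map(scene_pos, scene_size, map_size):
    """Linearly map an actual position in the virtual scene to the
    corresponding position on the virtual map, where the position
    reporting prompt identifier would be displayed."""
    sx, sy = scene_pos
    sw, sh = scene_size
    mw, mh = map_size
    return (sx / sw * mw, sy / sh * mh)
```

The inverse mapping would serve the position selection operation, turning a tap on the virtual map back into a scene position.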
Preferably, the third display control module 840 is specifically configured to: in response to a trigger operation on a position reporting control displayed on the virtual map, determine the actual position of the first virtual object in the virtual scene as the position mark information of the first virtual object; or, in response to a position selection operation performed on the virtual map, determine the position selected on the virtual map as the position mark information of the first virtual object.
Preferably, the third display control module 840 is further configured to: and constructing a two-dimensional coordinate grid corresponding to the virtual map, wherein the coordinate position in the two-dimensional coordinate grid and the map position in the virtual map have a corresponding relation.
The step of adjusting the initial display positions according to the distances between them, so as to determine the final display positions of the role identifiers in the position marking interface, includes at least one of the following: determining the first target grid intersection in the two-dimensional coordinate grid that is nearest to the map position corresponding to the initial display position and is not occupied, and determining, according to the correspondence, the map position in the virtual map corresponding to the coordinate position of that first target grid intersection as the final display position; and determining the second target grid intersection that is nearest to the coordinate position corresponding to the initial display position and is not occupied, searching within the two-dimensional coordinate grid in the direction of the actual position, in the virtual scene, of the virtual object corresponding to the initial display position, and determining, according to the correspondence, the map position in the virtual map corresponding to the coordinate position of that second target grid intersection as the final display position.
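The first adjustment strategy can be sketched as follows: each role identifier's initial display position is snapped to the nearest unoccupied intersection of the two-dimensional coordinate grid, and intersections are marked occupied as they are assigned, so adjacent identifiers land on distinct positions. This is a minimal brute-force illustration with assumed names, not the patent's implementation.

```python
import itertools


def nearest_free_intersection(pos, grid_w, grid_h, occupied):
    """Return the unoccupied grid intersection closest to pos."""
    best, best_d = None, float("inf")
    for gx, gy in itertools.product(range(grid_w + 1), range(grid_h + 1)):
        if (gx, gy) in occupied:
            continue
        d = (gx - pos[0]) ** 2 + (gy - pos[1]) ** 2
        if d < best_d:
            best, best_d = (gx, gy), d
    return best


def place_identifiers(initial_positions, grid_w, grid_h):
    """Assign each role identifier a final display position, occupying
    each chosen intersection so later identifiers cannot reuse it."""
    occupied, final = set(), []
    for pos in initial_positions:
        spot = nearest_free_intersection(pos, grid_w, grid_h, occupied)
        occupied.add(spot)
        final.append(spot)
    return final
```

Two identifiers reported at nearly the same map position thus end up on two different grid intersections, which is what keeps their display areas from overlapping.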
Preferably, each role identifier occupies a corresponding display area in the position marking interface, and identity information indicating the identity of the virtual object is displayed in the role identifier. Among the role identifiers displayed in the position marking interface, a preset gap is kept between the identity information displayed in the role identifiers of two adjacent virtual objects, and the display areas occupied by the role identifiers of two adjacent virtual objects either do not overlap or overlap only at the edges of the display areas.
Preferably, the third display control module 840 is specifically configured to: and in response to detecting that the distance between any two adjacent initial display positions is smaller than the preset gap, adjusting at least one initial display position of any two adjacent initial display positions to determine the final display position of the character identifier.
Preferably, the third display control module 840 is specifically configured to: displaying the role identification at the final display position corresponding to each role identification in the position marking interface, and connecting the role identification of each virtual object with the initial display position corresponding to each virtual object.
Preferably, the third display control module 840 is configured to display the role identifiers in the position marking interface by at least one of the following: in response to a zoom-in display trigger operation on the position marking interface, displaying the target area in the virtual map and/or the role identifiers of the virtual objects in the target area in an enlarged manner; reducing the display size of the role identifier of each virtual object in the position marking interface; and changing the presentation form of the identity information, displayed in the role identifier, that indicates the identity of the virtual object.
Preferably, the third display control module 840 is configured to display the character identifier in the location mark interface by: displaying a display strategy control on a position marking interface; responding to the trigger operation aiming at the display strategy control, and determining a display mode aiming at the role identification; and displaying the character identification of each virtual object in the determined display mode in the position marking interface.
Preferably, the display means comprises at least one of: determining the coverage relation of the edges of the display areas occupied by the character identifications of two adjacent virtual objects according to the time sequence of reporting the position mark information by each virtual object; and determining the coverage relation of the edges of the display areas occupied by the character identifications of the two adjacent virtual objects according to the identity type of each virtual object.
The virtual map display device provided by the embodiment of the application comprises a first display control module, a mobile control module, a second display control module and a third display control module, wherein the first display control module displays at least part of a virtual scene and a first virtual object on a graphical user interface; the movement control module responds to the movement operation of the first virtual object, controls the first virtual object to move in the first virtual scene, and controls the range of the first virtual scene displayed in the graphical user interface to correspondingly change according to the movement of the first virtual object; the second display control module responds to a preset trigger event and controls the virtual scene displayed in the graphical user interface to be switched from the first virtual scene to a second virtual scene, wherein the second virtual scene comprises at least one second virtual object; and the third display control module responds to the touch operation aiming at the function control, displays a position mark interface in the graphical user interface, and displays the role identification of at least one second virtual object and/or the first virtual object in the position mark interface according to the position mark information reported by the at least one second virtual object and/or the first virtual object.
In this way, each player uploads position mark information during the game, and the uploading process is fast, so the position information of each player is conveyed efficiently and clearly. This reduces the verbal statements players must make in the game's discussion stage, effectively relieves the players' memory burden, assists players in making quick inference judgments and discussions in the discussion stage, and effectively improves the players' game efficiency.
Referring to fig. 13, fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 13, the electronic device 900 includes a processor 910, a memory 920, and a bus 930.
The memory 920 stores machine-readable instructions executable by the processor 910, when the electronic device 900 runs, the processor 910 communicates with the memory 920 through the bus 930, and when the machine-readable instructions are executed by the processor 910, the steps of the virtual map display method in the method embodiment shown in fig. 1 may be executed.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the virtual map display method in the method embodiment shown in fig. 1 may be executed.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present application, and are used for illustrating the technical solutions of the present application, but not limiting the same, and the scope of the present application is not limited thereto, and although the present application is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope disclosed in the present application; such modifications, changes or substitutions do not depart from the spirit and scope of the exemplary embodiments of the present application, and are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. A virtual map display method, wherein a graphical user interface is provided through a terminal device, and at least part of a virtual scene and a first virtual object are displayed on the graphical user interface, the method comprising:
in response to a movement operation on the first virtual object, controlling the first virtual object to move in a first virtual scene, and controlling the range of the first virtual scene displayed in the graphical user interface to change correspondingly with the movement of the first virtual object;
in response to a preset trigger event, controlling the virtual scene displayed in the graphical user interface to switch from the first virtual scene to a second virtual scene, wherein the second virtual scene comprises at least one second virtual object;
and in response to a touch operation on a function control, displaying a position marking interface in the graphical user interface, and displaying a character identifier of the at least one second virtual object and/or the first virtual object in the position marking interface according to position marking information reported by the at least one second virtual object and/or the first virtual object.
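The reporting-and-display flow of claim 1 can be pictured with a minimal sketch. The `PositionMark` record and `character_identifiers` helper below are hypothetical names introduced only for illustration, not part of the patent: they show one way character identifiers might be derived from the position marking information that each virtual object reports.

```python
from dataclasses import dataclass

@dataclass
class PositionMark:
    """Hypothetical record of one reported position marking information entry."""
    object_id: str   # which virtual object reported the mark
    map_x: float     # marked position on the virtual map (map coordinates)
    map_y: float

def character_identifiers(marks):
    """Return (object_id, (x, y)) pairs to render in the position marking interface.

    Only objects that actually reported a mark receive an identifier, which
    mirrors the claim: identifiers are displayed *according to* the reported
    position marking information.
    """
    return [(m.object_id, (m.map_x, m.map_y)) for m in marks]
```

In a real game client, the marks would arrive over the network from teammates' devices; here they are plain in-memory records.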
2. The method according to claim 1, wherein the step of displaying the character identifier of the at least one second virtual object and/or the first virtual object according to the position marking information reported by the at least one second virtual object and/or the first virtual object comprises:
determining an initial display position for each character identifier according to the position marking information, and determining a final display position for each character identifier according to the distances among the initial display positions;
and displaying the character identifier of the at least one second virtual object and/or the first virtual object according to the final display position.
3. The virtual map display method according to claim 1, wherein the position marking interface comprises a virtual map corresponding to the virtual scene.
4. The virtual map display method according to claim 3, wherein the position marking information reported by the first virtual object is determined by:
displaying a position reporting prompt identifier in the virtual map at a map position corresponding to the actual position of the first virtual object in the virtual scene;
and in response to a position reporting trigger operation on the virtual map, generating position marking information of the first virtual object determined according to the position reporting prompt identifier, wherein the position marking information comprises the map position of the position reporting prompt identifier in the virtual map.
5. The virtual map display method according to claim 4, wherein the step of generating, in response to a position reporting trigger operation on the virtual map, the position marking information of the first virtual object determined according to the position reporting prompt identifier comprises:
in response to a trigger operation on a position reporting control displayed on the virtual map, determining the actual position of the first virtual object in the virtual scene as the position marking information of the first virtual object;
or, in response to a position selection operation performed on the virtual map, determining the selected position on the virtual map as the position marking information of the first virtual object.
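The two branches of claim 5 reduce to a simple resolution rule, sketched below. The function name and its two arguments are hypothetical, chosen only to illustrate the claim: an explicit selection on the virtual map wins; otherwise triggering the position reporting control reports the object's actual position in the virtual scene.

```python
def make_position_mark(actual_position, selected_position=None):
    """Resolve the position marking information for the first virtual object.

    If the player performed a position selection operation on the virtual
    map, that selected point becomes the mark; otherwise the trigger on the
    position reporting control reports the object's actual scene position.
    """
    if selected_position is not None:
        return selected_position
    return actual_position
```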
6. The virtual map display method according to claim 2, further comprising: constructing a two-dimensional coordinate grid corresponding to the virtual map, wherein coordinate positions in the two-dimensional coordinate grid correspond to map positions in the virtual map;
wherein the step of adjusting the initial display positions according to the distances among them to determine the final display positions of the character identifiers in the position marking interface comprises at least one of the following:
determining, in the two-dimensional coordinate grid, a first target grid intersection that is nearest to the map position corresponding to the initial display position and is not occupied, and determining, according to the correspondence, the map position in the virtual map corresponding to the coordinate position of the first target grid intersection as the final display position;
and determining, in the two-dimensional coordinate grid, a second target grid intersection that is nearest to the map position corresponding to the initial display position, is not occupied, and lies in the direction of the map position corresponding to the actual position, in the virtual scene, of the virtual object associated with the initial display position, and determining, according to the correspondence, the map position in the virtual map corresponding to the coordinate position of the second target grid intersection as the final display position.
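The first branch of claim 6, snapping an identifier to the nearest unoccupied grid intersection, can be sketched as a brute-force search. This is an illustrative assumption, not the patented implementation: the grid is modeled as integer intersections, and a real client would also translate the chosen intersection back to a map position via the stored grid-to-map correspondence.

```python
import math

def snap_to_grid(initial, occupied, grid_w, grid_h):
    """Find the unoccupied grid intersection nearest to `initial`.

    `initial` is a (x, y) point in grid coordinates; `occupied` is the set
    of intersections already holding a character identifier; the grid has
    intersections (0..grid_w, 0..grid_h).
    """
    x, y = initial
    best, best_d = None, math.inf
    for gx in range(grid_w + 1):
        for gy in range(grid_h + 1):
            if (gx, gy) in occupied:
                continue  # intersection already taken by another identifier
            d = (gx - x) ** 2 + (gy - y) ** 2
            if d < best_d:
                best, best_d = (gx, gy), d
    return best
```

For the map sizes involved in a minimap overlay, the brute-force scan is cheap; a spatial index would only matter for very dense grids.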
7. The virtual map display method according to claim 2, wherein each character identifier occupies a corresponding display area in the position marking interface, and identity information indicating the identity of a virtual object is displayed within the character identifier;
and among the character identifiers displayed in the position marking interface, a preset gap is kept between the identity information displayed in the character identifiers of two adjacent virtual objects, and the display areas occupied by the character identifiers of two adjacent virtual objects either do not overlap or overlap only at their edges.
8. The virtual map display method according to claim 2, wherein the step of adjusting the initial display positions according to the distances among them to determine the final display positions of the character identifiers in the position marking interface comprises:
in response to detecting that the distance between any two adjacent initial display positions is smaller than a preset gap, adjusting at least one of the two adjacent initial display positions to determine the final display positions of the character identifiers.
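Claim 8's gap check can be illustrated with a one-dimensional simplification, assuming positions sorted along a single axis; this is a sketch of the idea, not the claimed 2-D layout. Whenever two adjacent initial display positions sit closer than the preset gap, the later one is pushed outward.

```python
def enforce_min_gap(positions, min_gap):
    """Nudge 1-D display positions apart until every adjacent gap >= min_gap.

    Positions are processed in sorted order; each position that violates the
    preset gap against its predecessor is moved just far enough to satisfy it.
    """
    out = []
    for p in sorted(positions):
        if out and p - out[-1] < min_gap:
            p = out[-1] + min_gap  # adjust this initial position outward
        out.append(p)
    return out
```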
9. The virtual map display method according to claim 2, wherein the character identifiers are displayed in the position marking interface by:
displaying each character identifier at its corresponding final display position in the position marking interface, and connecting the character identifier of each virtual object with the initial display position corresponding to that virtual object.
10. The virtual map display method according to claim 1, wherein the character identifiers are displayed in the position marking interface by at least one of:
in response to a zoom-in display trigger operation on the position marking interface, zooming in on and displaying a target area in the virtual map and/or the character identifier of each virtual object in the target area;
reducing the display size of the character identifier of each virtual object in the position marking interface;
and changing the presentation form of the identity information, displayed in the character identifier, that indicates the identity of the virtual object.
11. The virtual map display method according to claim 7, wherein the character identifiers are displayed in the position marking interface by:
displaying a display strategy control on the position marking interface;
in response to a trigger operation on the display strategy control, determining a display mode for the character identifiers;
and displaying the character identifier of each virtual object in the position marking interface in the determined display mode.
12. The virtual map display method according to claim 11, wherein the display mode comprises at least one of:
determining the coverage relation of the edges of the display areas occupied by the character identifiers of two adjacent virtual objects according to the chronological order in which the virtual objects reported their position marking information;
and determining the coverage relation of the edges of the display areas occupied by the character identifiers of two adjacent virtual objects according to the identity type of each virtual object.
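Both branches of claim 12 amount to choosing a draw order for overlapping identifiers. The sketch below assumes hypothetical `report_time` and `identity_rank` fields: sorting ascending means later entries are drawn last, so their edges cover those of earlier ones, matching either the time-order or the identity-type branch of the claim.

```python
def draw_order(identifiers, by="report_time"):
    """Order character identifiers so later-drawn entries cover earlier ones.

    Each identifier is a dict carrying 'report_time' (when its position
    marking information was reported) and 'identity_rank' (its identity
    type's priority). Both keys are illustrative assumptions.
    """
    key = "report_time" if by == "report_time" else "identity_rank"
    return sorted(identifiers, key=lambda ident: ident[key])
```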
13. A virtual map display apparatus, wherein a graphical user interface is provided through a terminal device, the virtual map display apparatus comprising:
a first display control module, configured to display at least part of a virtual scene and a first virtual object on the graphical user interface;
a movement control module, configured to, in response to a movement operation on the first virtual object, control the first virtual object to move in a first virtual scene and control the range of the first virtual scene displayed in the graphical user interface to change correspondingly with the movement of the first virtual object;
a second display control module, configured to, in response to a preset trigger event, control the virtual scene displayed in the graphical user interface to switch from the first virtual scene to a second virtual scene, wherein the second virtual scene comprises at least one second virtual object;
and a third display control module, configured to, in response to a touch operation on a function control, display a position marking interface in the graphical user interface, and display a character identifier of the at least one second virtual object and/or the first virtual object in the position marking interface according to position marking information reported by the at least one second virtual object and/or the first virtual object.
14. An electronic device, comprising a processor, a storage medium, and a bus, wherein the storage medium stores machine-readable instructions executable by the processor; when the electronic device is running, the processor and the storage medium communicate via the bus, and the processor executes the machine-readable instructions to perform the steps of the virtual map display method according to any one of claims 1 to 12.
15. A computer-readable storage medium having stored thereon a computer program that, when executed by a processor, performs the steps of the virtual map display method according to any one of claims 1 to 12.
CN202110420230.3A 2021-04-19 2021-04-19 Virtual map display method and device, electronic equipment and storage medium Active CN113101634B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110420230.3A CN113101634B (en) 2021-04-19 2021-04-19 Virtual map display method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113101634A (en) 2021-07-13
CN113101634B (en) 2024-02-02

Family

ID=76718479

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110420230.3A Active CN113101634B (en) 2021-04-19 2021-04-19 Virtual map display method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113101634B (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008016064A1 (en) * 2006-07-31 2008-02-07 Camelot Co., Ltd. Game device, object display method in game device, and display program
US20180345148A1 (en) * 2017-06-05 2018-12-06 Nintendo Co., Ltd. Storage medium, game apparatus, game system and game control method
CN109276887A (en) * 2018-09-21 2019-01-29 腾讯科技(深圳)有限公司 Information display method, device, equipment and the storage medium of virtual objects
CN111530073A (en) * 2020-05-27 2020-08-14 网易(杭州)网络有限公司 Game map display control method, storage medium and electronic device
CN111773705A (en) * 2020-08-06 2020-10-16 网易(杭州)网络有限公司 Interaction method and device in game scene
CN112156455A (en) * 2020-10-14 2021-01-01 网易(杭州)网络有限公司 Game display method and device, electronic equipment and storage medium
CN112619143A (en) * 2020-12-23 2021-04-09 上海米哈游天命科技有限公司 Role identification display method, device, equipment and storage medium


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113499585A (en) * 2021-08-09 2021-10-15 网易(杭州)网络有限公司 In-game interaction method and device, electronic equipment and storage medium
CN113680065A (en) * 2021-08-19 2021-11-23 网易(杭州)网络有限公司 Map processing method and device in game
CN113769383A (en) * 2021-09-14 2021-12-10 网易(杭州)网络有限公司 Control method and device for virtual object in battle game and electronic equipment
CN114253646A (en) * 2021-11-30 2022-03-29 万翼科技有限公司 Digital sand table display and generation method, equipment and storage medium
CN116212361A (en) * 2021-12-06 2023-06-06 广州视享科技有限公司 Virtual object display method and device and head-mounted display device
CN116212361B (en) * 2021-12-06 2024-04-16 广州视享科技有限公司 Virtual object display method and device and head-mounted display device
CN114860148A (en) * 2022-04-19 2022-08-05 北京字跳网络技术有限公司 Interaction method, interaction device, computer equipment and storage medium
CN114860148B (en) * 2022-04-19 2024-01-16 北京字跳网络技术有限公司 Interaction method, device, computer equipment and storage medium
WO2023226569A1 (en) * 2022-05-23 2023-11-30 腾讯科技(深圳)有限公司 Message processing method and apparatus in virtual scenario, and electronic device, computer-readable storage medium and computer program product
CN115738257A (en) * 2022-12-23 2023-03-07 北京畅游时代数码技术有限公司 Game role display method and device, storage medium and equipment
CN115738257B (en) * 2022-12-23 2023-12-08 北京畅游时代数码技术有限公司 Game role display method, device, storage medium and equipment

Also Published As

Publication number Publication date
CN113101634B (en) 2024-02-02

Similar Documents

Publication Publication Date Title
CN113101634B (en) Virtual map display method and device, electronic equipment and storage medium
WO2022151946A1 (en) Virtual character control method and apparatus, and electronic device, computer-readable storage medium and computer program product
US20240091645A1 (en) Skill range indication and adjustment in a virtual scene
CN111185004A (en) Game control display method, electronic device, and storage medium
WO2022222592A9 (en) Method and apparatus for displaying information of virtual object, electronic device, and storage medium
US7843455B2 (en) Interactive animation
CN113101637A (en) Scene recording method, device, equipment and storage medium in game
EP3970819B1 (en) Interface display method and apparatus, and terminal and storage medium
US20220266136A1 (en) Method and apparatus for state switching in virtual scene, device, medium, and program product
WO2022068418A1 (en) Method and apparatus for displaying information in virtual scene, and device and computer-readable storage medium
CN113101644A (en) Game process control method and device, electronic equipment and storage medium
CN112416196B (en) Virtual object control method, device, equipment and computer readable storage medium
US20220266139A1 (en) Information processing method and apparatus in virtual scene, device, medium, and program product
CN113082718A (en) Game operation method, device, terminal and storage medium
CN113262481A (en) Interaction method, device, equipment and storage medium in game
CN112691366B (en) Virtual prop display method, device, equipment and medium
KR20220139970A (en) Data processing method, device, storage medium, and program product in a virtual scene
CN113101635A (en) Virtual map display method and device, electronic equipment and readable storage medium
CN113101639A (en) Target attack method and device in game and electronic equipment
CN114377396A (en) Game data processing method and device, electronic equipment and storage medium
CN113975824A (en) Game fighting reminding method and related equipment
JP4864120B2 (en) GAME PROGRAM, GAME DEVICE, GAME CONTROL METHOD
WO2024011785A1 (en) Information processing method and apparatus, and electronic device and readable storage medium
CN113952739A (en) Game data processing method and device, electronic equipment and readable storage medium
CN113599815A (en) Expression display method, device, equipment and medium in virtual scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant