CN113101634B - Virtual map display method and device, electronic equipment and storage medium - Google Patents


Publication number
CN113101634B
Authority
CN
China
Prior art keywords
virtual
virtual object
character
map
scene
Prior art date
Legal status
Active
Application number
CN202110420230.3A
Other languages
Chinese (zh)
Other versions
CN113101634A (en)
Inventor
李光
刘超
王翔宇
彭鑫
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202110420230.3A
Publication of CN113101634A
Application granted
Publication of CN113101634B

Classifications

    • A — HUMAN NECESSITIES
    • A63 — SPORTS; GAMES; AMUSEMENTS
    • A63F — CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 — Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/20 — Input arrangements for video game devices
    • A63F 13/21 — Input arrangements characterised by their sensors, purposes or types
    • A63F 13/214 — Input arrangements for locating contacts on a surface, e.g. floor mats or touch pads
    • A63F 13/2145 — Input arrangements for locating contacts on a surface, the surface being also a display device, e.g. touch screens
    • A63F 13/40 — Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F 13/42 — Processing input control signals by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F 13/50 — Controlling the output signals based on the game progress
    • A63F 13/52 — Controlling the output signals involving aspects of the displayed game scene
    • A63F 13/53 — Controlling the output signals involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F 13/537 — Controlling the output signals using indicators, e.g. showing the condition of a game character on screen
    • A63F 13/5378 — Controlling the output signals using indicators for displaying an additional top view, e.g. radar screens or maps
    • A63F 13/85 — Providing additional services to players
    • A63F 13/87 — Communicating with other players during game play, e.g. by e-mail or chat

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a virtual map display method and apparatus, an electronic device, and a storage medium. The method includes: in response to a touch operation on a function control, displaying a position mark interface in the graphical user interface, and displaying a character identifier of at least one second virtual object and/or a first virtual object in the position mark interface according to position mark information reported by the at least one second virtual object and/or the first virtual object. With this scheme, the character identifiers of the player-controlled virtual objects can be displayed according to the position mark information those objects upload from the game scene, and the upload process is fast. The position information of each player-controlled virtual object in the game scene is therefore conveyed and displayed efficiently and clearly, which helps players reach inferences, judgments, and discussions faster during the game discussion phase and effectively improves their game efficiency.

Description

Virtual map display method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of human-computer interaction technologies, and in particular to a virtual map display method, a virtual map display apparatus, an electronic device, and a storage medium.
Background
With the continuous development of the game industry, game genres keep expanding, and among them reasoning games have attracted players with their unique charm. Such games require multiple players to participate in the interaction, and players belonging to different camps carry out inference and voting while completing designated tasks.
During the game discussion phase, a player needs basic information to make inferences and hold discussions; for example: who initiated the discussion, who was killed, where the body is located, where each player is, and so on. However, because many players participate in the game, it is difficult to remember every player's behavior descriptions and position statements in the game scene, and the players' positions in the game scene are easily confused, which tends to reduce the players' game efficiency.
Disclosure of Invention
In view of the foregoing, an object of the present application is to provide a virtual map display method, apparatus, electronic device, and storage medium that display the character identifier of each player-controlled virtual object based on the position mark information the object uploads from the game scene. Because the upload process is fast, the position information of each player-controlled virtual object in the game scene is conveyed and displayed in an efficient and clear manner.
In a first aspect, an embodiment of the present application provides a virtual map display method, where a graphical user interface is provided by a terminal device, where at least a part of a virtual scene and a first virtual object are displayed on the graphical user interface, and the virtual map display method includes:
controlling the first virtual object to move in a first virtual scene in response to a movement operation for the first virtual object, and controlling a first virtual scene range displayed in the graphical user interface to correspondingly change according to the movement of the first virtual object;
responding to a preset trigger event, and controlling the virtual scene displayed in the graphical user interface to be switched from the first virtual scene to a second virtual scene, wherein the second virtual scene comprises at least one second virtual object;
and in response to a touch operation on a function control, displaying a position mark interface in the graphical user interface, and displaying character identifiers of the at least one second virtual object and/or the first virtual object in the position mark interface according to position mark information reported by the at least one second virtual object and/or the first virtual object.
Preferably, the step of displaying the character identifier of the at least one second virtual object and/or the first virtual object according to the position mark information reported by the at least one second virtual object and/or the first virtual object includes:
determining initial display positions of the character identifiers according to the position mark information, and determining final display positions of the character identifiers according to the distances between the initial display positions;
and displaying the character identification of the at least one second virtual object and/or the first virtual object according to the final display position.
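As an illustrative sketch (not part of the claims), determining an initial display position from reported position mark information can be as simple as scaling the reported scene coordinates into map coordinates, assuming a linear correspondence between the virtual scene and the virtual map; the function name and parameters below are hypothetical.

```python
def scene_to_map(scene_pos, scene_size, map_size):
    """Convert a reported scene position into an initial display position
    on the virtual map, assuming the map is a uniformly scaled top view
    of the scene (an assumption; the claims do not fix the mapping)."""
    sx, sy = scene_pos
    scene_w, scene_h = scene_size
    map_w, map_h = map_size
    return (sx / scene_w * map_w, sy / scene_h * map_h)

# A 100x100 scene shown on a 200x200 map: scene (50, 25) lands at (100.0, 50.0).
initial = scene_to_map((50, 25), (100, 100), (200, 200))
```

The final display position would then be derived from these initial positions by the distance-based adjustment described in the claim.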
Preferably, the position mark interface comprises a virtual map corresponding to a virtual scene.
Preferably, the position mark information reported by the first virtual object is determined in the following manner:
displaying a position reporting prompt identifier at a map position of the virtual map corresponding to the actual position of the first virtual object in the virtual scene;
and generating position mark information of the first virtual object determined according to the position report prompt identifier in response to the position report trigger operation aiming at the virtual map, wherein the position mark information comprises the map position of the position report prompt identifier in the virtual map.
Preferably, the step of generating the position mark information of the first virtual object determined according to the position report prompt identifier in response to a position report trigger operation for the virtual map includes:
determining the actual position of the first virtual object currently in the virtual scene as position mark information of the first virtual object in response to a trigger operation of a position report control displayed on the virtual map;
or, in response to a position selection operation performed on the virtual map, determining the position selected on the virtual map as the position mark information of the first virtual object.
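A minimal sketch of the two reporting paths described above, with hypothetical names: the position mark information is either the object's actual map position (report-control trigger) or the position the player selected on the virtual map (map-selection trigger).

```python
def position_mark_info(trigger, actual_map_pos, selected_map_pos=None):
    """Build position mark information for the first virtual object.
    'report_control': report the actual position shown by the
    position-report prompt identifier; 'map_select': report the
    position the player picked on the virtual map."""
    if trigger == "report_control":
        return {"map_pos": actual_map_pos, "source": "actual"}
    if trigger == "map_select":
        if selected_map_pos is None:
            raise ValueError("map_select requires a selected position")
        return {"map_pos": selected_map_pos, "source": "selected"}
    raise ValueError(f"unknown trigger: {trigger!r}")
```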
Preferably, the virtual map display method further includes: constructing a two-dimensional coordinate grid corresponding to the virtual map, wherein a corresponding relation exists between the coordinate position in the two-dimensional coordinate grid and the map position in the virtual map;
the step of adjusting the initial display positions according to the distances between the initial display positions to determine the final display position of each character identifier in the position mark interface includes at least one of the following:
determining, in the two-dimensional coordinate grid, a first target grid intersection that is unoccupied and closest to the map position corresponding to the initial display position, and determining, according to the correspondence, the map position in the virtual map corresponding to the coordinate position of the first target grid intersection as the final display position; and
determining, in the two-dimensional coordinate grid, a second target grid intersection that is unoccupied and closest to the map position corresponding to the initial display position in the direction of the map position corresponding to the actual position, in the virtual scene, of the virtual object associated with the initial display position, and determining, according to the correspondence, the map position in the virtual map corresponding to the coordinate position of the second target grid intersection as the final display position.
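The grid-snapping step above can be sketched as a brute-force search over the two-dimensional coordinate grid for the closest unoccupied intersection (a simplified stand-in for the claimed "first target grid intersection" selection; the grid size and tie-breaking rule are assumptions):

```python
import math

def nearest_free_intersection(target, occupied, grid_size=10):
    """Return the unoccupied grid intersection closest (Euclidean
    distance) to `target` on a (grid_size+1) x (grid_size+1) set of
    intersections; ties are broken by scan order."""
    best, best_d = None, float("inf")
    for gx in range(grid_size + 1):
        for gy in range(grid_size + 1):
            if (gx, gy) in occupied:
                continue  # this intersection already holds an identifier
            d = math.hypot(gx - target[0], gy - target[1])
            if d < best_d:
                best, best_d = (gx, gy), d
    return best
```

The coordinate position found here would then be mapped back to a map position through the stored grid-to-map correspondence before the identifier is drawn.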
Preferably, each character identifier occupies a corresponding display area in the position mark interface, and identity information indicating the identity of the virtual object is displayed in the character identifier;
among the character identifiers displayed in the position mark interface, a preset gap is kept between the identity information displayed in the character identifiers of two adjacent virtual objects, and the display areas occupied by the character identifiers of the two adjacent virtual objects either do not overlap or overlap only at their edges.
Preferably, the step of adjusting the initial display positions according to the distance between the initial display positions to determine the final display position of the character identifier in the position mark interface includes:
in response to detecting that the distance between any two adjacent initial display positions is smaller than a preset gap, adjusting at least one of the two adjacent initial display positions to determine the final display position of the character identifier.
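A greedy one-pass sketch of that adjustment, under the assumption that a crowded identifier is pushed straight away from the already-placed one until the preset gap is respected (the claims leave the exact adjustment open):

```python
import math

def adjust_positions(initial_positions, min_gap=1.0):
    """Place identifiers one by one; if a new position is closer than
    min_gap to an already-placed one, push it radially outward to
    exactly min_gap (coincident points are shifted to the right)."""
    placed = []
    for x, y in initial_positions:
        fx, fy = float(x), float(y)
        for px, py in placed:
            d = math.hypot(fx - px, fy - py)
            if d == 0:
                fx, fy = px + min_gap, py  # coincident points: shift right
            elif d < min_gap:
                scale = min_gap / d
                fx, fy = px + (fx - px) * scale, py + (fy - py) * scale
        placed.append((fx, fy))
    return placed
```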
Preferably, the character identifier is displayed in the position mark interface in the following manner:
displaying each character identifier at its corresponding final display position in the position mark interface, and connecting the character identifier of each virtual object with its corresponding initial display position.
Preferably, the character identifier is displayed in the position mark interface in at least one of the following manners:
in response to an enlarged-display skill trigger operation on the position mark interface, displaying an enlarged view of a target area in the virtual map and/or of the character identifiers of the virtual objects in the target area;
reducing the display size of the character identifier of each virtual object in the position mark interface;
changing the presentation form of the identity information that is displayed in the character identifier and indicates the identity of the virtual object.
Preferably, the character identifier is displayed in the position mark interface in the following manner:
displaying a display strategy control on the position mark interface;
in response to a trigger operation on the display strategy control, determining a display mode for the character identifiers;
and displaying the character identifier of each virtual object in the position mark interface in the determined display mode.
Preferably, the display mode includes at least one of the following:
determining the coverage relationship between the edges of the display areas occupied by the character identifiers of two adjacent virtual objects according to the chronological order in which the virtual objects reported their position mark information; and
determining the coverage relationship between the edges of the display areas occupied by the character identifiers of two adjacent virtual objects according to the identity type of each virtual object.
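Both display modes reduce to choosing a back-to-front draw order for the identifiers, so that an identifier drawn later covers the edge of its neighbor. A sketch with hypothetical marker records (the identity-priority table is an assumption; the claims only say the order depends on identity type):

```python
def draw_order(markers, mode="by_time"):
    """Return markers sorted back-to-front.
    'by_time': earlier reports are drawn first, so the most recently
    reported identifier ends up on top.
    'by_identity': ordering is keyed on an assumed identity priority."""
    if mode == "by_time":
        return sorted(markers, key=lambda m: m["reported_at"])
    priority = {"civilian": 0, "werewolf": 1}  # assumed example priorities
    return sorted(markers, key=lambda m: priority.get(m["identity"], -1))
```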
In a second aspect, an embodiment of the present application provides a virtual map display apparatus that provides a graphical user interface through a terminal device, the virtual map display apparatus including:
a first display control module for displaying at least part of a virtual scene and a first virtual object on the graphical user interface;
the mobile control module is used for responding to the mobile operation of the first virtual object, controlling the first virtual object to move in a first virtual scene and controlling the first virtual scene range displayed in the graphical user interface to correspondingly change according to the movement of the first virtual object;
The second display control module is used for responding to a preset trigger event and controlling the virtual scene displayed in the graphical user interface to be switched from the first virtual scene to a second virtual scene, wherein the second virtual scene comprises at least one second virtual object;
and a third display control module, used for displaying, in response to a touch operation on the function control, a position mark interface in the graphical user interface, and for displaying the character identifier of the at least one second virtual object and/or the first virtual object in the position mark interface according to the position mark information reported by the at least one second virtual object and/or the first virtual object.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor, a storage medium, and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor in communication with the storage medium via the bus when the electronic device is running, the processor executing the machine-readable instructions to perform the steps of the virtual map display method as described above.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having a computer program stored thereon, which when executed by a processor performs the steps of the virtual map display method as described above.
The virtual map display method provided by the embodiments of the application includes the following steps: controlling the first virtual object to move in the first virtual scene in response to a movement operation for the first virtual object, and controlling the first virtual scene range displayed in the graphical user interface to change correspondingly with the movement of the first virtual object; in response to a preset trigger event, controlling the virtual scene displayed in the graphical user interface to switch from the first virtual scene to a second virtual scene, where the second virtual scene includes at least one second virtual object; and in response to a touch operation on the function control, displaying a position mark interface in the graphical user interface, and displaying the character identifier of at least one second virtual object and/or the first virtual object in the position mark interface according to position mark information reported by the at least one second virtual object and/or the first virtual object.
According to this virtual map display method, the character identifier of each player-controlled virtual object can be displayed according to the position mark information the object uploads from the game scene, and the upload process is fast. The position information of the player-controlled virtual objects is thus conveyed and displayed efficiently and clearly, reducing the amount of verbal position reporting in the discussion phase and easing the players' memory burden, while also helping players reach inferences, judgments, and discussions faster, which effectively improves their game efficiency.
In order to make the above objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered limiting the scope, and that other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a virtual map display method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an interface corresponding to a discussion phase provided in an embodiment of the present application;
FIG. 3 is the first schematic diagram of an interface for displaying character identifiers according to an embodiment of the present application;
FIG. 4 is the second schematic diagram of an interface for displaying character identifiers according to an embodiment of the present application;
FIG. 5 is the third schematic diagram of an interface for displaying character identifiers according to an embodiment of the present application;
FIG. 6 is a schematic diagram of an interface of a first virtual scene according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of an interface of a second virtual scene according to an embodiment of the present disclosure;
FIG. 8 is a second diagram of an interface of a first virtual scene according to an embodiment of the present disclosure;
FIG. 9 is a third exemplary interface diagram of a first virtual scene according to an embodiment of the present disclosure;
FIG. 10 is a second diagram of an interface of a second virtual scene according to an embodiment of the present disclosure;
FIG. 11 is a schematic diagram illustrating movement of a virtual object according to an embodiment of the present disclosure;
fig. 12 is a schematic structural diagram of a virtual map display device according to an embodiment of the present disclosure;
fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. Based on the embodiments of the present application, every other embodiment that a person skilled in the art would obtain without making any inventive effort is within the scope of protection of the present application.
With the continuous development of the game industry, game genres keep expanding. Among them, reasoning games have won over more and more players with their unique charm. Such games require multiple players to participate in the interaction, and players belonging to different camps carry out inference and voting while completing designated tasks.
Virtual scene:
is the virtual scene that an application displays (or provides) while running on a terminal or server. Optionally, the virtual scene is a simulated environment of the real world, a semi-simulated, semi-fictional virtual environment, or a purely fictional virtual environment. The virtual scene is either a two-dimensional or a three-dimensional virtual scene, and the virtual environment may be sky, land, ocean, and the like, where the land includes environmental elements such as deserts and cities. The virtual scene is the scene in which user-controlled virtual objects carry out the complete game logic.
Virtual object:
refers to a dynamic object that can be controlled in the virtual scene. Optionally, the dynamic object may be a virtual character, a virtual animal, a cartoon character, or the like. The virtual object is a character that a player controls through an input device, an artificial intelligence (AI) character trained for battles in the virtual environment, or a non-player character (NPC) placed in the virtual-environment battle. Optionally, the virtual object is a virtual character competing in the virtual scene. Optionally, the number of virtual objects in a battle is preset, or is dynamically determined according to the number of clients joining the battle, which is not limited in the embodiments of the present application. In one possible implementation, a user can control a virtual object to move in the virtual scene, for example, to run, jump, or crawl, and can also control the virtual object to fight other virtual objects using the skills, virtual props, and the like provided by the application.
Player character:
refers to a virtual object that a player can manipulate to act in the game environment; in some electronic games it may also be referred to as a god character or hero character. The player character may be at least one of a virtual character, a virtual animal, a cartoon character, a virtual vehicle, or the like.
Game interface:
is the interface corresponding to the application program that is provided or displayed through the graphical user interface; it includes the UI and the game screen with which the player interacts. In alternative embodiments, the UI may include game controls (e.g., skill controls, movement controls, function controls), indication identifiers (e.g., direction indicators, character indicators), information presentation areas (e.g., number of clicks, game time), or game setting controls (e.g., system settings, store, gold coins). In an alternative embodiment, the game screen is the display corresponding to the virtual scene shown by the terminal device, and may include virtual objects executing the game logic in the virtual scene, such as game characters, NPC characters, and AI characters.
Virtual item:
refers to a static object in the virtual scene, such as the terrain, houses, bridges, and vegetation in a game scene. Static objects usually cannot be controlled directly by players, but they can respond to the interactive behavior of virtual objects in the scene (e.g., attacking, dismantling) with a corresponding presentation; for example, a virtual object may remove, pick up, drag, or build part of a building. Optionally, a virtual item may also be unable to respond to a virtual object's interaction; for example, it may likewise be a building, door, window, or plant in the game scene with which the virtual object cannot interact, e.g., a window that the virtual object cannot destroy or remove.
The virtual map display method in an embodiment of the present disclosure may be executed on a terminal device or a server. The terminal device may be a local terminal device. When the virtual map display method runs on a server, it may be implemented and executed based on a cloud interaction system, which includes the server and a client device.
In an alternative embodiment, various cloud applications, such as cloud games, may run under the cloud interaction system. Taking cloud games as an example, a cloud game refers to a game mode based on cloud computing. In the running mode of a cloud game, the body that runs the game program is separated from the body that presents the game screen: the storage and execution of the information processing method are completed on the cloud game server, while the client device receives and sends data and presents the game screen. For example, the client device may be a display device close to the user side with data transmission capability, such as a mobile terminal, a television, a computer, or a handheld computer, while the terminal device that performs the information processing is the cloud game server. During play, the player operates the client device to send operation instructions to the cloud game server; the cloud game server runs the game according to the instructions, encodes and compresses data such as game screens, and returns them to the client device over the network; finally, the client device decodes the data and outputs the game screens.
In an alternative embodiment, the terminal device may be a local terminal device. Taking a game as an example, the local terminal device stores the game program and presents the game screen. The local terminal device interacts with the player through the graphical user interface; that is, the game program is conventionally downloaded, installed, and run on the electronic device. The local terminal device may provide the graphical user interface to the player in a variety of ways; for example, it may be rendered on the display screen of the terminal, or provided to the player by holographic projection. For example, the local terminal device may include a display screen for presenting the graphical user interface, which includes the game visuals, and a processor for running the game, generating the graphical user interface, and controlling display of the graphical user interface on the display screen.
By way of example, one such reasoning game is favored by a wide range of players. In the reasoning game, the multiple participating players join the same match together. After entering the game, the virtual objects of different players are assigned different character attributes, such as identity attributes; assigning different character attributes determines different camps, and players win the match by executing the tasks the game assigns at the different stages of play. For example, multiple virtual objects with character attribute A may be "eliminated" by a virtual object with character attribute B during the play stage, winning the match for the latter. Typically, 10 participants are required to play the same match. At the beginning of the game, the virtual objects are assigned identity information (character attributes), for example civilian identity or werewolf identity. A virtual object with civilian identity wins the match by completing the designated tasks assigned during the play stage, or by eliminating the virtual objects with werewolf identity in the current match; a virtual object with werewolf identity wins by attacking and eliminating, during the play stage, virtual objects other than those with werewolf identity.
In a reasoning game, the game play typically comprises two stages: an action stage and a discussion stage.
During the action stage, each virtual object is typically assigned one or more game tasks. In an alternative embodiment, each virtual object is assigned its own one or more game tasks, and the player completes the game play by controlling the corresponding virtual object to move in the game scene and execute the corresponding game tasks. In an alternative embodiment, a common game task is determined for the virtual objects having the same character attribute in the current game play. In the action stage, the virtual objects participating in the current game play can freely move to different areas of the game scene to complete the assigned game tasks. The virtual objects in the current game play include virtual objects with a first character attribute and virtual objects with a second character attribute. In an optional implementation, when a virtual object with the second character attribute moves within a preset range of a virtual object with the first character attribute in the virtual scene, it can respond to an attack instruction and attack the virtual object with the first character attribute so as to eliminate it.
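The preset-range attack rule described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the role names, data structures and the range value are all assumptions for the example.

```python
import math

# Assumed preset attack range, in scene units.
ATTACK_RANGE = 5.0

def within_preset_range(attacker_pos, target_pos, attack_range=ATTACK_RANGE):
    """Return True if the target lies within the attacker's preset range."""
    dx = attacker_pos[0] - target_pos[0]
    dy = attacker_pos[1] - target_pos[1]
    return math.hypot(dx, dy) <= attack_range

def try_attack(attacker, target):
    """Eliminate the target only if the role attributes and distance permit it."""
    if (attacker["role"] == "second"      # e.g. the werewolf-style attribute
            and target["role"] == "first"  # e.g. the civilian-style attribute
            and target["alive"]
            and within_preset_range(attacker["pos"], target["pos"])):
        target["alive"] = False            # the attack instruction succeeds
        return True
    return False
```

In this sketch the attack instruction is only honored when the target is within `ATTACK_RANGE`, mirroring the "preset range" condition in the text.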
In the discussion stage, a discussion function is provided for the virtual objects on behalf of the players; through this function, the behavior of the virtual objects in the action stage is disclosed in order to decide whether to eliminate a particular virtual object from the current game play.
For example, the game play includes two stages: an action stage and a discussion stage. In the action stage, the multiple virtual objects in the game play move freely in the virtual scene, and other virtual objects appearing within a preset range can be seen in the game picture displayed from the perspective of a virtual object. A virtual object with the civilian identity can complete the assigned game tasks by moving in the virtual scene; a virtual object with the werewolf identity sabotages the tasks that the virtual objects with the civilian identity have completed in the virtual scene, or can execute the specific game tasks assigned to it, and can at the same time eliminate virtual objects with the civilian identity by attacking them during the action stage. When the game play proceeds from the action stage to the discussion stage, the players discuss through their corresponding virtual objects in an attempt to determine, from the game behavior in the action stage, which virtual objects have the werewolf identity. The discussion result is determined by voting; whether a virtual object needs to be eliminated is determined according to the discussion result. If such a virtual object exists, it is eliminated accordingly; if not, no virtual object is eliminated in the current discussion stage. In the discussion stage, the discussion may be carried out by voice, text, or other means.
During the discussion stage of the game, players need to obtain basic information for reasoning, judging and discussing, which may include, by way of example and not limitation: who initiated the discussion, who was killed, where the body was located, where each player was located, and the like. However, since many players are involved, it is difficult to remember the behavior description and position statement of each player in the game scene, and the positions of the players in the game scene may become confused, resulting in lower game efficiency for the players.
On this basis, an embodiment of the present application provides a virtual map display method, which displays the character identifier of each virtual object by uploading the position mark information of the virtual object controlled by each player in the game scene. The information uploading process is fast, so the position information of the virtual object controlled by each player in the game scene is transferred and displayed in an efficient and clear manner. This helps reduce the information each player must state in the game discussion stage, reduces the memory burden on the player, assists the player in faster reasoning, judgment and discussion in the discussion stage, and effectively improves the player's game efficiency.
In one embodiment of the present application, an implementation environment is provided, which may include: a first terminal device, a game server and a second terminal device. The first terminal device and the second terminal device each communicate with the server to realize data communication. In this embodiment, the first terminal device and the second terminal device each have installed a client for executing the display method of the game process provided by the application, and the game server is a server for executing that display method. The first terminal device and the second terminal device can each communicate with the game server through the client.
Taking the first terminal device as an example, the first terminal device establishes communication with the game server by running the client. In an alternative embodiment, the server establishes a game play according to the game request from the client. The parameters of the game play may be determined according to the parameters in the received game request; for example, the parameters of the game play may include the number of people participating in the game play, the character level for participating in the game play, and the like. When the first terminal device receives the response of the server, the virtual scene corresponding to the game play is displayed through the graphical user interface of the first terminal device. In an alternative embodiment, the server determines a target game play for the client from a plurality of established game plays according to the game request of the client, and when the first terminal device receives the response of the server, the virtual scene corresponding to that game play is displayed through the graphical user interface of the first terminal device. The first terminal device is a device controlled by a first user; the virtual object displayed in the graphical user interface of the first terminal device is a player character controlled by the first user, and the first user inputs operation instructions through the graphical user interface so as to control the player character to execute corresponding operations in the virtual scene.
Taking the second terminal device as an example, the second terminal device likewise establishes communication with the game server by running the client. In an alternative embodiment, the server establishes a game play according to the game request from the client, with the parameters of the game play determined according to the parameters in the received game request, for example the number of people participating in the game play, the character level for participating in the game play, and the like. When the second terminal device receives the response of the server, the virtual scene corresponding to the game play is displayed through the graphical user interface of the second terminal device. In an alternative embodiment, the server determines a target game play for the client from a plurality of established game plays according to the game request of the client, and when the second terminal device receives the response of the server, the virtual scene corresponding to that game play is displayed through the graphical user interface of the second terminal device. The second terminal device is a device controlled by a second user; the virtual object displayed in the graphical user interface of the second terminal device is a player character controlled by the second user, and the second user inputs operation instructions through the graphical user interface so as to control the player character to execute corresponding operations in the virtual scene.
The server performs data calculation according to the game data reported by the first terminal device and the second terminal device, and synchronizes the calculated game data to the first terminal device and the second terminal device, so that the first terminal device and the second terminal device control the rendering of the corresponding virtual scenes and/or virtual objects in the graphical user interface according to the synchronized data issued by the server.
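The report-compute-synchronize loop described above can be sketched as follows. Plain function calls stand in for the network messages between the terminal devices and the game server; all class and method names are assumptions for the illustration, not an actual implementation.

```python
class GameServer:
    """Authoritative holder of per-object game state (illustrative only)."""

    def __init__(self):
        self.state = {}    # object id -> latest computed game data
        self.clients = []  # connected terminal devices

    def register(self, client):
        self.clients.append(client)

    def report(self, object_id, data):
        """A terminal device reports game data for its virtual object."""
        self.state[object_id] = data
        self.broadcast()

    def broadcast(self):
        """Synchronize the computed state back to every terminal device."""
        for client in self.clients:
            client.on_sync(dict(self.state))


class TerminalDevice:
    """A first or second terminal device receiving synchronized data."""

    def __init__(self, server):
        self.synced = {}
        server.register(self)

    def on_sync(self, state):
        # Rendering of the virtual scene/objects would be driven from here.
        self.synced = state
```

In this sketch every `report` triggers a `broadcast`, so both terminal devices always render from the same server-issued state.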
In this embodiment, the virtual object controlled by the first terminal device and the virtual object controlled by the second terminal device are virtual objects in the same game play. The virtual object controlled by the first terminal device and the virtual object controlled by the second terminal device may have the same character attribute, or may have different character attributes.
It should be noted that the virtual objects in the current game play may include two or more virtual characters, and different virtual characters may correspond to different terminal devices; that is, in the current game play, two or more terminal devices respectively perform transmission and synchronization of game data with the game server.
Referring to fig. 1, fig. 1 is a flowchart of a virtual map display method according to an embodiment of the present application. As shown in fig. 1, a graphical user interface is provided through a terminal device, on which at least part of a virtual scene and a first virtual object are displayed, and the virtual map display method includes:
S110: in response to a movement operation on the first virtual object, controlling the first virtual object to move in the first virtual scene, and controlling the first virtual scene range displayed in the graphical user interface to change correspondingly according to the movement of the first virtual object.
S120: in response to a preset trigger event, controlling the virtual scene displayed in the graphical user interface to switch from the first virtual scene to a second virtual scene, where at least one second virtual object is included in the second virtual scene.
S130: in response to a touch operation on a functionality control, displaying a position mark interface in the graphical user interface, and displaying in the position mark interface the character identifier of the at least one second virtual object and/or the first virtual object according to the position mark information reported by the at least one second virtual object and/or the first virtual object.
The terminal device according to the embodiment of the present application mainly refers to a smart device that provides a graphical user interface and is capable of performing control operations on a virtual object. The terminal device may include, but is not limited to, any of the following: smart phones, tablet computers, portable computers, desktop computers, digital televisions, game machines, and the like. The terminal device has installed and running an application program supporting a game, such as an application program supporting a three-dimensional or two-dimensional game. In the embodiment of the application, the application program is described as a game application; optionally, it may be a networked online game application or a stand-alone game application.
A graphical user interface is an interface display format through which a person communicates with a computer. It allows the user to manipulate icons or menu options on a screen using an input device such as a mouse, keyboard or joystick, and also allows the user to manipulate icons or menu options by performing a touch operation on the touch screen of the terminal device, so as to select a command, start a program, or perform some other task.
After the game client on the terminal device responds to an opening operation by the game player, at least part of the virtual scene and the first virtual object located in the virtual scene are displayed in the graphical user interface. The opening operation may include clicking the application through a mouse on a computer, clicking or sliding the game APP through a touch screen on a mobile device, or opening the game client through voice input.
In an embodiment of the present application, the virtual scenes may include the above-mentioned virtual scene corresponding to the action stage of the reasoning game. The virtual object manipulated by each player may be active in the virtual scene during the action stage; by way of example, the activities of the virtual objects in the virtual scene may include, but are not limited to, at least one of the following: walking, running, jumping, climbing, lying down, attacking, releasing skills, picking up props and sending messages. The virtual objects active in the virtual scene may include, in addition to the virtual objects manipulated by players, other virtual objects not manipulated by players. In addition, the virtual scenes may include the above-mentioned virtual scene corresponding to the discussion stage of the reasoning game, in which each player can carry out reasoning and voting.
The first virtual object may refer to the in-game virtual object of the account of the game client logged in on the terminal device, i.e. the virtual object manipulated by the player corresponding to that account; however, the possibility that the first virtual object is controlled by another application or an artificial intelligence module is not excluded.
The steps of the foregoing example provided in the embodiments of the present application will be described below, taking as an example the case in which the method is applied to a terminal device.
In step S110, in response to the movement operation of the game player with respect to the first virtual object, the first virtual object is controlled to move in the first virtual scene, and the first virtual scene range displayed in the graphical user interface is controlled to correspondingly change according to the movement of the first virtual object.
The movement operation in the embodiment of the application is issued by the game player to the terminal device and is used for controlling the first virtual object to move in the first virtual scene of the graphical user interface. In response to the movement operation, the terminal device can control the first virtual object to move in the first virtual scene; as the first virtual object moves, its position in the first virtual scene changes correspondingly. That is, in response to the movement operation, the terminal device can also control the first virtual scene range displayed in the graphical user interface to change correspondingly according to the movement of the first virtual object.
For example, the game picture displayed in the graphical user interface may be a picture obtained by observing the first virtual scene with the first virtual object as the observation center. When the first virtual object is controlled to move in the first virtual scene, the game picture moves with it; that is, the observation center of the game picture is bound to the position of the first virtual object, so that the observation center moves as the position of the first virtual object moves. However, the present application is not limited to this; another observation position in the virtual scene may serve as the observation center, as long as the first virtual object is included in the displayed first virtual scene and the displayed first virtual scene range changes correspondingly with the movement of the first virtual object.
For example, the process of controlling the first virtual object to move in the first virtual scene may include: receiving a selection operation of the game player on the first virtual object, and controlling the first virtual object to move in the first virtual scene in response to a drag operation on the selected first virtual object; alternatively, a selection operation of the game player on the first virtual object may be received, and the first virtual object controlled to move to the selected position in response to a position selection operation performed in the first virtual scene. As an example, the movement operation may include, but is not limited to: on a computer, clicking the first virtual object with the left mouse button without releasing it and dragging the mouse to change the position of the first virtual object in the first virtual scene; or, on a mobile device, long-pressing the first virtual object with a finger without releasing it and sliding the finger on the graphical user interface to change the position of the first virtual object in the first virtual scene.
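The two movement-control variants above (dragging the selected object, or selecting a target position for it) can be sketched as follows. The data structures and function names are illustrative assumptions, not the patented implementation.

```python
def apply_drag(obj, delta):
    """Drag variant: move the selected virtual object by the drag offset."""
    x, y = obj["pos"]
    obj["pos"] = (x + delta[0], y + delta[1])
    return obj["pos"]

def move_to_selected_position(obj, target_pos):
    """Position-selection variant: move the object to the chosen scene position."""
    obj["pos"] = tuple(target_pos)
    return obj["pos"]
```

In either variant, the displayed first virtual scene range would then be recomputed from the object's updated position.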
Further, as the first virtual scene range changes according to the movement of the first virtual object, a second virtual object may be included in the changed first virtual scene (a second virtual object may also have been included in the first virtual scene before the change), where the second virtual object is a virtual object controlled by another player in the current game play. Similarly, the terminal devices of other players can, in response to the movement operations issued to them, control the second virtual objects to move in the first virtual scene.
In the embodiment of the application, the first virtual object can execute the tasks designated by the system in the action stage, so as to win by completing the tasks; the same is true for the second virtual object in the action stage. If the first virtual object has the second character attribute and the second virtual object has the first character attribute, the first virtual object can interfere with or kill the second virtual object while the latter executes its tasks, or complete the tasks formulated for virtual objects with the second character attribute. If the first virtual object has the first character attribute and the second virtual object has the second character attribute, the second virtual object can likewise interfere with or kill the first virtual object, or complete the tasks formulated for virtual objects with the second character attribute. If the first virtual object and the second virtual object both have the first character attribute, they can execute their tasks together or separately. If they both have the second character attribute, they can, together or separately, search for virtual objects with the first character attribute in order to interfere with those objects as they perform their tasks, kill them, or complete the tasks formulated for virtual objects with the second character attribute.
In step S120, the trigger event refers to an event for triggering virtual scene switching. In an alternative embodiment, the trigger event is the first virtual object being controlled to move within a preset range of a second virtual object in a specific state in the virtual scene; for example, when a second virtual object in the "dead" state exists in the virtual scene, the first virtual object is controlled to move to the vicinity of that second virtual object. In an alternative embodiment, the trigger event is a switching operation for triggering the switch from the first virtual scene to the second virtual scene, which may include, but is not limited to: an operation on a return option displayed on the graphical user interface, to exit the first virtual scene and return to displaying the second virtual scene; or an operation on a start option displayed on the graphical user interface, to switch from the first virtual scene to the second virtual scene. The graphical user interface is switched from the first virtual scene to the second virtual scene in response to the preset trigger event; the second virtual scene includes the first virtual object and at least one second virtual object, and may also include a third virtual object.
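The two trigger-event variants described above (proximity to a "dead"-state second virtual object, or an explicit switching operation) can be sketched as one predicate. The names and the proximity threshold are illustrative assumptions.

```python
# Assumed preset range around a dead virtual object.
TRIGGER_RANGE = 2.0

def proximity_trigger(first_pos, second_objects, trigger_range=TRIGGER_RANGE):
    """True if the first virtual object is within range of any dead object."""
    for obj in second_objects:
        if obj["state"] == "dead":
            dx = first_pos[0] - obj["pos"][0]
            dy = first_pos[1] - obj["pos"][1]
            if (dx * dx + dy * dy) ** 0.5 <= trigger_range:
                return True
    return False

def should_switch_scene(first_pos, second_objects, switch_requested):
    """Either trigger variant switches from the first to the second scene."""
    return switch_requested or proximity_trigger(first_pos, second_objects)
```

When the predicate returns True, the terminal device would switch the graphical user interface from the first virtual scene to the second virtual scene.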
For example, the virtual object displayed in the second virtual scene may be the character model of the second virtual object, or the character icon of the second virtual object; similarly, the second virtual scene may display the character model of the first virtual object and/or the third virtual object, or their character icons. The display manner of the virtual objects in the first virtual scene is similar to that in the second virtual scene and is not repeated here.
In this embodiment, the second virtual scene may be the virtual scene corresponding to the above-mentioned discussion stage. The terminal device enters the discussion stage in response to an ending operation of the action stage or an opening operation of the discussion stage. In the discussion stage, virtual objects with the first character attribute and virtual objects with the second character attribute in the surviving state, as well as virtual objects with the first character attribute and virtual objects with the second character attribute eliminated in the action stage, may be displayed in the second virtual scene; the eliminated virtual objects cannot vote at this stage.
In an optional implementation, the terminal device acquires the position mark information reported by the first virtual object and/or at least one second virtual object in the game play where the first virtual object is located.
In an optional embodiment, in response to a touch operation on the functionality control, the terminal device acquires the position mark information reported by the first virtual object and/or at least one second virtual object in the game play where the first virtual object is located.
It should be understood that the step of acquiring the position mark information reported by the first virtual object and/or the at least one second virtual object in the game play where the first virtual object is located may precede step S130, may follow step S130, or may follow the sub-step "responding to the touch operation on the functionality control" in step S130.
In an alternative embodiment, the position mark information may be actively reported by the first virtual object and/or the at least one second virtual object when a certain condition is met, or the terminal device may respond to a certain condition so as to actively acquire the position mark information of the first virtual object and/or the at least one second virtual object.
For example, the position mark information reported by the first virtual object and/or at least one second virtual object in the game play where the first virtual object is located may be acquired in response to any one of the following conditions: (1) detecting entry into the second virtual scene; (2) detecting that the survival state of any virtual object in the virtual environment has changed; (3) detecting a position report request initiated by any virtual object in the game play; (4) detecting a touch operation on a functionality control displayed on the graphical user interface.
For condition (1), the terminal device, in response to switching from the first virtual scene to the second virtual scene, acquires the position mark information reported by the first virtual object and/or at least one second virtual object in the current game play. That is, the terminal device acquires the position mark information of each virtual object in response to entering the voting stage. For example, when a certain game player suddenly initiates a discussion and enters the discussion stage (the second virtual scene), since the discussion needs to refer to the position information of each game player in the first virtual scene, each game player needs, when entering the second virtual scene, to report the position mark information he or she had in the first virtual scene before entering. Whether each game player reports position mark information in the first virtual scene, or whether each game player reports true position mark information, is not mandatory.
For condition (2), the terminal device, in response to detecting that the survival state of any virtual object in the virtual environment has changed, acquires the position mark information reported by the first virtual object and/or at least one second virtual object in the current game play. For example, if a virtual object controlled by a certain player dies in the first virtual scene, the position mark information of the death position is automatically reported to the game server, and the game server shares the position mark information of the dead virtual object with the other players in the current game play, so that the other players can see where the dead player died in the first virtual scene. The other players can then report their own position mark information based on this position sharing, which plays an auxiliary role in the reasoning and discussion of the players in the discussion stage. Similarly, whether each player reports his or her own position mark information based on the above position sharing is not mandatory.
For condition (3), the position information reported by the first virtual object and/or at least one second virtual object in the game play where the first virtual object is located is acquired in response to a position report request initiated by any virtual object in the game play. For example, taking a position report request initiated by the first virtual object as an example, the terminal device may respond to the request to acquire the position mark information reported by the first virtual object; at the same time, the terminal device may also send the position report request to the game server, which shares it with the other players in the current game play so as to acquire the position mark information reported by the second virtual objects controlled by those players.
For condition (4), the position mark information reported by the first virtual object and/or at least one second virtual object in the game play where the first virtual object is located is acquired in response to a touch operation on the functionality control displayed on the graphical user interface.
Here, the functionality control refers to a control for reporting the position of a virtual object in the first virtual scene. As an example, the functionality control may be a control displayed on the graphical user interface bearing words such as "report" or "position report", or a control bearing words such as "map". The functionality control may be a circular control, a square control, or an irregular graphical control, and may be disposed near a boundary of the graphical user interface. At any moment, the player can perform a position reporting operation through the functionality control.
In an alternative embodiment, after a player reports position mark information, a reminder mark is set on the functionality control; the reminder mark is used to remind the other players that position mark information has been reported during the game, and may, for example, be a small red dot. After seeing the reminder mark, the other players can respond with a touch operation on the functionality control, thereby opening the graphical user interface displaying the reported position mark information. A player can then view the position information previously reported by other players in the virtual scene, and can at the same time report his or her own position information, or simply view the position information of others without reporting. The touch operation may be a sliding operation or a clicking operation; that is, the position mark information reported by the at least one second virtual object and/or the first virtual object may be acquired in response to a sliding or clicking operation on the functionality control.
In step S130, in response to the touch operation on the functionality control, a position mark interface is displayed in the graphical user interface, and in the position mark interface the character identifier of the at least one second virtual object and/or the first virtual object is displayed according to the position mark information reported by the at least one second virtual object and/or the first virtual object. That is, after the touch operation on the functionality control is responded to, the position mark interface is displayed in the graphical user interface, and the corresponding character identifier is displayed in the position mark interface according to the position mark information of each virtual object.
Here, the position mark interface refers to an interface capable of displaying the character identifiers corresponding to the position mark information reported by each virtual object. In one embodiment, the position mark interface includes a virtual map corresponding to the virtual scene. The virtual map can selectively represent, in two-dimensional or multi-dimensional form on a plane or sphere, the graphics or images of the virtual scene in the game, reflecting their distribution characteristics and interrelationships at a certain scale. In this embodiment, character identifiers are displayed at different positions of the virtual map so as to represent the positions, in the first virtual scene, of the virtual objects represented by those character identifiers.
A character identifier refers to displayable identity information indicating the identity of a virtual object. Specifically, the display modes of the character identifier can be divided into two types. The first screens and condenses the original identity information of the virtual object, displaying only the part of the content that can represent the identity, such as the key information in the name. The second uses a label, character or color to represent the identity information of the virtual object, such as numbers (1, 2, 3, …), letters (a, b, c, …) or colors (red, yellow, blue, …). When labels, characters or colors are used to represent the identity information, an identity information comparison list can be displayed on one side of the graphical user interface, indicating the identity information of the virtual objects corresponding to the numbers 1, 2, 3, …, to the letters a, b, c, …, or to the colors red, yellow, blue, …. The identity information comparison list can assist the player in quickly identifying, on the position mark interface, the marked positions of the virtual objects controlled by other players.
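The second identifier style above (numbers, letters or colors resolved through a comparison list shown beside the interface) can be sketched as a simple lookup. The sample data and function names are assumptions for the illustration.

```python
def build_comparison_list(identities, labels):
    """Pair each label (number/letter/color) with a virtual object identity."""
    return dict(zip(labels, identities))

def resolve_identity(label, comparison_list):
    """Look up the identity behind a displayed character identifier."""
    return comparison_list.get(label, "unknown")
```

The comparison list itself would be rendered on one side of the graphical user interface, so a player tapping identifier "1" can quickly see which virtual object it stands for.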
For example, as shown in fig. 2, in the second virtual scene 210 (discussion stage), a functionality control 220 (shown in the figure as a "map" control) is displayed, together with a first virtual object 230, a second virtual object 240 and a third virtual object 250 preparing for a discussion vote, where the first virtual object 230 is the in-game virtual object of the account of the game client logged in on the terminal device, the second virtual object 240 is a virtual object in the surviving state other than the first virtual object 230, and the third virtual object 250 is a virtual object in the dead state other than the first virtual object 230. Specifically, in response to a touch operation (e.g., a click operation) on the functionality control 220, a position mark interface may be displayed in the second virtual scene 210. Specifically, as shown in fig. 3, a flower nursery 311, a library 312, a health care room 313, a dormitory 314, a hall 315, and the like are displayed in the position mark interface 310, along with character identifiers 320 corresponding to the virtual objects whose position mark information has been uploaded. The character identifiers indicate identity information by the numerals 1, 2, 3, 4 and 5, wherein virtual object No. 1 is located in the library 312, virtual object No. 2 is located in the health care room 313, virtual object No. 3 is located in the hall 315, virtual object No. 4 is located in the flower nursery 311, and virtual object No. 5 is located in the dormitory 314; each character identifier occupies a circular display area in the position mark interface. It should be understood that, in the embodiment of the present application, the position mark interface may also be displayed in the first virtual scene.
According to the method and the device of this embodiment, the position mark information of each player in the game is uploaded, and the uploading process is fast, so the position information of each player is transmitted and displayed in an efficient and clear manner. This reduces the amount of information players must state during the game discussion stage, effectively lightens the player's memory burden, assists players in making faster reasoning judgments and discussions during the discussion stage, and effectively improves the players' game efficiency.
However, in some cases, multiple players may report position mark information in the same area. As shown in fig. 4, several character identifiers 320 are displayed simultaneously within a small area of the position mark interface 310, which causes different character identifiers to block one another, so that the player cannot tell which player each character identifier refers to. Here, each character identifier carries the number of its corresponding player, such as 1, 2, 3, and so on.
Based on this, the embodiment of the application may also adjust the display position of each character identifier displayed in the position mark interface, so as to prevent the character identifiers from blocking each other.
That is, in an alternative embodiment of the present application, the character identifiers of the virtual objects may be displayed in the position mark interface such that the first distance between the character identifiers of any two adjacent virtual objects is greater than a preset distance; in other words, a preset gap exists between the character identifiers of two adjacent virtual objects. The character identifiers are thus dispersed and either not blocked at all (i.e., not overlapping) or blocked only over part of the display area they occupy in the position mark interface, while the player can still clearly see the identity of the virtual object referred to by each character identifier. Here, the first distance between character identifiers refers to the distance between their center points. Each character identifier occupies a certain area, for example a circular identifier with a given radius, and by adjusting the distance between center points, the blocked area between character identifiers is adjusted — for example, so that a gap is formed between any two adjacent character identifiers, or so that any two adjacent character identifiers coincide only at their edges.
Here, the character identifier occupies a corresponding display area in the position mark interface, and identity information indicating the identity of the virtual object is displayed within the character identifier. In a preferred embodiment, among the character identifiers displayed in the position mark interface, the character identifiers of two adjacent virtual objects are not blocked, meaning that a preset gap exists between the identity information displayed in the character identifiers of the two adjacent virtual objects; the display areas occupied by the two character identifiers may not overlap at all, or may overlap only at the edges of the display areas.
Illustratively, a character identifier includes an occupied display area, such as a circle shown in fig. 3, where the circle represents the display area and 1, 2, 3, etc. represent the identity information indicating the identity of the virtual object. The preset gap means that there is a gap between the pieces of identity information, i.e. between the numbers, while the display areas themselves may overlap, for example at their covered edges. Here, if the identity information is represented by color, it may be required that the overlap ratio of the display areas occupied by the character identifiers of two adjacent virtual objects is smaller than a preset ratio, where the overlap ratio is the ratio of the overlapping area of the two display areas to each display area. The preset ratio may be set to 30% or 40%, and the specific value may be adjusted according to the actual situation.
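As an illustrative sketch only (not part of the claimed method), the overlap ratio of two equal-radius circular character identifiers can be computed from the distance between their center points using the standard circle-intersection (lens) area formula; the function names and the 30% threshold below are assumptions for illustration:

```python
import math

def circle_overlap_ratio(d: float, r: float) -> float:
    """Ratio of the lens-shaped intersection of two circles of radius r
    whose centers are d apart, relative to one circle's area."""
    if d >= 2 * r:
        return 0.0  # circles do not touch
    if d <= 0:
        return 1.0  # concentric: full overlap
    # lens area for two circles of equal radius r at center distance d
    lens = 2 * r * r * math.acos(d / (2 * r)) - (d / 2) * math.sqrt(4 * r * r - d * d)
    return lens / (math.pi * r * r)

PRESET_RATIO = 0.30  # assumed value; the text allows e.g. 30% or 40%

def identifiers_conflict(d: float, r: float) -> bool:
    """True if two circular identifiers overlap beyond the preset ratio."""
    return circle_overlap_ratio(d, r) >= PRESET_RATIO
```

For example, two identifiers of radius 1 whose centers are 1 apart overlap by roughly 39% of each circle's area, which would exceed a 30% threshold and trigger adjustment.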
In implementation, the initial display positions of the character identifiers are determined according to the position mark information; the initial display positions are then adjusted according to the distances between them, to determine the final display positions of the character identifiers in the position mark interface; and the character identifiers of the at least one second virtual object and/or the first virtual object are displayed according to the final display positions.
Here, the initial display position refers to the display position, determined from the position mark information reported by the player, of the virtual object in the first virtual scene before any adjustment by the system; for example, it is the unadjusted display position derived from the position information of the virtual character that the game server received from the first or second terminal device. The final display position refers to the display position obtained after each initial display position has been adjusted by the system according to the distances between the initial display positions. For example, the game server adjusts each initial display position according to the distances between them to determine the final display positions, and sends the final display positions to each terminal device; the terminal device then displays the character identifiers according to the final display positions, so that a preset gap exists between the character identifiers of any two virtual objects and mutual blocking between character identifiers is avoided.
In actual adjustment, in response to detecting that the distance between any two adjacent initial display positions is smaller than the preset gap, at least one of the two initial display positions is adjusted so as to determine the final display positions of the character identifiers. Here, the virtual objects may report their position mark information at the same time, or at different times. In the latter case, there is a time difference between reports; at that point, it may be detected whether the distance between the initial display position corresponding to the currently reported position mark information and the display position of each character identifier already present in the position mark interface is smaller than the preset gap, and if so, either the initial display position corresponding to the current report or the display positions of the existing character identifiers may be adjusted.
Here, when it is detected that the distance between any two adjacent initial display positions is smaller than the preset gap, the initial display position of either character identifier may be adjusted, or the initial display positions of both character identifiers may be adjusted simultaneously, so as to determine the final display positions of the character identifiers.
Specifically, if the distance between any two adjacent initial display positions is not smaller than the preset gap, the initial display positions of the character identifiers need not be adjusted; that is, the corresponding character identifiers are displayed directly at the initial display positions.
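A minimal sketch of this adjustment rule, under assumed names and an assumed gap value (the application does not specify a concrete algorithm): any pair of markers closer than the preset gap is pushed apart along the line joining their centers until the gap is restored, repeated until no pair conflicts.

```python
import math

PRESET_GAP = 10.0  # assumed value, in map units

def separate(positions):
    """Adjust initial display positions so every pair is at least
    PRESET_GAP apart; positions is a list of [x, y], mutated in place."""
    for _ in range(50):  # a few relaxation passes usually suffice
        moved = False
        for i in range(len(positions)):
            for j in range(i + 1, len(positions)):
                (x1, y1), (x2, y2) = positions[i], positions[j]
                d = math.hypot(x2 - x1, y2 - y1)
                if d < PRESET_GAP:
                    if d == 0:
                        ux, uy = 1.0, 0.0  # arbitrary direction for coincident points
                    else:
                        ux, uy = (x2 - x1) / d, (y2 - y1) / d
                    # push both markers apart symmetrically along the center line
                    shift = (PRESET_GAP - d) / 2
                    positions[i] = [x1 - ux * shift, y1 - uy * shift]
                    positions[j] = [x2 + ux * shift, y2 + uy * shift]
                    moved = True
        if not moved:
            break
    return positions
```

Markers already far enough apart are left at their initial display positions, matching the rule above.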
When this position adjustment is applied to a reasoning game, it is realized through the following steps: after receiving the position mark information uploaded by the players through their terminal devices, the game server determines the minimum distance between the character identifiers corresponding to any two pieces of position mark information (a minimum distance that does not cause excessive overlap); it then re-plans the position of each character identifier in the virtual map according to the calculated minimum distance, and displays them in the virtual map. It should be noted that the virtual scene corresponding to the case site is displayed on the virtual map, and a player can see the positions reported by others only after reporting his own position; a player who has not reported his own position cannot see the positions reported by others.
The following describes an example in which the first virtual object reports position mark information. Since the manner in which the second virtual object reports position mark information is the same as that of the first virtual object, it is not described in detail again.
The position mark information reported by the first virtual object is determined by the following modes:
displaying a position reporting prompt identifier at the map position of the virtual map corresponding to the actual position of the first virtual object in the virtual scene; and in response to a position reporting trigger operation on the virtual map, generating the position mark information of the first virtual object determined according to the position reporting prompt identifier. Here, the position mark information includes the map position in the virtual map indicated by the position reporting prompt identifier.
Here, the position reporting prompt identifier is used to prompt the player with the location where he is currently located in the virtual scene, for example, to indicate in the virtual map the real map position where the virtual object is currently located. The position reporting trigger operation refers to an operation for reporting the position mark information; the trigger operation may be a click operation or a sliding operation.
A position reporting control is displayed on the virtual map. In response to a click or slide operation on the position reporting control, a position reporting prompt identifier pops up on the virtual map at the position corresponding to the real map position where the virtual object is currently located; the prompt identifier indicates the position information of the virtual object in the first virtual scene. In response to a position confirmation operation, the map position corresponding to the prompt identifier is reported to the game server as position mark information, and a character identifier corresponding to the virtual object is then generated on the virtual map at that map position, so that other players learn, through the character identifier, the position in the first virtual scene of the player who reported the position mark information. In addition, if a player does not want other players to know his position in the first virtual scene, he may decline to report the position mark information, or may change the position of the first virtual object in the first virtual scene indicated by the position mark information.
Specifically, in response to a trigger operation for a position reporting control displayed on the virtual map, an actual position of the first virtual object currently in the virtual scene is determined as position mark information of the first virtual object.
The position report control refers to a control for reporting position mark information of the virtual object to the game server, the position mark information can be determined based on a position report prompt identifier, and the position report control can be a button. For example, in response to a trigger operation for the position report control, a position report prompt identifier is popped up on the virtual map at a position corresponding to a real map position where the virtual object is currently located in the virtual map, and in response to a position confirmation operation, an actual position of the virtual object in the virtual scene is reported as position mark information, so that the actual position of the virtual object is displayed on the virtual map.
In addition, the position selected on the virtual map may be determined as the position-marker information of the first virtual object in response to a position-selection operation performed on the virtual map.
The position selection operation refers to an operation of reselecting the position of the first virtual object in the virtual scene, where the reselected position is no longer the real position of the first virtual object. The position selection operation may consist of first selecting the position reporting prompt identifier and then dragging it across the graphical user interface, in which case the map position to which the prompt identifier is dragged is determined as the position mark information of the virtual object. Alternatively, the position selection operation may be a click operation on a target map position on the virtual map; the position reporting prompt identifier is then displayed at the target map position, and the target map position is determined as the position mark information of the virtual object.
In the above situation, the position mark information reported by the player is not his real position in the virtual environment but false position information: because the displayed position reporting prompt identifier can be dragged, it can be dragged to another position in the virtual map before reporting, and the reported position information is then the position after dragging.
In this embodiment, during the discussion stage, the player uploads to the game server the position of the virtual object he controlled during the action stage — for example, the position where a player actively initiated a vote, the position where a player found a corpse, or the position of the corpse itself — and the uploaded positions are synchronized to all players, helping players select voting targets according to the positions uploaded to the game server.
Before each player uploads his own position mark information to the server, the position where he initiated the vote is shown in the virtual map of the game on his terminal device. The player may then drag the position reporting prompt identifier indicating his position information to change his position, so as to interfere with other players. After dragging the position reporting prompt identifier, the player may click a confirm button or an upload button with a similar function to share the dragged position mark information with all other players, who will then see the shared position — here, the position after dragging — in the virtual map of their games.
In the above way, each player can finally see the positions shared by everyone, which may be dragged positions or undragged positions. During the game, the players do not necessarily all share their positions at the same time, so the position information does not appear in every player's game interface simultaneously: whichever player shares his position first has it displayed first, and the positions shared by the others follow. At the same time, other players may also choose not to share their own positions.
For example, as shown in fig. 5, in the position mark interface 310 there is no blocking of identity information between the final display positions of the character identifiers 320, so the player can clearly view the positions of other players on the virtual map, assisting the player in faster reasoning and discussion during the game discussion stage.
The adjustment of the initial display positions according to the distances between them, to determine the final display positions of the character identifiers in the position mark interface, mainly comprises the following two modes:
The first adjustment mode: a two-dimensional coordinate grid corresponding to the virtual map is constructed, and each initial display position is finely adjusted based on the constructed two-dimensional coordinate grid. Here, there is a correspondence between coordinate positions in the two-dimensional coordinate grid and map positions in the virtual map.
In one example, the unoccupied grid intersection closest to the map position corresponding to an initial display position is determined in the two-dimensional coordinate grid (the first target grid intersection), and the map position in the virtual map corresponding to the coordinate position of this intersection is determined, according to the correspondence, as the final display position.
In another example, the unoccupied grid intersection closest to the map position corresponding to the initial display position — in the direction of the map position corresponding to the actual position, in the virtual scene, of the virtual object corresponding to that initial display position — is determined in the two-dimensional coordinate grid (the second target grid intersection), and the map position in the virtual map corresponding to the coordinate position of this intersection is determined, according to the correspondence, as the final display position. That is, the initial display position is adjusted by moving it toward the map position corresponding to the actual position of the virtual object in the virtual scene.
Here, the two-dimensional coordinate grid refers to a plane rectangular coordinate system composed of two axes perpendicular to each other on the same plane and having a common origin, the plane rectangular coordinate system corresponding in proportion to the virtual map. Wherein the map position in the virtual map corresponds to the coordinate position of the planar rectangular coordinate system. Specifically, when constructing a two-dimensional coordinate grid, the system can directly construct on the virtual map according to the corresponding relation with the virtual map; the construction may also be performed based on the correspondence between the actual virtual scenes corresponding to the virtual map.
In actual adjustment, determining an unoccupied grid intersection point which is closest to a coordinate position corresponding to an initial display position in a two-dimensional coordinate grid as an adjusted final display position; the grid intersection point which is closest to the coordinate position indicated by the initial display position in the direction corresponding to the actual position of the at least one virtual object in the virtual environment and is unoccupied may also be determined as the adjusted final display position.
Specifically, after each player uploads position information, the system can mark the uploaded position information and determine which grid intersections are occupied, so that when adjusting the positions of the character identifiers the system can avoid the occupied grid intersections, thereby saving system resources and reducing program processing time.
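A minimal sketch of the first adjustment mode, under assumed names and an assumed unit grid spacing: each initial display position is snapped to the nearest unoccupied grid intersection, and that intersection is marked as occupied so later markers avoid it.

```python
import math

def snap_to_grid(initial_positions, grid_w, grid_h):
    """Snap each (x, y) initial display position to the nearest unoccupied
    intersection of a grid_w x grid_h unit grid; returns final positions."""
    occupied = set()
    finals = []
    for x, y in initial_positions:
        best, best_d = None, float("inf")
        # brute-force scan of all intersections; fine for small grids
        for gx in range(grid_w + 1):
            for gy in range(grid_h + 1):
                if (gx, gy) in occupied:
                    continue
                d = math.hypot(gx - x, gy - y)
                if d < best_d:
                    best, best_d = (gx, gy), d
        occupied.add(best)  # reserve the intersection for this marker
        finals.append(best)
    return finals
```

Because occupied intersections are skipped, two markers reported in almost the same place land on distinct, adjacent intersections instead of coinciding.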
The second adjustment mode: the character identifiers are displayed at their final display positions in the position mark interface, and the character identifier of each virtual object is connected by a line to its corresponding initial display position.
The specific implementation is as follows: the initial display position of a character identifier in the position mark interface is adjusted, and the adjusted position is determined as the final display position; the initial display position before adjustment is connected by a line to the final display position after adjustment, and the final display position and the connecting line are displayed in the position mark interface. The end of the connecting line away from the final display position is the initial display position of the character identifier, but the initial display position itself is not displayed; the initial display position of the character identifier in the position mark interface can thus be indicated by the final display position of the character identifier together with the connecting line.
Preferably, when adjusting the initial display position of a character identifier in the position mark interface, the following conditions may be imposed: first, the connecting lines between the final display position and the initial display position of any two character identifiers do not cross; second, the final display positions of any two character identifiers do not overlap.
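The first condition (connecting lines must not cross) can be checked with a standard segment-intersection test based on orientation predicates; a sketch under assumed names, not prescribed by the application:

```python
def _orient(a, b, c):
    """Sign of the cross product (b-a) x (c-a): >0 left turn, <0 right turn, 0 collinear."""
    v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (v > 0) - (v < 0)

def segments_cross(p1, p2, q1, q2):
    """True if segments p1-p2 and q1-q2 properly cross (endpoints of each
    segment lie strictly on opposite sides of the other segment)."""
    o1, o2 = _orient(p1, p2, q1), _orient(p1, p2, q2)
    o3, o4 = _orient(q1, q2, p1), _orient(q1, q2, p2)
    return o1 * o2 < 0 and o3 * o4 < 0
```

A layout routine could run this test over every pair of (initial position, final position) connecting lines and retry the adjustment whenever a crossing is found.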
Here, it should be understood that when each initial display position is adjusted based on the constructed two-dimensional coordinate grid, the position adjustment is relatively fine, whereas in the second adjustment mode described above, the distance between the adjusted final display position and the corresponding initial display position may be relatively large, so that crossings between connecting lines and overlaps between final display positions can be avoided.
It should be understood that in the adjustment modes listed above, the initial display positions are adjusted so that two adjacent character identifiers do not block each other; however, the present application is not limited thereto. The display manner of the character identifiers may instead be changed so that two adjacent character identifiers do not block each other, or the two approaches may be combined — that is, the initial display positions are adjusted while the display manner of the character identifiers is also changed.
In this embodiment of the present application, the display manner for the character identifier may be changed in the position mark interface by any one of the following manners:
The first way: in response to an enlarged-display skill trigger operation on the position mark interface, the target area in the virtual map and/or the character identifier of each virtual object in the target area is displayed enlarged.
Here, the enlarged display skill may enlarge only the target area in the virtual map, may enlarge only the character identifier of the virtual object, or may simultaneously enlarge the target area in the virtual map and the character identifier of the virtual object.
The second way is: the display size of the character identifier of each virtual object in the position-marker interface is reduced.
Here, in response to the reduced display skill trigger operation for the character identification of each virtual object, the character identification of each virtual object is reduced and displayed so that the player can see the identity information indicated by the character identification in the position mark interface clearly.
The third way: the representation of the identity information displayed in the character identifier for indicating the identity of the virtual object is changed.
Here, the virtual object identity may be indicated with preset characters, wherein the preset characters may be numerals 1, 2, 3 …, letters a, b, c …; the virtual object identity may also be indicated in a predetermined fill color, wherein an identity information comparison list may be added when the virtual object identity is indicated in the fill color.
In a preferred embodiment, the character identifiers are also displayed in the position mark interface as follows: a display policy control is displayed on the position mark interface; in response to a trigger operation on the display policy control, the display mode for the character identifiers is determined; and the character identifier of each virtual object is displayed in the position mark interface in the determined display mode.
The display policy control is used to present the available display modes for the character identifiers; in response to a trigger operation on the display policy control, the display modes shown in the control can be opened, and any one of them can be selected.
Here, the display manner may include at least one of:
One display mode: the coverage relationship of the edges of the display areas occupied by the character identifiers of two adjacent virtual objects is determined according to the time order in which the virtual objects reported their position mark information.
Here, the character identifiers are displayed according to the time order of reporting the position mark information: the character identifier of a virtual object that reported earlier is displayed first, and that of a virtual object that reported later is displayed afterwards, with the display area occupied by the later identifier placed below that of the earlier identifier. In this way, the display area occupied by the character identifier of the first-reporting virtual object is displayed completely, and the character identifiers of the other virtual objects are displayed below it in turn, according to the time order of reporting.
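This time-ordered coverage rule amounts to assigning z-order by report timestamp, with earlier reports drawn on top; a sketch with assumed field names:

```python
def z_order_by_report_time(markers):
    """markers: list of dicts with 'player' and 'report_time' keys.
    Returns the draw order from bottom to top: later reports are drawn
    first (bottom), so the earliest report ends up fully visible on top."""
    return sorted(markers, key=lambda m: m["report_time"], reverse=True)
```

The identity-type display mode described below would be the same sketch with the sort key replaced by a predefined character-type priority.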
Another display mode: the coverage relationship of the edges of the display areas occupied by the character identifiers of two adjacent virtual objects is determined according to the identity type of each virtual object.
Here, the identity type may refer to the character type corresponding to the virtual object model, and a display priority is defined in advance for each character type, so that the character identifiers are displayed according to this display priority. In this way, the character identifiers of the virtual objects can be displayed according to the predefined display priorities, thereby determining the coverage relationship of the edges of the display areas occupied by the character identifiers of two adjacent virtual objects.
According to the virtual map display method described above, the position mark information of each player in the game is uploaded, and the uploading process is fast, so the position information of each player is transmitted and displayed in an efficient and clear manner. Meanwhile, the position mark information can be adjusted automatically, preventing blocking between the character identifiers displayed in the virtual map. This reduces the amount of information players must state during the game discussion stage, effectively lightens the player's memory burden, lets players clearly check the positions of other players on the virtual map, assists players in faster reasoning and discussion during the discussion stage, and effectively improves the players' game efficiency.
Among the available functions, the action stage typically provides the first to eighth functions below, and the discussion stage typically provides the first, second and seventh functions.
First, the present embodiment provides a display function of the virtual map. In response to a movement operation for the first virtual object, the first virtual object is controlled to move in the first virtual scene, and the displayed range of the first virtual scene in the graphical user interface changes correspondingly with the movement of the first virtual object. In response to a preset trigger event, the virtual scene displayed in the graphical user interface is switched from the first virtual scene to a second virtual scene, where the second virtual scene contains at least one second virtual object.
In this embodiment, the description is from the perspective of the first virtual object having the target identity. A first virtual scene is first provided in the graphical user interface, as shown in fig. 6, in which virtual objects can move, can also perform game tasks, or perform other interactive operations. The user issues a movement operation for the first virtual object, and controls the first virtual object to move in the first virtual scene, and in most cases, the first virtual object is located at a position of a relative center of the first virtual scene range displayed in the graphical user interface. The virtual camera in the first virtual scene moves following the movement of the first virtual object, thereby causing the first virtual scene range displayed in the graphical user interface to correspondingly change according to the movement of the first virtual object.
The virtual objects participating in the current game are in the same first virtual scene. Therefore, during the movement of the first virtual object, if the first virtual object approaches other virtual objects, those virtual objects may enter the first virtual scene range displayed in the graphical user interface; these virtual objects are virtual characters controlled by other players. As shown in fig. 6, two nearby second virtual objects are displayed in the first virtual scene range. In addition, a movement control for controlling the movement of the first virtual object, a plurality of attack controls, and a discussion control are displayed in the graphical user interface, wherein the discussion control can be used for controlling the virtual object to enter the second virtual scene.
When the player controls the first virtual object to move in the first virtual scene, the target virtual object can be determined from a plurality of second virtual objects in the surviving state, which can be understood as the other surviving virtual objects in the current game apart from the first virtual object. Specifically, the user may determine the target virtual object according to the position, behavior, and the like of each second virtual object, for example, by selecting a virtual object that is relatively isolated and not easily noticed by other virtual objects during an attack. After the target virtual object is determined, the first virtual object can be controlled to move from its initial position to the position of the target virtual object in the first virtual scene and to execute the specified operation on the target virtual object, whereupon the target virtual object enters the target state.
And after the preset trigger event is triggered, a second virtual scene is displayed in the graphical user interface. For example, the trigger event may be a specific trigger operation, which any virtual object in the surviving state may perform; for example, in fig. 6, by triggering the discussion control, the second virtual scene can be displayed in the graphical user interface, so that the virtual scene is switched from the first virtual scene to the second virtual scene, and all virtual objects in the current game move from the first virtual scene to the second virtual scene. The second virtual scene includes, in addition to the first virtual object or the character icon of the first virtual object, at least one of a character model of the second virtual object or a character icon of the second virtual object, where the character icon may be an avatar, a name, or the like of the virtual object.
In the second virtual scene, the virtual objects in the surviving state have the right to speak in discussion and to vote, but because the target virtual object has entered the target state, at least part of the interaction modes configured for the target virtual object in the second virtual scene are in a limited use state. The interaction modes may include discussion interaction, voting interaction, and the like; the limited use state may mean that a certain interaction mode is unavailable, that a certain interaction mode is unavailable for a certain period of time, or that the number of uses of a certain interaction mode is limited to a specified number.
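As a minimal sketch of how such a limited use state might be modeled (the class, field names, and limit values are illustrative assumptions, not part of the disclosure):

```python
from typing import Optional

# Hypothetical model of the "limited use state": an interaction mode can be
# blocked outright, blocked until a point in game time, or capped at a
# specified number of uses. All names and values here are illustrative.
class InteractionLimit:
    def __init__(self, blocked: bool = False, blocked_until: float = 0.0,
                 max_uses: Optional[int] = None):
        self.blocked = blocked              # mode not available at all
        self.blocked_until = blocked_until  # unavailable before this game time
        self.max_uses = max_uses            # cap on uses (None = no cap)
        self.uses = 0

    def available(self, now: float) -> bool:
        if self.blocked or now < self.blocked_until:
            return False
        if self.max_uses is not None and self.uses >= self.max_uses:
            return False
        return True

# A target-state object: discussion blocked entirely, voting capped at zero.
limits = {"discussion": InteractionLimit(blocked=True),
          "voting": InteractionLimit(max_uses=0)}
```

Under this sketch, a surviving non-target object would simply carry default (unrestricted) `InteractionLimit` entries.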
As shown in fig. 7, in the second virtual scenario, a plurality of virtual objects in a surviving state are included, including a first virtual object, the first virtual object may send discussion information through a click input control and a voice translation control on the right side, the discussion information sent by the virtual object may be displayed on a discussion information panel, and the discussion information may include who initiates a discussion, who is attacked, a location of the attacked virtual object, a location of each virtual object when initiating the discussion, and so on.
The user can vote for a certain virtual object in the second virtual scene by clicking that virtual object, upon which a voting button for the virtual object can be displayed near it. Alternatively, the user can click the vote-abstain button to give up the current voting right.
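The voting-and-abstention flow above can be sketched as a simple tally (the tie-handling rule is an assumption for illustration; the disclosure does not specify one):

```python
from collections import Counter
from typing import Optional

def tally_votes(votes: dict) -> Optional[str]:
    """votes maps each voter id to a target id, or to None for an abstention.

    Returns the id of the object with the most votes, or None when everyone
    abstained or the top count is tied (assumed rule: a tie selects no one).
    """
    cast = [target for target in votes.values() if target is not None]
    if not cast:
        return None
    ranked = Counter(cast).most_common()
    if len(ranked) > 1 and ranked[1][1] == ranked[0][1]:
        return None  # tied top count
    return ranked[0][0]

print(tally_votes({"p1": "p3", "p2": "p3", "p3": None}))  # p3
```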
In response to a touch operation on the function control, a position mark interface is displayed in the graphical user interface, and the character identification of the at least one second virtual object and/or the first virtual object is displayed in the position mark interface according to the position mark information reported by the at least one second virtual object and/or the first virtual object. For a specific implementation of this process, reference is made to the above-described embodiments.
Second, the present embodiment provides an information display function of a virtual object. Displaying a first virtual scene in a graphical user interface and a first virtual object located in the first virtual scene; in response to a movement operation for the first virtual object, the first virtual object is controlled to move in the first virtual scene, and the first virtual scene range displayed in the graphical user interface is controlled to correspondingly change according to the movement of the first virtual object.
In this embodiment, the description is from the perspective of the first virtual object having the target identity. A first virtual scene is first provided in the graphical user interface, as shown in fig. 6, in which virtual objects can move, can also perform game tasks, or perform other interactive operations. The user issues a movement operation for the first virtual object, and controls the first virtual object to move in the first virtual scene, and in most cases, the first virtual object is located at a position of a relative center of the first virtual scene range displayed in the graphical user interface. The virtual camera in the first virtual scene moves following the movement of the first virtual object, thereby causing the first virtual scene range displayed in the graphical user interface to correspondingly change according to the movement of the first virtual object.
The virtual objects participating in the current game are in the same first virtual scene. Therefore, during the movement of the first virtual object, if the first virtual object approaches other virtual objects, those virtual objects may enter the first virtual scene range displayed in the graphical user interface; these virtual objects are characters controlled by other players or virtual characters controlled by non-players. As shown in fig. 6, two nearby second virtual objects are displayed in the first virtual scene range. In addition, a movement control for controlling the movement of the first virtual object, a plurality of attack controls, and a discussion control are displayed in the graphical user interface, wherein the discussion control can be used for controlling the virtual object to enter the second virtual scene.
When the user controls the first virtual object to move in the first virtual scene, the target virtual object can be determined from at least one second virtual object in the surviving state and/or at least one third virtual object in the dead state; the at least one second virtual object in the surviving state can be understood as the other surviving virtual objects in the current game apart from the first virtual object. Specifically, the user may determine the target virtual object according to the position, behavior, and the like of each second virtual object, for example, by selecting a virtual object that is relatively isolated and not easily noticed by other virtual objects during an attack, or a virtual object whose identity information is inferred to be suspicious based on its position, behavior, and the like. After the target virtual object is determined, the first virtual object can be controlled to move from its initial position to the position where the target virtual object is located in the first virtual scene, or the target virtual object can be selected so as to execute the specified operation on it, whereupon the target virtual object enters the target state.
For example, remark prompting information of at least one second virtual object can be displayed in the graphical user interface in response to a remark adding operation, and in response to a trigger operation on the remark prompting information, remark information is added to the target virtual object among the displayed at least one second virtual object. At this time, the remark information may be displayed around the target virtual object in the first virtual scene; that is, when the first virtual object moves in the first virtual scene according to the movement operation and the first virtual scene range displayed in the graphical user interface correspondingly changes with that movement, if the target virtual object appears within the preset range of the first virtual object, the player can see the target virtual object and its remark information through the first virtual scene presented in the graphical user interface.
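A minimal sketch of the "remark shown only within a preset range" behavior (the range value, function, and field names are illustrative assumptions):

```python
import math

# Sketch: remark information is shown beside a target only when that target
# falls within a preset range of the first virtual object.
PRESET_RANGE = 8.0  # assumed range value

def visible_remarks(player_pos, remarks, positions):
    """player_pos: (x, y); remarks: {object_id: text}; positions: {object_id: (x, y)}."""
    shown = {}
    for oid, text in remarks.items():
        if math.dist(player_pos, positions[oid]) <= PRESET_RANGE:
            shown[oid] = text  # target is inside the preset range
    return shown
```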
And after the preset trigger event is triggered, a second virtual scene is displayed in the graphical user interface. For example, the trigger event may be a specific trigger operation, which any virtual object in the surviving state may perform; for example, in fig. 6, by triggering the discussion control, the second virtual scene can be displayed in the graphical user interface, so that the virtual scene is switched from the first virtual scene to the second virtual scene, and all virtual objects in the current game move from the first virtual scene to the second virtual scene. The second virtual scene includes, in addition to the first virtual object or the character icon of the first virtual object, at least one of a character model of the second virtual object or a character icon of the second virtual object, where the character icon may be an avatar, a name, or the like of the virtual object.
In the second virtual scene, the virtual objects in the surviving state have the right to speak in discussion and to vote, and if the target virtual object has entered the target state (for example, remark information has been added), the current player can see the target virtual object and its remark information through the second virtual scene presented in the graphical user interface. In addition, interaction modes are configured in the second virtual scene, which may include discussion interaction, voting interaction, remark interaction, and the like; some of these may be in a limited use state, meaning that a certain interaction mode is unavailable, that it is unavailable for a certain period of time, or that its number of uses is limited to a specified number. Illustratively, a virtual character in the dead state is restricted from using the voting interaction, and a virtual character in the dead state whose identity is known is restricted from using the remark interaction.
As shown in fig. 7, in the second virtual scenario, a plurality of virtual objects in a surviving state are included, including a first virtual object, the first virtual object may send discussion information through a click input control and a voice translation control on the right side, the discussion information sent by the virtual object may be displayed on a discussion information panel, and the discussion information may include who initiates a discussion, who is attacked, a location of the attacked virtual object, a location of each virtual object when initiating the discussion, and so on.
The user can vote for a certain virtual object in the second virtual scene by clicking that virtual object, upon which a voting button for the virtual object can be displayed near it. Alternatively, the user can click the vote-abstain button to give up the current voting right. In addition, while the voting button is displayed, a remark control may be displayed, so that remark information can be added to the clicked virtual object based on a touch operation on the remark control.
In addition, a remark list can be displayed in the second virtual scene, remark prompt information is displayed in the remark list, and remark information is added to the displayed target virtual object in response to triggering operation for the remark prompt information. For a specific implementation of this process, reference is made to the above-described embodiments.
Third, the present embodiment provides a control function of a game progress, in which, in an action phase, at least part of a first virtual scene and a first virtual object located in the first virtual scene in the action phase are displayed in a graphical user interface; acquiring skill configuration parameters of a first virtual object to determine additional skills newly added by the first virtual object on the basis of default skills of the role; the default skills are assigned skills according to the identity attribute of the first virtual object; when the completion progress of the virtual tasks in the game stage reaches a progress threshold, controlling the first virtual object to unlock the additional skills, and providing an additional skill control for triggering the additional skills on the basis of providing a default skill control for triggering the default skills in the graphical user interface; responding to a preset trigger event, and controlling the graphical user interface to display a second virtual scene corresponding to the discussion stage; the second virtual scene includes at least one of the following: the second virtual object, the role icon of the second virtual object, the first virtual object and the role icon of the first virtual object; the discussion phase is configured to determine a game state of the at least one second virtual object or the first virtual object based on a result of the discussion phase. For a specific implementation of this process, reference is made to the following examples.
In the embodiment of the application, the description is from the perspective of a first virtual object having a first character attribute. A first virtual scene is first provided in the graphical user interface, as shown in fig. 6, in which the first virtual object may move, perform virtual tasks, or carry out other interactive operations. The user issues a movement operation for the first virtual object and controls the first virtual object to move in the first virtual scene; in most cases, the first virtual object is located at a relatively central position of the first virtual scene range displayed in the graphical user interface. The virtual camera in the first virtual scene moves following the movement of the first virtual object, thereby causing the first virtual scene range displayed in the graphical user interface to correspondingly change according to the movement of the first virtual object.
When the user controls the first virtual object to move in the first virtual scene, the additional skills newly added to the first virtual object on the basis of the default skills of the character are determined according to the skill parameters of the first virtual object, wherein the additional skills can include at least one of the following: an identity wager skill, an identity verification skill, a guiding skill, and a task doubling skill. Meanwhile, the completion progress of the virtual task performed together with a plurality of other virtual objects having the same character attribute (the first character attribute) in the current game stage is determined and shown on a displayed progress bar. When it is determined that the completion progress of the virtual task in the game stage reaches the progress threshold, the first virtual object can be controlled to unlock the additional skills and play the game using them; for example, the guiding skill can be used in the action phase to determine a virtual object in the target state (such as death) within a preset distance threshold from the first virtual object in the first virtual scene, control the first virtual object to move to the position of that virtual object, and immediately initiate a discussion.
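The threshold-gated unlocking can be sketched as follows (the threshold value and function names are assumptions; the skill names follow the examples in the text):

```python
# Sketch of unlocking additional skills once the shared task progress reaches
# a threshold; below the threshold only the default skills are usable.
PROGRESS_THRESHOLD = 0.6  # assumed threshold value

def available_skills(default_skills, additional_skills, progress):
    """Additional skills become usable only at or above the threshold."""
    skills = list(default_skills)
    if progress >= PROGRESS_THRESHOLD:
        skills.extend(additional_skills)
    return skills

print(available_skills(["report"], ["guiding", "identity_verification"], 0.75))
```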
And after the preset trigger event is triggered, a second virtual scene is displayed in the graphical user interface. For example, the trigger event may be a specific trigger operation, which any virtual object in the surviving state may perform; for example, as shown in fig. 6, by triggering the discussion control, a second virtual scene can be displayed in the graphical user interface, so that the virtual scene is switched from the first virtual scene to the second virtual scene, and all virtual objects in the current game move from the first virtual scene to the second virtual scene. The second virtual scene includes, in addition to the first virtual object and the object icon of the first virtual object, at least one second virtual object or an object icon of the second virtual object, where the object icon may be an avatar, a name, and the like of the virtual object.
In the second virtual scene, as shown in fig. 7, the virtual objects in the surviving state have the right to speak in discussion and to vote. The second virtual scene includes a plurality of virtual objects in the surviving state, including the first virtual object; the first virtual object can send discussion information through the click input control and the voice translation control on the right side, the discussion information sent by the virtual objects can be displayed on the discussion information panel, and the discussion information can include who initiated the discussion, who was attacked, the location of the attacked virtual object, the location of each virtual object when the discussion was initiated, and the like.
The user can click a certain virtual object in the second virtual scene, upon which a voting button for that virtual object can be displayed near it so that the virtual object can be voted for. Before voting, the user can control the first virtual object to use the corresponding unlocked additional skills to check a primarily suspected virtual object; for example, the first virtual object can use the identity verification skill to check the identity of the suspected virtual object and, according to the check result, decide whether to vote for that virtual object, thereby improving voting accuracy. Of course, the user can also click the vote-abstain button to give up the current voting right.
Fourth, the present embodiment provides another display function of a virtual map. In response to a movement operation, the virtual character is controlled to move in the virtual scene, and the virtual scene to which the virtual character has currently moved is displayed in the graphical user interface.
in the present embodiment, the description is from the perspective of the virtual object controlled by the player. A virtual scene is provided in the graphical user interface, as shown in fig. 6, in which virtual scene (e.g., the first virtual scene shown in fig. 6), a virtual character controlled by the player (e.g., the first virtual character and/or the second virtual character shown in fig. 6) can move, can also perform game tasks, or perform other interactive operations. In response to a movement operation issued by the player, the virtual object is controlled to move in the virtual scene, and in most cases, the virtual object is located at a position relatively centered in the virtual scene range displayed in the graphical user interface. The virtual camera in the virtual scene moves along with the movement of the virtual object, so that the virtual scene displayed in the graphical user interface correspondingly changes along with the movement of the virtual object, and the virtual scene to which the virtual character moves currently is displayed in the graphical user interface.
The virtual objects participating in the current game are in the same virtual scene, so that if a virtual object approaches other virtual objects during its movement, those virtual objects may enter the virtual scene range displayed in the graphical user interface; these virtual objects are characters controlled by other players. As shown in fig. 7, a plurality of virtual objects are displayed in the virtual scene range. In addition, a movement control for controlling movement of the virtual object, a plurality of attack controls, and a discussion control are displayed in the graphical user interface, the discussion control being operable to control the virtual object to enter the second virtual scene shown in fig. 7.
In response to a map display operation issued by the user, a first virtual map is displayed superimposed on the virtual scene displayed by the graphical user interface. For example, in response to the player's touch operation on a scene thumbnail (the scene map shown in fig. 6), the first virtual map is displayed superimposed over the virtual scene; for another example, in response to a control operation that controls the virtual character to perform a second specific action, the first virtual map is displayed superimposed over the virtual scene. Here, the first virtual map includes at least the current position of the first virtual character, the position of each first virtual area in the virtual scene, the position of the connected area, and the like. When the map switching condition is triggered, the first virtual map displayed superimposed on the virtual scene in the graphical user interface is switched to a second virtual map corresponding to the virtual scene, wherein the transparency of at least part of the map area of the second virtual map is higher than that of the corresponding map area of the first virtual map, so that the degree of information occlusion of the virtual map after switching is lower than before switching. For example, the map switching condition may be a specific trigger operation that a virtual object in the surviving state may perform; for example, in response to a control operation for controlling the virtual object to perform a first specific action, the first virtual map displayed superimposed on the virtual scene is switched to the second virtual map corresponding to the virtual scene; or, by triggering a map switching key, the first virtual map displayed superimposed on the virtual scene is switched to the second virtual map corresponding to the virtual scene.
When the map switching condition is triggered, the first virtual map can be switched to the second virtual map in a specific switching manner. For example, the first virtual map displayed superimposed on the virtual scene is replaced with the second virtual map corresponding to the virtual scene; or, according to a first transparency change threshold, the first virtual map is adjusted to a non-visible state in the current virtual scene, and the first virtual map displayed superimposed on the virtual scene is replaced with the second virtual map corresponding to the virtual scene; or, the first virtual map displayed superimposed on the virtual scene is removed, and the second virtual map is displayed superimposed on the virtual scene according to a second transparency change threshold; or, the transparency of the first virtual map is adjusted according to a third transparency change threshold while the second virtual map is displayed superimposed in the virtual scene according to a fourth transparency change threshold, until the first virtual map is in a non-visible state in the current virtual scene.
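The last switching manner can be sketched as a per-frame crossfade (the step sizes are illustrative assumptions; 0.0 stands for non-visible and 1.0 for fully opaque):

```python
# Sketch: per frame, the first map's transparency is stepped toward the
# non-visible state while the second map is stepped toward fully shown.
def crossfade_step(alpha_first, alpha_second, step_out=0.1, step_in=0.1):
    """Returns the next frame's (first map alpha, second map alpha) pair."""
    alpha_first = max(0.0, alpha_first - step_out)   # fade first map out
    alpha_second = min(1.0, alpha_second + step_in)  # fade second map in
    return alpha_first, alpha_second

a_first, a_second = 1.0, 0.0
while a_first > 0.0:  # run until the first map is non-visible
    a_first, a_second = crossfade_step(a_first, a_second)
```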
Fifth, the present embodiment provides a target attack function in a game. Controlling the first virtual object to move in the first virtual scene in response to the movement operation for the first virtual object, and controlling the first virtual scene range displayed in the graphical user interface to correspondingly change according to the movement of the first virtual object; and controlling the temporary virtual object to move from an initial position to a position where a target virtual object is located in a first virtual scene and executing specified operation on the target virtual object so that the target virtual object enters a target state, wherein the temporary virtual object is a virtual object controlled by the first virtual object with a target identity, the target identity is an identity attribute allocated at the beginning of a game, the target virtual object is a virtual object determined from a second virtual object in a plurality of survival states, the target state is a state that at least part of interaction modes configured by the target virtual object in the second virtual scene are limited to be used, the second virtual scene is a virtual scene displayed in a graphical user interface in response to a preset trigger event, and the second virtual scene comprises at least one second virtual object or object icon of the second virtual object.
In this embodiment, the description is from the perspective of the first virtual object having the target identity. A first virtual scene is first provided in the graphical user interface, as shown in fig. 6, in which virtual objects can move, can also perform game tasks, or perform other interactive operations. The user issues a movement operation for the first virtual object, and controls the first virtual object to move in the first virtual scene, and in most cases, the first virtual object is located at a position of a relative center of the first virtual scene range displayed in the graphical user interface. The virtual camera in the first virtual scene moves following the movement of the first virtual object, thereby causing the first virtual scene range displayed in the graphical user interface to correspondingly change according to the movement of the first virtual object.
The virtual objects participating in the current game are in the same first virtual scene. Therefore, during the movement of the first virtual object, if the first virtual object approaches other virtual objects, those virtual objects may enter the first virtual scene range displayed in the graphical user interface; these virtual objects are characters controlled by other players. As shown in fig. 6, two nearby second virtual objects are displayed in the first virtual scene range. In addition, a movement control for controlling the movement of the first virtual object, a plurality of attack controls, and a discussion control are displayed in the graphical user interface, wherein the discussion control can be used for controlling the virtual object to enter the second virtual scene.
The temporary virtual object is a virtual object controlled by a first virtual object with a target identity, the target identity is an identity attribute allocated at the beginning of a game, the target virtual object is a virtual object determined from second virtual objects in a plurality of survival states, the target state is a state that at least part of interaction modes configured by the target virtual object in a second virtual scene are limited to be used, the second virtual scene is a virtual scene displayed in a graphical user interface in response to a preset trigger event, and the second virtual scene comprises at least one second virtual object or role icon of the second virtual object.
In the initial state, the temporary virtual object is not controlled by the user; however, under certain specific conditions, the first virtual object with the target identity, or the user corresponding to that first virtual object, has the authority to control the temporary virtual object. The temporary virtual object can be controlled to move from its initial position to the position of the target virtual object in the first virtual scene and to execute the specified operation on the target virtual object. The initial position may be the position where the temporary virtual object is located when it is not controlled, and the specified operation may be an attack operation; after the specified operation is performed on the target virtual object, a specific effect is produced on the target virtual object, that is, the target virtual object is brought into the target state.
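A minimal sketch of this attack flow (class and attribute names are assumptions for illustration):

```python
# Sketch: the temporary virtual object moves from its initial position to the
# target's position, then the specified operation puts the target into the
# target state.
class VirtualObject:
    def __init__(self, pos):
        self.pos = pos
        self.in_target_state = False

def perform_specified_operation(temp_obj, target):
    temp_obj.pos = target.pos       # move to where the target is located
    target.in_target_state = True   # specified operation takes effect

temp = VirtualObject(pos=(0, 0))          # initial, uncontrolled position
target = VirtualObject(pos=(12, 7))
perform_specified_operation(temp, target)
```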
When the user controls the first virtual object to move in the first virtual scene, the target virtual object can be determined from a plurality of second virtual objects in the surviving state, which can be understood as the other surviving virtual objects in the current game apart from the first virtual object. Specifically, the user may determine the target virtual object according to the position, behavior, and the like of each second virtual object, for example, by selecting a virtual object that is relatively isolated and not easily noticed by other virtual objects during an attack. After the target virtual object is determined, the temporary virtual object can be controlled to move from the initial position to the position where the target virtual object is located in the first virtual scene and to execute the specified operation on the target virtual object, whereupon the target virtual object enters the target state.
And after the preset trigger event is triggered, a second virtual scene is displayed in the graphical user interface. For example, the trigger event may be a specific trigger operation, which any virtual object in the surviving state may perform; for example, in fig. 6, by triggering the discussion control, the second virtual scene can be displayed in the graphical user interface, so that the virtual scene is switched from the first virtual scene to the second virtual scene, and all virtual objects in the current game move from the first virtual scene to the second virtual scene. The second virtual scene includes, in addition to the first virtual object or the object icon of the first virtual object, at least one second virtual object or an object icon of the second virtual object, where the object icon may be an avatar, a name, or the like of the virtual object.
In the second virtual scene, the virtual objects in the surviving state have the right to speak in discussion and to vote, but because the target virtual object has entered the target state, at least part of the interaction modes configured for the target virtual object in the second virtual scene are in a limited use state. The interaction modes may include discussion interaction, voting interaction, and the like; the limited use state may mean that a certain interaction mode is unavailable, that a certain interaction mode is unavailable for a certain period of time, or that the number of uses of a certain interaction mode is limited to a specified number.
As shown in fig. 7, the second virtual scene includes a plurality of virtual objects in the surviving state, including the first virtual object; the first virtual object can send discussion information through the click input control and the voice translation control on the right side, the discussion information sent by the virtual objects can be displayed on the discussion information panel, and the discussion information can include who initiated the discussion, who was attacked, the location of the attacked virtual object, the location of each virtual object when the discussion was initiated, and the like.
The user can vote for a virtual object in the second virtual scene by clicking on it, and a voting button for that virtual object may be displayed near it. Alternatively, the user can click the abstain button to give up the current voting right.
With the in-game target attack method above, in the first virtual scene the first virtual object having the target identity can control a temporary virtual object to perform the designated operation on the target virtual object, without the first virtual object having to perform the designated operation on the target virtual object directly.
Sixth, this embodiment provides an in-game interactive data processing function: in response to a touch operation for the movement control area, the first virtual object is controlled to move in the virtual scene, and the virtual scene range displayed by the graphical user interface is controlled to change according to the movement of the first virtual object; it is determined that the first virtual object has moved into the response area of a target virtual object in the virtual scene, where the target virtual object is a virtual object arranged in the virtual scene with which player-controlled virtual objects can interact; and in response to a control instruction triggered by a touch operation, the display state of the first virtual object is switched to a stealth state, and a mark referring to the first virtual object is displayed in the area of the target virtual object.
The movement control area is used for controlling the virtual object to move in the virtual scene and may be a virtual joystick, through which both the movement direction and the movement speed of the virtual object can be controlled.
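A minimal sketch of how such a virtual joystick can control both direction and speed: the drag vector's direction sets the movement direction, and its magnitude (clamped to the stick radius) scales the speed. The function name and parameters are illustrative assumptions, not from the patent.

```python
import math

def joystick_to_velocity(drag_dx, drag_dy, stick_radius, max_speed):
    """Map a joystick drag vector to a movement velocity vector."""
    dist = math.hypot(drag_dx, drag_dy)
    if dist == 0:
        return (0.0, 0.0)  # stick at rest: no movement
    # Deflection in [0, 1]; dragging past the rim is clamped to full speed.
    scale = min(dist, stick_radius) / stick_radius
    speed = scale * max_speed
    return (drag_dx / dist * speed, drag_dy / dist * speed)
```

Half deflection thus yields half speed in the drag direction, which matches the described behavior of one control area governing both direction and speed.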
The virtual scene displayed in the graphical user interface is mainly obtained by the virtual camera capturing an image of the virtual scene range corresponding to the position of the virtual object. During the movement of the virtual object, the virtual camera is generally set to follow the virtual object, so that the virtual scene range captured by the virtual camera also moves with the virtual object.
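The follow behavior above can be sketched as a per-frame easing step: the camera moves a fraction of the way toward the tracked object each frame, so the displayed scene range trails the object smoothly. The smoothing factor and names are illustrative assumptions.

```python
def follow(camera_pos, target_pos, smoothing=0.2):
    """One frame of camera follow: ease camera toward the tracked object."""
    cx, cy = camera_pos
    tx, ty = target_pos
    return (cx + (tx - cx) * smoothing, cy + (ty - cy) * smoothing)
```

With `smoothing=1.0` the camera locks rigidly to the object; smaller values give the softer trailing motion typical of mobile games.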
Some virtual objects with interaction functions may be arranged in the virtual scene; a player-controlled virtual object can interact with them, and the interaction can be triggered when the player-controlled virtual object is located in such an object's response area. The virtual scene may include at least one virtual object with an interaction function, and the target virtual object is any one of these.
The range of the response area of a virtual object may be preset. For example, it may be set according to the size of the virtual object, or according to the type of the virtual object, as actual requirements dictate. For instance, the response area of a vehicle-type virtual object may be set larger than the area the object occupies, while the response area of a prop-type virtual object may be set equal to the area it occupies.
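The per-type sizing rule above can be sketched as a type-to-scale lookup: vehicle-type objects get a response radius larger than their footprint, prop-type objects a radius equal to it. The type names and the 1.5 multiplier are assumptions for illustration.

```python
# Assumed per-type multipliers on the object's footprint radius.
RESPONSE_SCALE = {"vehicle": 1.5, "prop": 1.0}

def response_radius(obj_type, footprint_radius):
    """Radius of the response area for an interactive object."""
    return footprint_radius * RESPONSE_SCALE.get(obj_type, 1.0)

def in_response_area(player_pos, obj_pos, obj_type, footprint_radius):
    """True when the player-controlled object can trigger interaction."""
    dx = player_pos[0] - obj_pos[0]
    dy = player_pos[1] - obj_pos[1]
    return dx * dx + dy * dy <= response_radius(obj_type, footprint_radius) ** 2
```

Comparing squared distances avoids a square root per frame, a common choice in movement-driven proximity checks.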
The control instruction triggered by the touch operation may be a specific operation on a specific area or on a specific object. For example, the control instruction may be triggered by a double-click operation on the target virtual object; alternatively, an interaction control may be provided in the graphical user interface, and the control instruction may be triggered by a click operation on that interaction control. The interaction control may be provided after it is determined that the first virtual object has moved into the response area of a target virtual object in the virtual scene. Based on this, the method may further include: controlling the graphical user interface to display the interaction control of the target virtual object, where the control instruction triggered by the touch operation includes a control instruction triggered by touching the interaction control.
According to this embodiment of the invention, after the player triggers interaction with the virtual object, the display state of the player's virtual object can be switched to stealth display. Switching the display state and the operation does not affect game progress, increases interaction among players, makes the game more interesting, and improves the user experience.
In some embodiments, the target virtual object may be a virtual carrier, and the virtual carrier may be preset with a threshold value indicating its maximum number of bearers, that is, the maximum number of virtual objects that can hide on the virtual carrier. Based on this, once the virtual carrier is determined to be fully loaded, subsequent stealth-switch attempts by other players fail.
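The capacity rule can be sketched directly: a carrier holds at most the preset number of hidden objects, and once full, further stealth attempts are rejected. The class and method names are illustrative assumptions.

```python
class VirtualCarrier:
    """Hypothetical carrier with a preset maximum number of bearers."""
    def __init__(self, max_bearers):
        self.max_bearers = max_bearers  # the preset threshold value
        self.hidden = []                # ids of objects currently hidden

    def try_hide(self, obj_id):
        """Attempt a stealth switch onto this carrier."""
        if len(self.hidden) >= self.max_bearers:
            return False  # fully loaded: stealth switch fails
        self.hidden.append(obj_id)
        return True
```

The boolean result would let the interface show a failure hint to the player whose stealth switch was rejected.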
In some embodiments, an inference-based game may include two phases: an action phase and a voting phase. In the action phase, all surviving virtual objects (players in the game) can act, for example perform tasks or carry out sabotage. In the voting phase, players gather to discuss and vote on their reasoning results, for example inferring the identities of the virtual objects, where the tasks corresponding to different identities may differ. In such games, a skill may also be released in the area of the target virtual object to perform a task, sabotage, and so on. Based on this, after it is determined that the first virtual object has moved into the response area of the target virtual object in the virtual scene, the method may further include: in response to a skill release instruction triggered by a touch operation, taking at least one virtual object hidden in the area of the target virtual object as a candidate virtual object; and randomly determining one of the at least one candidate virtual object as the object acted on by the skill release instruction.
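The candidate-selection step above reduces to filtering the objects hidden in the target area and picking one at random as the object the skill acts on. This is a sketch; the data shape (dicts with a `stealth` flag) is an assumption.

```python
import random

def pick_skill_target(objects_in_area, rng=random):
    """Randomly choose the acting object for a released skill.

    objects_in_area: objects located in the target virtual object's area;
    only those in the stealth state are candidates.
    """
    candidates = [o for o in objects_in_area if o.get("stealth")]
    if not candidates:
        return None  # nobody hidden here: the skill finds no target
    return rng.choice(candidates)
```

Passing `rng` explicitly makes the random choice reproducible in tests by substituting a seeded `random.Random` instance.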
The virtual object that triggers the skill release instruction through the touch operation may be a character in the stealth state or a virtual object in a non-stealth state.
Seventh, this embodiment provides an in-game scene recording function: displaying a game interface on the graphical user interface, where the game interface includes at least part of a first virtual scene of a first game task stage and a first virtual object in the first virtual scene; in response to a movement operation for the first virtual object, controlling the virtual scene range displayed in the game interface to change according to the movement operation; in response to a recording instruction triggered in the first game task stage, capturing an image of a preset range of the current game interface; storing the image; and in response to a viewing instruction triggered in a second game task stage, displaying the image, where the second game task stage and the first game task stage are different task stages of the game in which the first virtual object currently participates.
In this embodiment, the description is given from the perspective of the first virtual object having the target identity. A first virtual scene is first provided in the graphical user interface, as shown in fig. 8-9; in this scene, virtual objects may move, perform game tasks, or perform other interactive operations. The user performs a movement operation for the first virtual object to control it to move in the first virtual scene; in most cases, the first virtual object is located roughly at the center of the first virtual scene range displayed in the graphical user interface. The virtual camera in the first virtual scene follows the movement of the first virtual object, so the first virtual scene range displayed in the graphical user interface changes correspondingly with the movement of the first virtual object.
The virtual objects participating in the current game are in the same first virtual scene, so during the movement of the first virtual object, if it approaches other virtual objects, those virtual objects may enter the first virtual scene range displayed in the graphical user interface; these virtual objects are characters controlled by other players. As shown in fig. 8-9, two nearby second virtual objects are displayed in the first virtual scene range. In addition, a movement control for controlling the movement of the first virtual object, a plurality of attack controls, and a discussion control are displayed in the graphical user interface, where the discussion control can be used to make the virtual objects enter the second virtual scene.
When the user controls the first virtual object to move in the first virtual scene, a target virtual object can be determined from a plurality of surviving second virtual objects, which can be understood as the surviving virtual objects in the current game other than the first virtual object. Specifically, the user may determine the target virtual object according to each second virtual object's position, behavior, and so on, for example selecting a relatively isolated virtual object that is unlikely to be noticed by other virtual objects during the attack. After the target virtual object is determined, the first virtual object can be controlled to move from its initial position to the position of the target virtual object in the first virtual scene and perform the designated operation on it, after which the target virtual object enters the target state.
After the preset trigger event is triggered, a second virtual scene is displayed in the graphical user interface. For example, the trigger event may be a specific trigger operation, which any virtual object in the surviving state may perform: in fig. 8-9, triggering the discussion control displays the second virtual scene in the graphical user interface, switching the displayed virtual scene from the first virtual scene to the second virtual scene, and all virtual objects in the current game move from the first virtual scene to the second virtual scene. In addition to the first virtual object or its object icon, the second virtual scene includes at least one second virtual object or an object icon of the second virtual object, where the object icon may be an avatar, a name, or the like of the virtual object.
In the second virtual scene, virtual objects in the surviving state have the right to speak in the discussion and to vote, but the target virtual object has entered the target state, so at least part of the interaction modes configured for the target virtual object in the second virtual scene are in a limited use state. The interaction modes may include discussion interaction, voting interaction, and the like; the limited use state may mean that a certain interaction mode is unavailable, is unavailable for a certain period of time, or is limited to a specified number of uses.
As shown in fig. 10, the second virtual scene includes a plurality of virtual objects in the surviving state, including the first virtual object. The first virtual object may send discussion information by clicking the input box and the voice conversion control on the right side; the discussion information sent by each virtual object may be displayed on a discussion information panel and may include who initiated the discussion, who was attacked, the location of the attacked virtual object, the location of each virtual object when the discussion was initiated, and so on.
The user can vote for a virtual object in the second virtual scene by clicking on it, and a voting button for that virtual object may be displayed near it. Alternatively, the user can click the abstain button to give up the current voting right.
In response to a touch operation for the function control, a position mark interface is displayed in the graphical user interface, and the character identifier of at least one second virtual object and/or the first virtual object is displayed in the position mark interface according to position mark information reported by the at least one second virtual object and/or the first virtual object.
Eighth, this embodiment provides a game operation function. A graphical user interface is provided through the terminal and includes a virtual scene and a virtual object, where the virtual scene includes a plurality of transmission (teleport) areas: a first transmission area and at least one second transmission area corresponding to the first transmission area and located at a different scene position. In response to a touch operation for the movement control area, the virtual object is controlled to move in the virtual scene; when the virtual object is determined to have moved into the first transmission area, a first group of direction controls corresponding to the at least one second transmission area is displayed in the movement control area; and in response to a trigger instruction for a target direction control in the first group of direction controls, the virtual scene displayed in the graphical user interface is changed from one including the first transmission area to one including the second transmission area corresponding to the target direction control.
In this embodiment, the graphical user interface includes at least part of a virtual scene and a virtual object, where the virtual scene includes a plurality of transmission areas: a first transmission area and at least one second transmission area at a different scene position corresponding to the first transmission area. The first transmission area may be the entrance area of a hidden region (such as a tunnel; this application takes a tunnel as an example), and the second transmission area may be the exit area of the hidden region.
The graphical user interface may include a movement control area, whose position on the graphical user interface may be custom-set according to actual requirements, for example within the player's thumb-reachable area, such as the lower left or lower right of the graphical user interface.
As shown in fig. 11, the user inputs a touch operation in the movement control area to control the virtual object to move in the virtual scene. If it is determined that the virtual object has moved into the first transmission area, a first group of direction controls (direction control 1 and direction control 2) corresponding to the at least one second transmission area is displayed in the movement control area, each direction control indicating the direction of a corresponding tunnel exit.
When the user inputs a trigger instruction for the target direction control (direction control 1) in the first group of direction controls, the virtual scene range displayed in the graphical user interface can be controlled to change from one including the first transmission area to one including the second transmission area corresponding to the target direction control; that is, through the trigger instruction for the target direction control, the graphical user interface currently displays the virtual scene range including the second transmission area corresponding to direction control 1. For a specific implementation of this process, refer to the above embodiments.
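The transmission flow above can be sketched as two small steps: entering the entrance area reveals one direction control per exit, and triggering a control moves the displayed scene range to that exit's position. The control ids and exit coordinates are illustrative assumptions.

```python
# Assumed mapping from direction-control id to the exit's scene position.
TUNNEL_EXITS = {"control_1": (120, 40), "control_2": (30, 200)}

def controls_for_area(in_entrance_area):
    """Direction controls shown in the movement control area."""
    return sorted(TUNNEL_EXITS) if in_entrance_area else []

def teleport(control_id):
    """New center of the displayed virtual scene range after triggering."""
    return TUNNEL_EXITS[control_id]
```

Hiding the controls whenever the object leaves the entrance area keeps the movement control area uncluttered during normal travel.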
Based on the same inventive concept, the embodiment of the present application further provides a virtual map display device corresponding to the virtual map display method, and since the principle of solving the problem by the device in the embodiment of the present application is similar to that of the virtual map display method described in the embodiment of the present application, the implementation of the device may refer to the implementation of the method, and the repetition is omitted.
Referring to fig. 12, fig. 12 is a schematic structural diagram of a virtual map display device provided in an embodiment of the present application, and as shown in fig. 12, the virtual map display device provided in the embodiment of the present application provides a graphical user interface through a terminal device, and the virtual map display device 800 includes:
the first display control module 810 is configured to display at least a portion of the virtual scene and the first virtual object on the graphical user interface.
The movement control module 820 is configured to control the first virtual object to move in the first virtual scene in response to a movement operation for the first virtual object, and control the first virtual scene range displayed in the graphical user interface to correspondingly change according to the movement of the first virtual object.
The second display control module 830 is configured to control, in response to a preset trigger event, switching a virtual scene displayed in the graphical user interface from the first virtual scene to a second virtual scene, where the second virtual scene includes at least one second virtual object.
The third display control module 840 is configured to display a position mark interface in the graphical user interface in response to a touch operation for the functional control, and display, in the position mark interface, a role identifier of at least one second virtual object and/or a role identifier of the first virtual object according to position mark information reported by at least one second virtual object and/or the first virtual object.
Preferably, the third display control module 840 is configured to: determine initial display positions of the character identifiers according to the position mark information, determine final display positions of the character identifiers according to the distances between the initial display positions, and display the character identifier of the at least one second virtual object and/or the first virtual object according to the final display positions.
Preferably, the position mark interface includes a virtual map corresponding to the virtual scene.
Preferably, the position mark information reported by the first virtual object is determined in the following way: a position reporting prompt identifier is displayed at the map position of the virtual map corresponding to the actual position of the first virtual object in the virtual scene; and in response to a position reporting trigger operation on the virtual map, the position mark information of the first virtual object is generated according to the position reporting prompt identifier, where the position mark information includes the map position of the position reporting prompt identifier in the virtual map.
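Placing the prompt identifier requires projecting the object's scene position into map coordinates; the same map position then becomes the reported position mark information. This sketch assumes a simple proportional mapping; the default scene and map sizes are illustrative, not from the patent.

```python
def scene_to_map(scene_pos, scene_size, map_size):
    """Project a position in the virtual scene onto the virtual map."""
    sx, sy = scene_pos
    return (sx / scene_size[0] * map_size[0],
            sy / scene_size[1] * map_size[1])

def make_position_mark(obj_id, scene_pos,
                       scene_size=(1000, 1000), map_size=(200, 200)):
    """Build the position mark information reported by a virtual object."""
    return {"object": obj_id,
            "map_pos": scene_to_map(scene_pos, scene_size, map_size)}
```

The alternative described below, where the player selects a point on the map directly, would skip the projection and store the selected map position as-is.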
Preferably, the third display control module 840 is specifically configured to: in response to a trigger operation on the position reporting control displayed on the virtual map, determine the actual position of the first virtual object in the virtual scene as the position mark information of the first virtual object; alternatively, in response to a position selection operation performed on the virtual map, determine the position selected on the virtual map as the position mark information of the first virtual object.
Preferably, the third display control module 840 is further configured to: and constructing a two-dimensional coordinate grid corresponding to the virtual map, wherein the coordinate position in the two-dimensional coordinate grid has a corresponding relation with the map position in the virtual map.
The step of adjusting the initial display positions according to the distance between the initial display positions to determine the final display position of the character identifier in the position mark interface comprises at least one step of: determining a first unoccupied target grid intersection point which is closest to a map position corresponding to an initial display position in a two-dimensional coordinate grid, and determining a map position corresponding to the coordinate position of the first target grid intersection point in a virtual map according to a corresponding relation to serve as a final display position; and determining a second target grid intersection point which is closest to the map position corresponding to the initial display position and is unoccupied in the two-dimensional coordinate grid in the direction of the map position corresponding to the actual position of the virtual object corresponding to the initial display position in the virtual scene, and determining the map position corresponding to the coordinate position of the second target grid intersection point in the virtual map according to the corresponding relation as the final display position.
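The first adjustment strategy above, snapping an initial display position to the nearest unoccupied intersection of the two-dimensional coordinate grid, can be sketched as a bounded search around the nearest intersection. The grid spacing, search radius, and occupancy representation are illustrative assumptions.

```python
def nearest_free_intersection(pos, occupied, spacing=1, search_radius=5):
    """Find the unoccupied grid intersection closest to pos.

    occupied: set of (gx, gy) grid coordinates already taken by other
    character identifiers. Returns grid coordinates, or None if every
    intersection within the search window is taken.
    """
    px, py = pos
    base_x, base_y = round(px / spacing), round(py / spacing)
    best, best_d2 = None, None
    for gx in range(base_x - search_radius, base_x + search_radius + 1):
        for gy in range(base_y - search_radius, base_y + search_radius + 1):
            if (gx, gy) in occupied:
                continue
            d2 = (gx * spacing - px) ** 2 + (gy * spacing - py) ** 2
            if best_d2 is None or d2 < best_d2:
                best, best_d2 = (gx, gy), d2
    return best
```

The second strategy described above would constrain the same search to intersections lying in the direction of the object's actual scene position, which biases the mark toward where the object really was.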
Preferably, the character identifier occupies a corresponding display area in the position mark interface, and identity information indicating the identity of the virtual object is displayed in the character identifier. Among the character identifiers displayed in the position mark interface, a preset gap is kept between the identity information displayed in the character identifiers of two adjacent virtual objects, and the display areas occupied by the character identifiers of two adjacent virtual objects either do not overlap or are covered only at their edges.
Preferably, the third display control module 840 is specifically configured to: and in response to detecting that the distance between any two adjacent initial display positions is smaller than the preset gap, adjusting at least one initial display position in any two adjacent initial display positions to determine a final display position of the character identifier.
Preferably, the third display control module 840 is specifically configured to: and displaying the character identifications at final display positions corresponding to the character identifications in the position mark interface, and connecting the character identifications of each virtual object with the initial display positions corresponding to the character identifications.
Preferably, the third display control module 840 is configured to display the character identifiers in the position mark interface by at least one of the following: in response to an enlarge-display trigger operation on the position mark interface, displaying the target area in the virtual map and/or the character identifier of each virtual object in the target area at an enlarged size; reducing the display size of the character identifier of each virtual object in the position mark interface; and changing the representation of the identity information displayed in the character identifier for indicating the identity of the virtual object.
Preferably, the third display control module 840 is configured to display the character identifiers in the position mark interface by: displaying a display strategy control on the position mark interface; in response to a trigger operation on the display strategy control, determining the display mode for the character identifiers; and displaying the character identifier of each virtual object in the position mark interface in the determined display mode.
Preferably, the display mode includes at least one of the following: determining the coverage relationship of the edges of the display areas occupied by the character identifications of two adjacent virtual objects according to the time sequence of reporting the position mark information by each virtual object; and determining the coverage relationship of the edges of the display area occupied by the character identifications of the two adjacent virtual objects according to the identity type of each virtual object.
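The two coverage policies above amount to choosing a draw (z) order for the character identifiers: either later-reported marks are drawn on top, or marks of a priority identity type are. A hedged sketch, with the field names (`report_time`, `identity`) and priority tuple as assumptions:

```python
def draw_order_by_time(marks):
    """Earlier-reported marks first; later reports drawn last, so on top."""
    return sorted(marks, key=lambda m: m["report_time"])

def draw_order_by_identity(marks, priority=("normal", "target")):
    """Identity types later in `priority` are drawn last, so on top."""
    rank = {t: i for i, t in enumerate(priority)}
    return sorted(marks, key=lambda m: rank.get(m["identity"], 0))
```

Whichever identifier is drawn last covers the edges of its neighbors, which realizes the coverage relationship described.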
The virtual map display device provided by the embodiment of the application comprises a first display control module, a mobile control module, a second display control module and a third display control module, wherein the first display control module displays at least part of virtual scenes and first virtual objects on a graphical user interface; the movement control module responds to the movement operation aiming at the first virtual object, controls the first virtual object to move in the first virtual scene, and controls the first virtual scene range displayed in the graphical user interface to correspondingly change according to the movement of the first virtual object; the second display control module responds to a preset trigger event and controls the virtual scene displayed in the graphical user interface to be switched from the first virtual scene to a second virtual scene, wherein the second virtual scene comprises at least one second virtual object; and the third display control module responds to the touch operation aiming at the functional control, a position mark interface is displayed in the graphical user interface, and the character identification of at least one second virtual object and/or first virtual object is displayed in the position mark interface according to the position mark information reported by the at least one second virtual object and/or first virtual object.
In this way, by having each player upload position mark information during the game, this embodiment of the application transmits each player's position information efficiently and clearly, reduces the amount of information players must restate in the game's discussion stage, effectively lightens the players' memory burden, assists players in making faster reasoning judgments and discussions in the discussion stage, and effectively improves the players' game efficiency.
Referring to fig. 13, fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 13, the electronic device 900 includes a processor 910, a memory 920, and a bus 930.
The memory 920 stores machine-readable instructions executable by the processor 910, when the electronic device 900 is running, the processor 910 communicates with the memory 920 through the bus 930, and when the machine-readable instructions are executed by the processor 910, the steps of the virtual map display method in the method embodiment shown in fig. 1 may be executed, and the specific implementation may refer to the method embodiment and will not be described herein.
The embodiment of the present application further provides a computer readable storage medium, where a computer program is stored on the computer readable storage medium, and when the computer program is executed by a processor, the steps of the virtual map display method in the embodiment of the method shown in fig. 1 may be executed, and a specific implementation manner may refer to the method embodiment and will not be described herein.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Finally, it should be noted that: the foregoing examples are merely specific embodiments of the present application, and are not intended to limit the scope of the present application, but the present application is not limited thereto, and those skilled in the art will appreciate that while the foregoing examples are described in detail, the present application is not limited thereto. Any person skilled in the art may modify or easily conceive of the technical solution described in the foregoing embodiments, or make equivalent substitutions for some of the technical features within the technical scope of the disclosure of the present application; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. A virtual map display method, characterized in that a graphical user interface is provided through a terminal device, at least part of a virtual scene of a reasoning game and a first virtual object being displayed on the graphical user interface, the virtual map display method comprising:
responding to the moving operation of the first virtual object, controlling the first virtual object to move in a first virtual scene corresponding to the action stage of the reasoning game and execute a corresponding virtual task, and controlling a first virtual scene range displayed in the graphical user interface to correspondingly change according to the movement of the first virtual object;
responding to a preset trigger event, controlling the virtual scene displayed in the graphical user interface to be switched from the first virtual scene to a second virtual scene corresponding to a discussion stage of the reasoning game, wherein the second virtual scene comprises at least one second virtual object, and a discussion function is provided in the discussion stage so as to discuss the behavior of the virtual object in an action stage;
in the discussion stage, in response to a touch operation on a function control, displaying a position mark interface in the graphical user interface, and displaying, in the position mark interface, character identifications of the at least one second virtual object and/or the first virtual object according to position mark information reported by the at least one second virtual object and/or the first virtual object, wherein the character identifications are used for indicating the identities of the corresponding virtual objects, and the position of each character identification displayed in the position mark interface is used for indicating the position, in the first virtual scene, reported by the virtual object corresponding to that character identification.
2. The virtual map display method according to claim 1, wherein the step of displaying the character identification of the at least one second virtual object and/or the first virtual object according to the position mark information reported by the at least one second virtual object and/or the first virtual object comprises:
determining initial display positions of the character identifiers according to the position mark information, and determining final display positions of the character identifiers according to the distances between the initial display positions;
and displaying the character identification of the at least one second virtual object and/or the first virtual object according to the final display position.
3. The virtual map display method according to claim 1, wherein the position mark interface includes therein a virtual map corresponding to a virtual scene.
4. A virtual map display method according to claim 3, wherein the position mark information reported by the first virtual object is determined by:
displaying a position reporting prompt identifier at a map position of the virtual map corresponding to the actual position of the first virtual object in the virtual scene;
and generating, in response to a position report trigger operation for the virtual map, position mark information of the first virtual object determined according to the position report prompt identifier, wherein the position mark information comprises the map position of the position report prompt identifier in the virtual map.
5. The virtual map display method according to claim 4, wherein the step of generating the position mark information of the first virtual object determined according to the position report hint identifier in response to a position report trigger operation for the virtual map includes:
determining the actual position of the first virtual object currently in the virtual scene as position mark information of the first virtual object in response to a trigger operation of a position report control displayed on the virtual map;
or, in response to a position selection operation performed on the virtual map, determining the position selected on the virtual map as position-marker information of the first virtual object.
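The two reporting branches of claim 5 can be sketched as a small helper: a minimal Python sketch, not part of the patent, in which the `PositionMark` structure and all parameter names are hypothetical. Reporting via the position report control uses the object's actual in-scene position; a map selection uses the selected position instead.

```python
from dataclasses import dataclass

@dataclass
class PositionMark:
    """Position mark information reported for a virtual object (hypothetical structure)."""
    object_id: str
    map_pos: tuple  # (x, y) map position shown by the position report prompt identifier

def make_position_mark(object_id, actual_pos, selected_pos=None):
    """Build position-mark info per claim 5: use the actual in-scene position
    when the report control is triggered (no selection made), otherwise use
    the position the player selected on the virtual map."""
    pos = actual_pos if selected_pos is None else selected_pos
    return PositionMark(object_id, pos)
```

For example, `make_position_mark("p1", (3, 4))` records the actual position, while `make_position_mark("p1", (3, 4), (7, 8))` records the selected one.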
6. The virtual map display method according to claim 2, characterized in that the virtual map display method further comprises: constructing a two-dimensional coordinate grid corresponding to the virtual map, wherein a corresponding relation exists between the coordinate position in the two-dimensional coordinate grid and the map position in the virtual map;
The step of adjusting the initial display positions according to the distance between the initial display positions to determine the final display position of the character identifier in the position mark interface comprises at least one step of:
determining, in the two-dimensional coordinate grid, a first unoccupied target grid intersection point closest to the map position corresponding to the initial display position, and determining, according to the corresponding relationship, the map position in the virtual map corresponding to the coordinate position of the first target grid intersection point as a final display position; and,
and determining a second target grid intersection point which is closest to the map position corresponding to the initial display position and is unoccupied in the two-dimensional coordinate grid in the direction of the map position corresponding to the actual position of the virtual object corresponding to the initial display position in the virtual scene, and determining the map position corresponding to the coordinate position of the second target grid intersection point in the virtual map according to the corresponding relation as a final display position.
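The first branch of claim 6 amounts to snapping a marker to the nearest free intersection of the two-dimensional coordinate grid. The following is a minimal sketch, not the patented implementation; the brute-force scan and the `occupied` set are illustrative assumptions.

```python
def nearest_free_intersection(pos, occupied, width, height):
    """Return the unoccupied grid intersection closest to `pos` on a
    (width x height) two-dimensional coordinate grid (claim 6, first branch).
    `occupied` is a set of (x, y) intersections already taken by other markers."""
    best, best_d = None, float("inf")
    for x in range(width + 1):
        for y in range(height + 1):
            if (x, y) in occupied:
                continue  # skip intersections already holding a character identification
            d = (x - pos[0]) ** 2 + (y - pos[1]) ** 2  # squared Euclidean distance
            if d < best_d:
                best, best_d = (x, y), d
    return best
```

Once a free intersection is found, the claimed correspondence between grid coordinates and map positions maps it back to a final display position on the virtual map.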
7. The virtual map display method according to claim 2, wherein the character identifier occupies a corresponding display area in the position-marker interface, and identity information for indicating the identity of a virtual object is displayed in the character identifier;
and, among the character identifications displayed in the position mark interface, a preset gap is kept between the identity information displayed in the character identifications of two adjacent virtual objects, and the display areas occupied by the character identifications of the two adjacent virtual objects do not overlap, or only the edges of the display areas are covered.
8. The virtual map display method of claim 2, wherein the step of adjusting the initial display positions according to the distance between the initial display positions to determine the final display position of the character identification in the position mark interface comprises:
in response to detecting that the distance between any two adjacent initial display positions is smaller than a preset gap, adjusting at least one of the two adjacent initial display positions to determine the final display positions of the character identifications.
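The adjustment in claim 8 can be illustrated on a single axis: markers closer than the preset gap are pushed apart. This is a simplified, hypothetical sketch (one-dimensional, pushing only the later marker), not the claimed two-dimensional method.

```python
def resolve_overlaps(positions, min_gap):
    """Nudge 1-D initial display positions apart so adjacent markers keep
    at least `min_gap` between them (claim 8, simplified to one axis).
    Returns final display positions in the original input order."""
    order = sorted(range(len(positions)), key=lambda i: positions[i])
    final = list(positions)
    for prev, cur in zip(order, order[1:]):
        if final[cur] - final[prev] < min_gap:
            final[cur] = final[prev] + min_gap  # push the later marker along the axis
    return final
```

For instance, initial positions `[0, 1, 5]` with a gap of 2 become `[0, 2, 5]`: only the pair violating the gap is adjusted.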
9. The virtual map display method according to claim 2, wherein character identifications are displayed in the position-marker interface by:
and displaying the character identifications at final display positions corresponding to the character identifications in the position mark interface, and connecting the character identifications of each virtual object with the initial display positions corresponding to the character identifications.
10. The virtual map display method according to claim 1, wherein character identifications are displayed in the position-marker interface by at least one of:
in response to an enlarged-display skill trigger operation for the position mark interface, displaying a target area in the virtual map and/or the character identifications of the virtual objects in the target area in an enlarged manner;
reducing the display size of the character mark of each virtual object in the position mark interface;
changing the expression form of the identity information which is displayed in the character identifier and is used for indicating the identity of the virtual object.
11. The virtual map display method of claim 7, wherein a character identification is displayed in the position-marker interface by:
displaying a display strategy control on the position mark interface;
in response to a trigger operation for the display strategy control, determining a display mode for the character identifications;
and displaying the character identification of each virtual object in the position mark interface in the determined display mode.
12. The virtual map display method of claim 11, wherein the display mode includes at least one of:
determining the coverage relationship of the edges of the display areas occupied by the character identifications of two adjacent virtual objects according to the time sequence in which each virtual object reports the position mark information;
and determining the coverage relationship of the edges of the display areas occupied by the character identifications of the two adjacent virtual objects according to the identity type of each virtual object.
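The two display modes of claim 12 reduce to choosing a draw order for overlapping marker edges: whatever is drawn later covers what came before. A minimal sketch, assuming hypothetical `report_time` and `identity_rank` keys:

```python
def draw_order(markers, mode):
    """Decide which character identification's edge covers which (claim 12):
    markers drawn later appear on top. `markers` is a list of dicts with
    hypothetical 'id', 'report_time' and 'identity_rank' keys."""
    if mode == "time":        # earlier reports are covered by later ones
        key = lambda m: m["report_time"]
    elif mode == "identity":  # lower-ranked identities are drawn first, then covered
        key = lambda m: m["identity_rank"]
    else:
        raise ValueError(f"unknown display mode: {mode}")
    return [m["id"] for m in sorted(markers, key=key)]
```

The returned list is bottom-to-top: the last identifier in it covers the edges of all the others where they are adjacent.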
13. A virtual map display apparatus, characterized in that a graphical user interface is provided by a terminal device, the virtual map display apparatus comprising:
a first display control module, configured to display at least part of a virtual scene of a reasoning game and a first virtual object on the graphical user interface;
the mobile control module is used for responding to the mobile operation of the first virtual object, controlling the first virtual object to move in a first virtual scene corresponding to the action stage of the reasoning game and execute a corresponding virtual task, and controlling the range of the first virtual scene displayed in the graphical user interface to correspondingly change according to the movement of the first virtual object;
the second display control module is used for responding to a preset trigger event, controlling the virtual scene displayed in the graphical user interface to be switched from the first virtual scene to a second virtual scene corresponding to a discussion stage of the reasoning game, wherein the second virtual scene comprises at least one second virtual object, and a discussion function is provided in the discussion stage so as to discuss the behavior of the virtual object in an action stage;
and a third display control module, configured to: during the discussion stage, in response to a touch operation on a function control, display a position mark interface in the graphical user interface, and display, in the position mark interface, character identifications of the at least one second virtual object and/or the first virtual object according to position mark information reported by the at least one second virtual object and/or the first virtual object, wherein the character identifications are used for indicating the identities of the corresponding virtual objects, and the position of each character identification displayed in the position mark interface is used for indicating the position, in the first virtual scene, reported by the virtual object corresponding to that character identification.
14. An electronic device, comprising: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium communicating over the bus when the electronic device is running, the processor executing the machine-readable instructions to perform the steps of the virtual map display method of any one of claims 1 to 12.
15. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of the virtual map display method according to any of claims 1 to 12.
CN202110420230.3A 2021-04-19 2021-04-19 Virtual map display method and device, electronic equipment and storage medium Active CN113101634B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110420230.3A CN113101634B (en) 2021-04-19 2021-04-19 Virtual map display method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN113101634A CN113101634A (en) 2021-07-13
CN113101634B true CN113101634B (en) 2024-02-02

Family

ID=76718479

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110420230.3A Active CN113101634B (en) 2021-04-19 2021-04-19 Virtual map display method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113101634B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113499585A (en) * 2021-08-09 2021-10-15 网易(杭州)网络有限公司 In-game interaction method and device, electronic equipment and storage medium
CN113680065A (en) * 2021-08-19 2021-11-23 网易(杭州)网络有限公司 Map processing method and device in game
CN113769383A (en) * 2021-09-14 2021-12-10 网易(杭州)网络有限公司 Control method and device for virtual object in battle game and electronic equipment
CN114253646B (en) * 2021-11-30 2024-01-23 万翼科技有限公司 Digital sand table display and generation method, device and storage medium
CN116212361B (en) * 2021-12-06 2024-04-16 广州视享科技有限公司 Virtual object display method and device and head-mounted display device
CN114860148B (en) * 2022-04-19 2024-01-16 北京字跳网络技术有限公司 Interaction method, device, computer equipment and storage medium
CN117138357A (en) * 2022-05-23 2023-12-01 腾讯科技(深圳)有限公司 Message processing method and device in virtual scene, electronic equipment and storage medium
CN115738257B (en) * 2022-12-23 2023-12-08 北京畅游时代数码技术有限公司 Game role display method, device, storage medium and equipment

Citations (6)

Publication number Priority date Publication date Assignee Title
WO2008016064A1 (en) * 2006-07-31 2008-02-07 Camelot Co., Ltd. Game device, object display method in game device, and display program
CN109276887A (en) * 2018-09-21 2019-01-29 腾讯科技(深圳)有限公司 Information display method, device, equipment and the storage medium of virtual objects
CN111530073A (en) * 2020-05-27 2020-08-14 网易(杭州)网络有限公司 Game map display control method, storage medium and electronic device
CN111773705A (en) * 2020-08-06 2020-10-16 网易(杭州)网络有限公司 Interaction method and device in game scene
CN112156455A (en) * 2020-10-14 2021-01-01 网易(杭州)网络有限公司 Game display method and device, electronic equipment and storage medium
CN112619143A (en) * 2020-12-23 2021-04-09 上海米哈游天命科技有限公司 Role identification display method, device, equipment and storage medium

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US10525356B2 (en) * 2017-06-05 2020-01-07 Nintendo Co., Ltd. Storage medium, game apparatus, game system and game control method




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant