WO2019218815A1 - Method, apparatus, computer device, and computer-readable storage medium for displaying a marker element in a virtual scene - Google Patents


Info

Publication number
WO2019218815A1
WO2019218815A1 · PCT/CN2019/082200
Authority
WO
WIPO (PCT)
Prior art keywords
virtual object
target
virtual
terminal
mark
Prior art date
Application number
PCT/CN2019/082200
Other languages
English (en)
French (fr)
Inventor
范又睿
Original Assignee
Tencent Technology (Shenzhen) Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Publication of WO2019218815A1 publication Critical patent/WO2019218815A1/zh
Priority to US16/926,257 priority Critical patent/US11376501B2/en
Priority to US17/831,375 priority patent/US11951395B2/en

Links

Images

Classifications

    • G06F3/04815: Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • A63F13/5372: Indicators for tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
    • A63F13/2145: Input arrangements for locating contacts on a surface, the surface being also a display device, e.g. touch screens
    • A63F13/537: Additional visual information provided to the game scene using indicators, e.g. showing the condition of a game character on screen
    • A63F13/5375: Indicators for graphically or textually suggesting an action, e.g. by displaying an arrow indicating a turn in a driving game
    • A63F13/837: Shooting of targets
    • A63F13/92: Video game devices specially adapted to be hand-held while playing
    • G06F3/04817: Interaction techniques based on graphical user interfaces [GUI] using icons
    • G06T19/006: Mixed reality
    • A63F2300/204: Details of the game platform, the platform being a handheld device
    • A63F2300/306: Output arrangements for displaying a marker associated to an object or location in the game field
    • A63F2300/8023: The game being played by multiple players at a common site, e.g. in an arena, theatre, or shopping mall using a large public display
    • A63F2300/8076: Shooting

Definitions

  • the present application relates to the field of computer application technologies, and in particular, to a method, an apparatus, a computer device, and a computer readable storage medium for displaying a marker element in a virtual scene.
  • In the related art, the target virtual object in the virtual scene can be prompted to the user only through a specific interface.
  • For example, when the user controls a virtual object (such as a game character) in the virtual scene, the telescope interface can be triggered to view the virtual scene from the perspective of the telescope.
  • In a scene picture of the virtual scene viewed from the perspective of the telescope, the user can mark a virtual object (such as a building) by a shortcut operation.
  • When the scene picture of the virtual scene is subsequently viewed from the perspective of the telescope, the marked object displays a marker element (such as an arrow); that is, the arrow is also displayed synchronously corresponding to the building.
  • a method, apparatus, computer device, and computer readable storage medium for displaying a marker element in a virtual scene are provided.
  • A method for displaying a marker element in a virtual scene, comprising:
  • obtaining marking indication information for indicating a target virtual object, and acquiring, according to the marking indication information, graphic data of a marker element, wherein the marker element is a graphic element for indicating a location of the target virtual object in the virtual scene to each user account in the same team; and
  • rendering the marker element according to the graphic data, and displaying the marker element at a specified position around the target virtual object in a display interface of the virtual scene.
  • A method for displaying a marker element in a virtual scene, comprising:
  • displaying a marker element at a specified position around a target virtual object in a display interface of the virtual scene, the target virtual object being a virtual object marked in the virtual scene by a terminal corresponding to a user account, for viewing by each user account in the same team;
  • wherein the user account is an account that controls the current virtual object, or another virtual object of the same team, in the virtual scene, and the marker element is a graphic element used to indicate, to each user account in the team, the location of the target virtual object in the virtual scene.
  • A marker element display device in a virtual scene, comprising:
  • an indication information obtaining module, configured to acquire marking indication information for indicating a target virtual object;
  • wherein the target virtual object is a virtual object marked in the virtual scene by a terminal corresponding to a user account, for viewing by each user account in the same team, and the user account is an account that controls the current virtual object or another virtual object of the same team in the virtual scene;
  • a graphic data obtaining module, configured to acquire graphic data of the marker element according to the marking indication information, the marker element being a graphic element for indicating, to each user account in the same team, the location of the target virtual object in the virtual scene;
  • a rendering module, configured to render the marker element according to the graphic data; and
  • a marker element display module, configured to display the marker element at a specified position around the target virtual object in a display interface of the virtual scene.
  • A computer device, comprising a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the marker element display method in the virtual scene described above.
  • A computer-readable storage medium, storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the marker element display method in the virtual scene described above.
  • FIG. 1a is an application scenario diagram of a method for displaying a marker element in a virtual scene according to an exemplary embodiment of the present application;
  • FIG. 1b is a schematic structural diagram of a terminal provided by an exemplary embodiment of the present application.
  • FIG. 2 is a schematic diagram of a scene picture of a virtual scene provided by an exemplary embodiment of the present application
  • FIG. 3 is a schematic diagram of a marking element display process in a virtual scene provided by an exemplary embodiment of the present application
  • FIG. 4 is a schematic view showing the display of a marking element according to the embodiment shown in FIG. 3;
  • FIG. 5 is a schematic diagram showing another marking element involved in the embodiment shown in FIG. 3;
  • FIG. 6 is a flowchart of a method for displaying a marker element in a virtual scene according to an exemplary embodiment of the present application
  • FIG. 7 is a schematic diagram of a marking operation according to the embodiment shown in FIG. 6;
  • FIG. 8 is a schematic diagram of selection of a type of marker element according to the embodiment shown in FIG. 6;
  • FIG. 9 is a schematic diagram of a linear distance calculation according to the embodiment shown in FIG. 6;
  • FIG. 10 is a schematic diagram showing distance information display according to the embodiment shown in FIG. 6;
  • FIG. 11 is a schematic diagram showing another distance information display according to the embodiment shown in FIG. 6;
  • FIG. 12 is a schematic diagram showing still another distance information display according to the embodiment shown in FIG. 6;
  • FIG. 13 is a flowchart showing a display of a marker element according to an exemplary embodiment of the present application.
  • FIG. 14 is a block diagram showing the structure of a mark element display device in a virtual scene according to an exemplary embodiment of the present application.
  • FIG. 15 is a structural block diagram of a computer device according to an exemplary embodiment of the present application.
  • A virtual scene refers to a virtual scene environment generated by a computer, which can provide a multimedia virtual world. The user can control an operable virtual object in the virtual scene through an operating device or an operation interface, observe objects, characters, scenery, and other virtual objects in the virtual scene from the perspective of that virtual object, or interact with them through the virtual object, for example, by operating a virtual soldier to attack a target enemy.
  • a virtual scene is typically displayed by an application in a computer device such as a terminal based on hardware (such as a screen) in the terminal.
  • The terminal may be a mobile terminal such as a smartphone, a tablet computer, or an e-book reader; or the terminal may be a personal computer device such as a notebook computer or a desktop computer.
  • the method for displaying a mark element in a virtual scene provided by the present application can be applied to an application environment as shown in FIG. 1a.
  • the first terminal 102, the second terminal 104, and the server 106 communicate with each other through a network.
  • the first terminal 102 displays the first display interface, determines the virtual object corresponding to the marking operation as the target virtual object when receiving the marking operation, and then sends a marking request to the server 106, where the marking request includes the identifier of the target virtual object.
  • The server 106 detects whether the distance between the target virtual object and the current virtual object is within the visible distance of the current virtual object, and when it detects that the target virtual object is within the visible distance of the current virtual object, sends marking indication information to the first terminal 102 and/or the second terminal 104.
  • The first terminal 102 and/or the second terminal 104 acquires the graphic data of the marker element according to the marking indication information, renders the marker element according to the graphic data, and then displays the marker element at a specified position around the target virtual object in the display interface of the virtual scene.
  • The first terminal 102 and/or the second terminal 104 may further acquire distance information indicating the distance between the target virtual object and the current virtual object, and display the distance information at a specified location around the marker element in the display interface of the virtual scene.
  • The second terminal 104 may be a terminal used by another user (such as a teammate) associated with the user of the first terminal 102.
  • The first terminal 102 and the second terminal 104 may be, but are not limited to, mobile terminals such as smartphones, tablet computers, or e-book readers; or they may be personal computer devices such as notebook computers or desktop computers.
  • The server 106 can be implemented by a single server or by a server cluster composed of a plurality of servers.
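The marking flow between the terminals and the server 106 described above can be sketched as follows. This is an illustrative sketch rather than the disclosed implementation: the `MarkingServer` class, its data layout, and the 500 m visible distance are assumptions based on the examples in this document.

```python
import math

VISIBLE_DISTANCE = 500.0  # assumed preset visible distance, in meters

class MarkingServer:
    """Minimal sketch of the server 106: it receives a marking request
    from a terminal and, if the target is within the current virtual
    object's visible distance, sends marking indication info to every
    terminal of the same team."""

    def __init__(self, positions, team_terminals):
        self.positions = positions            # object id -> (x, y, z)
        self.team_terminals = team_terminals  # user account -> team terminals

    def handle_mark_request(self, account, current_object_id, target_object_id):
        # Distance check before broadcasting the marking indication info.
        distance = math.dist(self.positions[current_object_id],
                             self.positions[target_object_id])
        if distance > VISIBLE_DISTANCE:
            return []  # target not visible: no indication info is sent
        indication = {"target": target_object_id, "distance": distance}
        return [(terminal, indication)
                for terminal in self.team_terminals[account]]
```

Returning the (terminal, indication) pairs instead of performing network I/O keeps the sketch self-contained; a real server would push the indication info over its existing connections.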
  • FIG. 1b is a schematic structural diagram of a terminal provided by an exemplary embodiment of the present application.
  • the terminal includes a main board 110, an external output/input device 120, a memory 130, an external interface 140, a capacitive touch system 150, and a power supply 160.
  • Processing elements such as a processor and a controller are integrated in the main board 110.
  • the external output/input device 120 may include a display component such as a display screen, a sound playback component such as a speaker, a sound collection component such as a microphone, and various types of keys and the like.
  • Program code and data are stored in the memory 130.
  • the external interface 140 can include a headphone interface, a charging interface, a data interface, and the like.
  • the capacitive touch system 150 can be integrated in a display component or button of the external output/input device 120 for detecting a touch operation performed by the user on the display component or the button.
  • The power supply 160 is used to supply power to the various other components in the terminal.
  • the processor in the main board 110 may generate a virtual scene by executing or calling program code and data stored in the memory, and display the generated virtual scene through the external output/input device 120.
  • the capacitive touch system 150 can detect the touch operation performed when the user interacts with the virtual scene.
  • the virtual scene may be a three-dimensional virtual scene, or the virtual scene may also be a two-dimensional virtual scene.
  • a virtual scene is a three-dimensional virtual scene.
  • FIG. 2 shows a schematic diagram of a scene picture of a virtual scene provided by an exemplary embodiment of the present application.
  • the scene screen 200 of the virtual scene includes a virtual object 210, an environment screen 220 of the three-dimensional virtual scene, and a virtual object 240.
  • the virtual object 210 may be a current virtual object corresponding to the user of the terminal, and the virtual object 240 may be a virtual object controlled by the user corresponding to the other terminal.
  • The user may interact with the virtual object 240 by controlling the virtual object 210, for example, by controlling the virtual object 210 to attack the virtual object 240.
  • the virtual object 210 and the virtual object 240 are three-dimensional models in a three-dimensional virtual scene
  • The environment picture of the three-dimensional virtual scene displayed in the scene screen 200 consists of objects observed from the perspective of the virtual object 210. Exemplarily, the displayed environment picture 220 of the three-dimensional virtual scene includes the ground 224, the sky 225, the horizon 223, the hill 221, and the factory 222.
  • the virtual object 210 can be moved instantly under the control of the user.
  • For example, the user can control the virtual object 210 to move in the virtual scene through an input device such as a keyboard, a mouse, or a gamepad. Taking control through a keyboard and a mouse as an example, the user can control the virtual object to move forward, backward, left, and right through the W, A, S, and D keys on the keyboard, and control the facing direction of the virtual object 210 through the mouse. Alternatively, if the screen of the terminal supports touch operations and the scene screen 200 of the virtual scene contains a virtual control button, then when the user touches the virtual control button, the virtual object 210 moves in the virtual scene in the direction of the touch point relative to the center of the virtual control button.
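The touch-control rule described above (the object moves in the direction of the touch point relative to the center of the virtual control button) can be sketched with a small helper; the function name and the optional dead zone are assumptions, not part of this document.

```python
import math

def touch_move_direction(button_center, touch_point, dead_zone=0.0):
    """Unit direction in which the virtual object should move: from the
    center of the virtual control button toward the touch point.
    `dead_zone` is an assumed tunable, not taken from this document."""
    dx = touch_point[0] - button_center[0]
    dy = touch_point[1] - button_center[1]
    length = math.hypot(dx, dy)
    if length <= dead_zone or length == 0.0:
        # Touch at (or too close to) the button center: no movement.
        return (0.0, 0.0)
    return (dx / length, dy / length)
```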
  • FIG. 3 shows a schematic diagram of a flag element display flow in a virtual scene provided by an exemplary embodiment of the present application.
  • The terminal that runs the application corresponding to the virtual scene can display the marker element corresponding to the target virtual object in the virtual scene by performing the following steps.
  • Step 31: Obtain marking indication information for indicating the target virtual object, where the target virtual object is a virtual object marked in the virtual scene by the terminal corresponding to the user account, for viewing by each user account in the same team.
  • The user account may be an account that controls the current virtual object, or an account that controls another virtual object of the same team, in the virtual scene.
  • The current virtual object refers to the virtual object currently controlled by the user account logged in on the terminal that performs the present solution.
  • the current virtual object may be a virtual soldier in a game scene and controlled by a user corresponding to the current terminal through the terminal.
  • the above target virtual object may be any virtual object allowed to be marked in the virtual scene.
  • the target virtual object may be a scene object in a virtual scene.
  • the scene object may be a ground, a wall, a building, a rock, or a tree; or the target virtual object may also be a virtual item in a virtual scene.
  • For example, the virtual item may be a virtual prop or a virtual vehicle.
  • Alternatively, the target virtual object may be a virtual character controlled by another player in the virtual scene; for example, the virtual character may be a virtual soldier controlled by a hostile or friendly player. Or, the target virtual object may be a virtual object controlled by artificial intelligence (AI) in the virtual scene; for example, the AI-controlled virtual object may be a non-player character (NPC) or a monster in the virtual scene.
  • Step 32 Acquire graphic data of the marking element according to the marking indication information.
  • the markup element is a graphic element for indicating the location of the target virtual object in the virtual scene to each user account in the same team.
  • Step 33 Render a mark element according to the graphic data.
  • Step 34 In the display interface of the virtual scene, the marking element is displayed at a specified position around the target virtual object.
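Steps 31 to 34 above can be sketched on the receiving terminal as follows. All helper names on the `terminal` object (`load_marker_graphic`, `render`, `object_position`, `show`) and the fixed vertical offset are illustrative assumptions, not part of the disclosed solution.

```python
def display_marker_element(terminal, indication):
    """Sketch of steps 31-34: receive marking indication info, fetch the
    marker element's graphic data, render it, and display it at a
    specified position around the target virtual object."""
    # Step 31: the marking indication info identifies the target virtual object.
    target_id = indication["target"]
    # Step 32: acquire graphic data of the marker element (e.g. an arrow icon).
    graphic_data = terminal.load_marker_graphic(indication.get("marker_type", "arrow"))
    # Step 33: render the marker element according to the graphic data.
    marker = terminal.render(graphic_data)
    # Step 34: display it at a specified position around the target virtual
    # object, here a fixed offset above the target's position.
    x, y, z = terminal.object_position(target_id)
    terminal.show(marker, position=(x, y + 2.0, z))
```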
  • the display interface of the virtual scene may be used to display a screen when the virtual scene is viewed in a viewing direction corresponding to the current virtual object.
  • the above view angle may be a direction when the virtual object is observed by the camera model in the virtual environment.
  • In a possible implementation, the camera model automatically follows the virtual object in the virtual environment; that is, when the position of the virtual object in the virtual environment changes, the camera model changes simultaneously following the position of the virtual object, and the camera model always stays within a preset distance range of the virtual object in the virtual environment.
  • the relative position of the camera model and the virtual object does not change during the automatic following process.
  • the above camera model is a three-dimensional model located around the virtual object in the virtual environment.
  • When the first-person perspective is used, the camera model is located near the head of the virtual object or at the head of the virtual object; when the third-person perspective is used, the camera model can be located behind the virtual object and bound to the virtual object, or located at any position at a preset distance from the virtual object. Through the camera model, the virtual object located in the three-dimensional virtual environment can be observed from different angles.
  • When the third-person perspective is used, the camera model is located behind the virtual object (such as behind the head and shoulders of the avatar). For example, in the virtual scene shown in FIG. 2, the scene screen 200 is the screen when the virtual scene is viewed from the third-person perspective of the virtual object 210.
  • In an actual implementation, the camera model is not actually displayed in the three-dimensional virtual environment; that is, the camera model cannot be seen in the three-dimensional virtual environment displayed in the user interface.
  • Taking the case where the camera model is located at an arbitrary distance from the virtual object as an example, one virtual object corresponds to one camera model, and the camera model can rotate with the virtual object as the rotation center. During rotation, the camera model changes not only its angle but also its displacement, while the distance between the camera model and the rotation center remains unchanged; that is, the camera model rotates on the surface of a sphere with the rotation center as the center of the sphere. The rotation center may be any point of the virtual object, such as the head or the torso of the virtual object, or any point around the virtual object, which is not limited in the embodiments of the present application. When the camera model observes the virtual object, the viewing direction of the camera model is the direction in which the perpendicular of the tangent plane of the spherical surface at the camera model points toward the virtual object.
  • the camera model can also observe the virtual object at a preset angle in different directions of the virtual object.
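The spherical motion of the camera model described above can be illustrated with a short sketch: the camera stays at a fixed radius from the rotation center, and its viewing direction runs along the sphere's radius toward the virtual object. The yaw/pitch parameterization and the function names are assumptions for illustration.

```python
import math

def camera_position(center, radius, yaw, pitch):
    """Place the camera model on a sphere of fixed radius around the
    rotation center (any point of, or around, the virtual object).
    Yaw and pitch are in radians; this parameterization is assumed."""
    x = center[0] + radius * math.cos(pitch) * math.sin(yaw)
    y = center[1] + radius * math.sin(pitch)
    z = center[2] + radius * math.cos(pitch) * math.cos(yaw)
    return (x, y, z)

def view_direction(camera, center):
    """Viewing direction of the camera model: along the sphere's radius,
    pointing from the camera toward the virtual object (unit vector)."""
    d = [c - p for p, c in zip(camera, center)]
    length = math.sqrt(sum(v * v for v in d))
    return tuple(v / length for v in d)
```

Because the radius is fixed, rotating yaw or pitch moves the camera on the sphere's surface without changing its distance to the rotation center, matching the description above.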
  • To sum up, when the user account controlling the current virtual object, or a user account controlling another virtual object of the same team, marks a target virtual object in the virtual scene for each user account in the same team to view, the terminal corresponding to each user account in the team can display the marker element of the target virtual object around the target virtual object in the display interface of the virtual scene that it displays. That is, in the solution shown in the present application, for a virtual object marked by the user himself or by a teammate, the marker element is displayed directly in the display interface of the virtual scene, without the user needing to open a specific interface. This makes the display of the marker element of the virtual object more direct, does not affect the user's other operations in the virtual scene, and improves the display effect of the marker element.
  • the terminal may render and display the mark element when the target virtual object is within the visible distance of the current virtual object.
  • the visible distance of the current virtual object may be a preset distance (such as 500 m) around the current virtual object. That is to say, as long as the distance between the target virtual object and the current virtual object is within 500 m, the marked element of the target virtual object can be displayed in the virtual scene generated by the terminal.
• the preset distance may be set in advance by a developer or by operation and maintenance personnel.
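The distance rule described above can be sketched as follows (a minimal illustration, not part of the application; the 500 m threshold and the function name are assumptions):

```python
import math

VISIBLE_DISTANCE_M = 500.0  # assumed preset distance (e.g. set by a developer)

def should_display_mark(current_pos, target_pos, visible=VISIBLE_DISTANCE_M):
    """Return True when the target virtual object lies within the current
    virtual object's visible distance, so its marking element is rendered."""
    return math.dist(current_pos, target_pos) <= visible
```

For example, a target 300 m away would have its marking element displayed, while one 600 m away would not.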
  • FIG. 4 shows a schematic diagram of a display of a marker element according to an embodiment of the present application.
• the virtual scene 40 displayed by the terminal includes the current virtual object 41, the target virtual object 42, the target virtual object 43, and the target virtual object 44, wherein the visible distance of the current virtual object is 500 m.
  • the distance between the target virtual object 42 and the target virtual object 43 and the current virtual object 41 is less than 500 m, and the distance between the target virtual object 44 and the current virtual object 41 is greater than 500 m, and the target virtual object 42 is blocked by the house 45.
• the terminal may display the marking element 42a corresponding to the target virtual object 42 and the marking element 43a corresponding to the target virtual object 43, while no marking element is displayed for the target virtual object 44.
• the marking element 42a of the occluded target virtual object 42 can be displayed with a highlighted outline.
• alternatively, the visible distance of the current virtual object may be a preset distance (for example, 500 m) within the picture observable around the current virtual object. That is to say, when the distance between the target virtual object and the current virtual object is within 500 m and the target virtual object can be directly observed in the display interface, the marking element of the target virtual object can be displayed in the virtual scene generated by the terminal.
  • FIG. 5 shows a schematic diagram of another marking element involved in the embodiment of the present application.
  • the visible distance of the current virtual object is a 500 m distance that can be directly observed.
• the virtual scene 50 displayed by the terminal includes the current virtual object 51, the target virtual object 52, the target virtual object 53, and the target virtual object 54, wherein the distances between the target virtual object 52 and the current virtual object 51 and between the target virtual object 53 and the current virtual object 51 are less than 500 m, the distance between the target virtual object 54 and the current virtual object 51 is greater than 500 m, and the target virtual object 52 is occluded by the house 55.
• the terminal can display the marking element 53a corresponding to the target virtual object 53, while no marking element is displayed for the target virtual object 52 (occluded) and the target virtual object 54 (out of range).
• alternatively, the terminal may display the marking element only for a virtual object that is within the visible distance of the current virtual object and whose marked duration is less than a preset duration (for example, 1 minute); when the marked duration of a virtual object exceeds the preset duration, the terminal can remove the marking element corresponding to that virtual object from the scene picture.
• optionally, the terminal may further acquire distance information (the distance information is used to indicate the distance between the target virtual object and the current virtual object) and display the distance information at a specified position around the marking element in the foregoing display interface.
• in a possible implementation, the terminal may display the display interface of the virtual scene and control the activity of the virtual object in the virtual scene, such as at least one of moving and rotating, by using the solution shown in FIG. 3;
  • the terminal displays a mark element at a specified position around the target virtual object in the display interface.
  • the terminal may also display distance information at a specified location around the marker element in the display interface.
• the target virtual object may be a virtual object marked by the user account that controls the current virtual object, or the target virtual object may be a virtual object marked in the virtual scene by a user account that controls another virtual object in the same team (i.e., by a teammate); that is, when the user is in team mode, a virtual object marked by the user in the virtual scene can be shared with the teammates' terminals for display of the marking element.
• after a target virtual object is marked, the server can synchronize the marking status to the terminals corresponding to the other user accounts in the same team.
• the server may receive a marking request including the identifier of the target virtual object, and determine the target terminal among the terminals corresponding to the user accounts in the same team, i.e., a terminal for which the target virtual object is within the visible distance of the virtual object controlled by its corresponding user account.
• the server then sends the marking indication information to the target terminal to instruct the target terminal to acquire the graphic data of the marking element, render the marking element according to the graphic data, and display the marking element at the specified position around the target virtual object in the display interface of the virtual scene.
• FIG. 6 is a flowchart of a method for displaying a marking element in a virtual scene provided by an exemplary embodiment of the present application. Taking a user marking a virtual object as an example, the method for displaying a marking element in the virtual scene may include the following steps:
  • Step 601 The first terminal displays the first display interface.
• the first display interface may display a picture of the virtual scene observed from the viewing direction corresponding to the first virtual object.
  • the first virtual object is a current virtual object corresponding to the first terminal.
  • Step 602 When receiving the marking operation, the first terminal determines the virtual object corresponding to the marking operation as the target virtual object.
• in a possible implementation, the first terminal may display a crosshair icon in the first display interface, where the crosshair icon is used to indicate the direction in which the current virtual object corresponding to the first terminal is facing; when the marking operation is received, the first terminal may determine the virtual object with which the crosshair icon is aligned as the target virtual object.
  • FIG. 7 illustrates a schematic diagram of a marking operation according to an embodiment of the present application.
  • the virtual scene 70 displayed by the terminal includes a current virtual object 71 and a crosshair icon 72 corresponding to the current virtual object 71.
• the crosshair icon 72 indicates the direction in which the current virtual object 71 is facing; in a shooting game scene, the crosshair icon 72 can also indicate the direction in which the weapon held by the current virtual object 71 is aimed.
• for example, if the user wants to mark a spot on the ground, the user can adjust the character's perspective so that the crosshair icon 72 is aimed at that spot, and then perform a shortcut operation (i.e., the above marking operation), such as pressing the shortcut Q key, after which the first terminal receives the user's marking operation on the ground.
• optionally, when the marking operation is received, the first terminal may further display a mark type selection interface, where the mark type selection interface includes at least two mark options and each mark option corresponds to one mark type; when a selection operation performed in the mark type selection interface is received, the first terminal determines a target mark type, which is the mark type of the mark option corresponding to the selection operation.
• that is, the user performing the marking may also select the type of the marking element of the target virtual object; for example, the color and shape of the marking element of the target virtual object may be selected.
  • FIG. 8 illustrates a schematic diagram of selection of a tag element type according to an embodiment of the present application.
• after the user performs a marking operation (such as pressing the shortcut Q key), the terminal superimposes the mark type selection interface 81 on the virtual scene 80. The mark type selection interface 81 includes a plurality of mark options, each corresponding to one type of marking element; for example, in FIG. 8, the mark options included in the mark type selection interface 81 may include an option corresponding to a gun-shaped marking element, an option corresponding to a grenade-shaped marking element, an option corresponding to a dagger-shaped marking element, and so on.
• the user selects the type of marking element to be set by a selection operation such as a mouse click, a touch click, or a shortcut key (such as a numeric key, or switching through the Tab key while holding the Q key), and the terminal determines the type corresponding to the selection operation as the marking-element type of the target virtual object (i.e., the target mark type).
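The selection flow above can be sketched as follows (a hypothetical option table; the shape names only mirror those mentioned for FIG. 8 and are not claimed by the application):

```python
# Hypothetical mark options shown in the mark type selection interface 81.
MARK_OPTIONS = ["gun", "grenade", "dagger"]

def select_mark_type(index):
    """Map a selection operation (mouse click, touch click, numeric key,
    or Tab cycling) onto the target mark type of the chosen option."""
    return MARK_OPTIONS[index % len(MARK_OPTIONS)]
```

Cycling with the Tab key would simply advance `index`, wrapping past the last option.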
  • Step 603 The first terminal sends a marking request to the server, where the marking request includes an identifier of the target virtual object.
• the first terminal may send a marking request carrying the identifier of the target virtual object to the server, where the identifier of the target virtual object may be a unique identification (ID) of the target virtual object in the virtual scene, or the identifier of the target virtual object may be the current coordinates of the target virtual object in the virtual scene.
• the specific form of the identifier of the target virtual object is not limited in this embodiment.
• optionally, when the user corresponding to the first terminal selects the target mark type of the target virtual object while marking it, the first terminal may send to the server a marking request including the identifier of the target virtual object and the target mark type.
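A sketch of the resulting marking request (the field names are assumptions; the application only requires that the request carry the identifier, plus the mark type when one was selected):

```python
def build_mark_request(target_id=None, target_coords=None, mark_type=None):
    """Build the marking request sent to the server: the identifier may be
    a unique ID or the object's current coordinates in the virtual scene;
    the target mark type is attached only when the user selected one."""
    request = {"target": target_id if target_id is not None else target_coords}
    if mark_type is not None:
        request["mark_type"] = mark_type
    return request
```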
  • Step 604 The server detects whether the distance of the target virtual object from the current virtual object is within the visible distance of the current virtual object.
• the server may detect whether the distance of the target virtual object from the first virtual object (i.e., the current control object corresponding to the first terminal) is within the visible distance of the first virtual object, and detect whether the distance of the target virtual object from the second virtual object (i.e., the current control object corresponding to the second terminal) is within the visible distance of the second virtual object.
  • the second terminal may be a terminal used by a friend (such as a teammate) of the user corresponding to the first terminal.
  • the server may first acquire the coordinates of the first virtual object in the virtual scene and the coordinates of the target virtual object in the virtual scene, and calculate the first virtual object and the target according to the coordinates of the two. The distance of the virtual object in the virtual scene. After calculating the distance between the first virtual object and the target virtual object in the virtual scene, the server further compares the distance between the first virtual object and the target virtual object in the virtual scene and the visible distance of the first virtual object, to It is detected whether the distance of the target virtual object from the first virtual object is within the visible distance of the first virtual object.
• in addition, the server may detect whether the virtual object currently controlled by a teammate of the user of the first terminal (i.e., the second virtual object) is surviving; if the second virtual object survives, the server also acquires the coordinates of the surviving second virtual object in the virtual scene, and calculates the distance between the second virtual object and the target virtual object in the virtual scene. After calculating this distance, the server further compares the distance between the second virtual object and the target virtual object in the virtual scene with the visible distance of the second virtual object, to detect whether the distance of the target virtual object from the second virtual object is within the visible distance of the second virtual object.
• the distance between the target virtual object and the current virtual object of the first terminal or the second terminal may be a linear distance (also referred to as a spatial distance or three-dimensional space distance) in the virtual scene.
  • FIG. 9 illustrates a schematic diagram of linear distance calculation according to an embodiment of the present application.
• for example, assuming that the coordinates of the current virtual object in the virtual scene are (x1, y1, z1) and the coordinates of the target virtual object in the virtual scene are (x2, y2, z2), the distance between the current virtual object and the target virtual object can be expressed as:
• d = √((x1 − x2)² + (y1 − y2)² + (z1 − z2)²)
• the unit of the above d may be meters (m).
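The formula is the ordinary three-dimensional Euclidean distance; a direct transcription (the coordinates below are only illustrative):

```python
import math

def scene_distance(p1, p2):
    """Linear (three-dimensional space) distance between two scene points,
    matching d = sqrt((x1-x2)^2 + (y1-y2)^2 + (z1-z2)^2)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

# scene_distance((0, 0, 0), (3, 4, 0)) -> 5.0
```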
• through the above detection, the server can determine the target terminal from among the terminals corresponding to the user accounts in the same team.
  • the terminal corresponding to each user account in the same team includes the first terminal and the second terminal as an example.
• if the target virtual object is within the visible distance of the current virtual object corresponding to the first terminal, the first terminal is a target terminal; if the target virtual object is within the visible distance of the current virtual object corresponding to the second terminal, the second terminal is a target terminal; if the target virtual object is within the visible distance of the current virtual object corresponding to the first terminal and within the visible distance of the current virtual object corresponding to the second terminal, the first terminal and the second terminal are both target terminals.
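This selection of target terminals can be sketched as follows (the function name and data shapes are assumptions; the rule itself is the per-terminal visible-distance check described above):

```python
import math

def determine_target_terminals(target_pos, team_terminals):
    """Keep the terminals whose current virtual object has the target
    within its visible distance; `team_terminals` maps a terminal id to
    (current_object_position, visible_distance)."""
    return [tid for tid, (pos, visible) in team_terminals.items()
            if math.dist(pos, target_pos) <= visible]
```

For a team of a first and a second terminal, the result may contain either terminal, both, or neither.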
• Step 605 When it is detected that the target virtual object is within the visible distance of the current virtual object, the server sends the marking indication information to the corresponding terminal, and the first terminal and/or the second terminal receives the marking indication information.
• the marking indication information is used to indicate the target virtual object.
• the server may send the marking indication information to the target terminal among the first terminal and the second terminal. For example, when it is detected that the distance of the target virtual object from the first virtual object is within the visible distance of the first virtual object, the server may send the first marking indication information to the first terminal; correspondingly, when it is detected that the distance of the target virtual object from the second virtual object is within the visible distance of the second virtual object, the server may send the second marking indication information to the second terminal.
• correspondingly, the first terminal/second terminal (i.e., the target terminal) receives the marking indication information sent by the server.
• when a target mark type has been selected, the server may send marking indication information including the target mark type to the target terminal, or the server may send the target mark type to the target terminal by using other indication information than the marking indication information.
• optionally, since the target mark type was selected at the first terminal and is already known there, the server may send the target mark type only to the second terminal; that is, the second marking indication information includes the target mark type, and the first marking indication information may not include the target mark type.
  • Step 606 The first terminal and/or the second terminal acquire the graphic data of the marking element according to the marking indication information.
• the marking element is a graphic element used to indicate, to each user account in the same team, the location of the target virtual object in the virtual scene.
• optionally, the graphic data of the marking element may be acquired according to the target mark type.
• for example, if the user corresponding to the first terminal selects the option corresponding to the gun-shaped marking element in the interface shown in FIG. 8, then in this step, the first terminal and/or the second terminal correspondingly acquire the graphic data of the gun-shaped marking element.
• optionally, the marking indication information may further include object indication information, where the object indication information is used to indicate the virtual object (i.e., the first virtual object) controlled by the user account corresponding to the terminal (i.e., the first terminal) that marks the target virtual object.
  • the first terminal/second terminal may acquire graphics data corresponding to the virtual object indicated by the object indication information.
• optionally, each target terminal may display different marking elements for target virtual objects marked by different users in the same team. For example, the marking elements of target virtual objects marked by different users in the same team can be distinguished by different colors, so that each user can quickly tell which teammate marked a given target virtual object.
  • Step 607 The first terminal and/or the second terminal render the marking element according to the graphic data.
  • the first terminal and/or the second terminal may respectively perform rendering according to the acquired graphic data locally to obtain a marking element corresponding to the target virtual object.
  • Step 608 The first terminal and/or the second terminal display the marking element at a specified position around the target virtual object in a display interface of the virtual scene.
• the marking element may be displayed in the virtual scene at a position corresponding to the target virtual object, for example, above the target virtual object and close to the target virtual object.
  • Step 609 The first terminal and/or the second terminal acquire distance information.
  • the distance information is used to indicate a distance between the target virtual object and the current virtual object.
  • the distance information may be sent by the server to the first terminal and/or the second terminal.
• since the server, when detecting whether the target virtual object is within the visible distance of the current virtual objects corresponding to the first terminal and the second terminal, needs to calculate the distances between the target virtual object and the current virtual objects corresponding to the first terminal and the second terminal respectively, in step 605, when the server detects that the target virtual object is within the visible distance of the current virtual object of a certain terminal, the marking element display indication sent by the server to the corresponding terminal may carry the distance information between the target virtual object and the current virtual object of that terminal, and the corresponding terminal may obtain the distance information from the marking element display indication.
• alternatively, the first terminal and/or the second terminal may also acquire the coordinates of the target virtual object in the virtual scene by themselves, and locally calculate the distance information between the two based on the coordinates of the current virtual object and the coordinates of the target virtual object in the virtual scene.
  • Step 610 The first terminal and/or the second terminal display the distance information at a specified position around the mark element in the display interface of the virtual scene.
• for example, the first terminal displays, at a specified position around the corresponding marking element in the first display interface, the distance information between the first virtual object and the target virtual object; correspondingly, the second terminal displays, at a specified position around the corresponding marking element in the second display interface, the distance information between the second virtual object and the target virtual object.
• the second display interface may display a picture of the virtual scene observed from the viewing direction corresponding to the second virtual object.
  • the first terminal and/or the second terminal may display the distance information at a specified position around the mark element.
• for example, the first terminal and/or the second terminal may display the distance information in text form at a specified location around the marking element of the target virtual object.
  • the specified position around the marking element may be the left side, the right side, the upper side or the lower side of the marking element, and the like.
  • FIG. 10 illustrates a schematic diagram of distance information display according to an embodiment of the present application.
• the virtual scene 100 displayed by the terminal includes a current virtual object 1001 and a marking element 1002 of a virtual object (i.e., the inverted triangle icon in FIG. 10); a display frame 1003 is displayed above the marking element 1002, and the numerical text of the distance between the virtual object corresponding to the marking element 1002 and the current virtual object 1001 (shown as 85 m in FIG. 10) is displayed in the display frame 1003.
• optionally, when the terminal displays the distance information for the corresponding marking element, the distance information may be displayed in graphic form at a specified position around the marking element in the scene picture of the virtual scene.
  • FIG. 11 shows another schematic diagram of distance information display according to an embodiment of the present application.
• the virtual scene 110 displayed by the terminal includes a current virtual object 1101 and a marking element 1102 of a virtual object (i.e., the inverted triangle icon in FIG. 11), and a distance indication graphic 1103 is displayed to the right of the marking element 1102.
• the distance indication graphic 1103 is composed of one or more horizontal bars, and the number of horizontal bars in the distance indication graphic 1103 indicates the distance between the corresponding virtual object and the current virtual object 1101; for example, the greater the number of bars in the distance indication graphic 1103, the longer the distance between the virtual object and the current virtual object 1101.
  • the distance information includes at least one of the following information: a linear distance between the target virtual object and the current virtual object; a horizontal distance between the target virtual object and the current virtual object; and the target The height difference between the virtual object and the current virtual object.
  • the distance between the target virtual object displayed by the terminal and the current virtual object may be a horizontal distance, a three-dimensional space distance, or a height difference. Alternatively, it may be any two or all three of the above three.
  • the distance value displayed in the display box 1003 may be a horizontal distance between the target virtual object and the current virtual object, or may be a three-dimensional space distance between the target virtual object and the current virtual object. .
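The three kinds of distance information can be computed as follows (a sketch; the y axis is assumed to be the height axis, and the sign convention matches the +/− display described for FIG. 12):

```python
import math

def distance_info(current, target):
    """Return (linear distance, horizontal distance, height difference)
    between the current virtual object and the target virtual object."""
    dx, dy, dz = (t - c for c, t in zip(current, target))
    linear = math.sqrt(dx * dx + dy * dy + dz * dz)
    horizontal = math.hypot(dx, dz)   # ignores the height axis
    return linear, horizontal, dy     # dy > 0: target is higher

def height_text(height_diff):
    """Format the height difference with an explicit +/- sign."""
    return f"{height_diff:+.0f}m"
```

Any one, any two, or all three of these values could then be rendered around the marking element.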
  • FIG. 12 shows another schematic diagram of distance information display according to an embodiment of the present application.
  • the distance between the target virtual object displayed by the terminal and the current virtual object is a horizontal distance and a height difference.
• the virtual scene 120 displayed by the terminal includes the current virtual object 1201 and the marking element 1202 of a virtual object (i.e., the inverted triangle icon in FIG. 12), and a display frame 1203 is displayed.
• the numerical text 1203a of the horizontal distance between the virtual object corresponding to the marking element 1202 and the current virtual object 1201 (shown as 85 m in FIG. 12) is displayed in the display frame 1203, together with the numerical text 1203b of the height difference between the virtual object corresponding to the marking element 1202 and the current virtual object 1201 (shown as +5 m in FIG. 12, where the + sign indicates that the virtual object corresponding to the marking element 1202 is higher than the current virtual object 1201; correspondingly, if the virtual object corresponding to the marking element 1202 is lower than the current virtual object 1201, the sign of the numerical text 1203b may be a − sign).
• optionally, the display interface further includes a thumbnail map of the virtual scene; the first terminal and/or the second terminal may also display a marker icon of the target virtual object in the thumbnail map, at the position corresponding to the location of the target virtual object in the virtual scene.
  • the virtual scene 100 displayed by the terminal further includes a thumbnail map 1004 including an icon 1004a corresponding to the current virtual object 1001 and a marker icon 1004b of the target virtual object.
• optionally, the server may instruct the first terminal and/or the second terminal (i.e., the target terminal) to which the marking indication information was previously sent to remove the corresponding marking element from the display interface of the virtual scene.
  • the target terminal may remove the marking element from the display interface of the virtual scene when the timing of the second timer reaches the second preset duration; wherein the second timing The timer is a timer that starts when the target terminal starts to display the marker element, and the timer duration is the second preset duration.
• for example, the first terminal and/or the second terminal receives the marking indication information sent by the server and, after displaying the marking element according to the marking indication information, starts a timer; when the timer reaches a certain duration (such as 2 min), the display of the marking element is cancelled.
• the timer started by the terminal may be initiated by the server notifying the terminal. For example, the marking indication information may carry the second preset duration; after receiving the marking indication information and displaying the marking element, the terminal starts the timer according to the second preset duration.
  • the first preset duration or the second preset duration may be a duration preset by a developer or an operation and maintenance personnel in the system.
• alternatively, the first preset duration or the second preset duration may also be a duration set by the user who marks the target virtual object.
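The timed removal of a marking element can be sketched as follows (a minimal timer; the 2 min value is only the example duration mentioned above, and the class name is an assumption):

```python
import time

class MarkTimer:
    """Track how long a marking element has been displayed and report when
    the preset duration (e.g. the second preset duration) has elapsed."""
    def __init__(self, preset_s, now=None):
        self.start = time.monotonic() if now is None else now
        self.preset = preset_s

    def expired(self, now=None):
        t = time.monotonic() if now is None else now
        return t - self.start >= self.preset
```

When `expired()` returns True, the terminal would remove the marking element from the display interface.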
• in summary, in the solution shown in this embodiment of the present application, a terminal whose user account controls the current virtual object, or a terminal whose user account controls another virtual object of the same team, marks a target virtual object in the virtual scene, and the terminal corresponding to each user account in the same team can display the marking element of the target virtual object around the target virtual object in the display interface of the virtual scene it displays. That is, for a virtual object marked by the user himself or by a teammate, the marking element is displayed directly in the display interface of the virtual scene, without requiring the user to open a specific interface, so that the display of the marking element of the virtual object is more direct and does not affect the user's other operations in the virtual scene, thereby improving the display effect of the marking elements.
• FIG. 13 shows a flowchart of displaying a marking element provided by an exemplary embodiment of the present application.
• after any player in the game uses the marking function, the player's terminal sends the marked object corresponding to the marking function (i.e., the target virtual object) to the server.
• the server then calculates the three-dimensional distance between the marked object and each friendly unit (i.e., the game characters of the player's teammates), and determines whether the three-dimensional distance is within the visible distance of the friendly unit; if so, the server synchronizes the marking element to the terminal corresponding to the friendly unit, which displays the mark icon of the marking element of the marked object in the scene picture of the virtual scene and in the map UI. Afterwards, the server periodically determines whether the display duration of the marking element has reached the maximum marking time; if so, the server instructs each terminal to cancel displaying the marking element, and if not, the terminal is allowed to continue displaying the marking element.
  • FIG. 14 is a block diagram showing the structure of a marker element display device in a virtual scene, according to an exemplary embodiment.
• the marking element display apparatus in the virtual scene can be used in a terminal to perform all or part of the steps performed by the terminal in the method shown in the corresponding embodiment of FIG. 3 or FIG.
  • the marking element display device in the virtual scene may include:
• the indication information obtaining module 1401 is configured to acquire marking indication information, where the marking indication information is used to indicate a target virtual object; the target virtual object is a virtual object marked in the virtual scene by a terminal corresponding to a user account, for each user account in the same team to view, where the user account is an account that controls the current virtual object or another virtual object of the same team in the virtual scene;
  • the graphic data obtaining module 1402 is configured to acquire graphic data of the marking element according to the marking indication information, where the marking element is used to indicate to the respective user accounts in the same team that the target virtual object is in the virtual scene Graphic element of the location;
• a rendering module 1403, configured to render the marking element according to the graphic data;
• the marking element display module 1404 is configured to display the marking element at a specified position around the target virtual object in a display interface of the virtual scene.
  • the device further includes:
• a distance information acquiring module, configured to acquire distance information, where the distance information is used to indicate the distance between the target virtual object and the current virtual object;
  • a distance information display module for displaying the distance information at a specified position around the mark element in the display interface.
• the distance information includes at least one of the following information: a linear distance between the target virtual object and the current virtual object; a horizontal distance between the target virtual object and the current virtual object; and a height difference between the target virtual object and the current virtual object.
  • the device further includes:
• a mark icon display module, configured to display a mark icon of the target virtual object in the thumbnail map, at the position corresponding to the location of the target virtual object in the virtual scene.
  • the indication information obtaining module 1401 is specifically configured to receive the marking indication information sent by the server when the server detects that the target virtual object is within a visible distance of the current virtual object.
  • the device further includes:
  • a cross-hair icon display module configured to display a cross-hair icon in the display interface, where the cross-hair icon is used to indicate a direction in which the current virtual object is facing;
  • an object determining module, configured to, when the user account is an account that controls the current virtual object in the virtual scene, determine the virtual object aligned with the cross-hair icon as the target virtual object upon receiving a marking operation;
  • a request sending module configured to send a marking request to the server, where the marking request includes an identifier of the target virtual object.
  • the device further includes:
  • An interface display module configured to display a mark type selection interface when the mark operation is received, where the mark type selection interface includes at least two mark options, and each of the mark options corresponds to one mark type;
  • a type determining module, configured to determine a target tag type upon receiving a selection operation performed in the tag type selection interface, the target tag type being the tag type of the tag option corresponding to the selection operation;
  • the request sending module is specifically configured to send, to the server, the marking request that includes the identifier of the target virtual object and the target tag type.
  • the marking indication information includes a target marking type
  • the graphic data obtaining module 1402 is specifically configured to acquire graphic data of the marking element according to the target mark type.
  • the tag indication information includes object indication information, where the object indication information is used to indicate a virtual object controlled by a user account corresponding to the terminal that marks the target virtual object;
  • the graphic data obtaining module 1402 is specifically configured to acquire the graphic data corresponding to the virtual object indicated by the object indication information.
  • the device further includes:
  • a canceling information receiving module, configured to receive mark canceling information, which is sent by the server after the timing duration of a first timer reaches a first preset duration; the first timer is started at the moment when the target virtual object is marked;
  • a first removal module for removing the marking element from the display interface.
  • the device further includes:
  • a second removing module, configured to remove the marking element from the display interface of the virtual scene when the timing duration of a second timer reaches a second preset duration; the second timer is started at the moment when the marking element starts to be displayed.
  • FIG. 15 is a block diagram showing the structure of a computer device 1500, according to an exemplary embodiment.
  • the computer device 1500 can be a user terminal, such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, or a desktop computer.
  • Computer device 1500 may also be referred to as a user device, a portable terminal, a laptop terminal, a desktop terminal, and the like.
  • computer device 1500 includes a processor 1501 and a memory 1502. The memory stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to complete all or part of the steps of the markup element display method in the virtual scene shown in the embodiment corresponding to FIG. 3 or FIG. 6.
  • the processor 1501 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
  • the processor 1501 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array).
  • the processor 1501 may also include a main processor and a coprocessor. The main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state.
  • the processor 1501 may be integrated with a GPU (Graphics Processing Unit), and the GPU is responsible for rendering and drawing the content that the display screen needs to display.
  • the processor 1501 may further include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
  • Memory 1502 can include one or more computer readable storage media, which can be non-transitory.
  • the memory 1502 may also include high speed random access memory, as well as non-volatile memory such as one or more magnetic disk storage devices, flash memory storage devices.
  • the non-transitory computer readable storage medium in the memory 1502 is configured to store at least one instruction, and the at least one instruction is executed by the processor 1501 to implement the marker element display method in the virtual scene provided by the method embodiments of the present application.
  • computer device 1500 also optionally includes a peripheral device interface 1503 and at least one peripheral device.
  • the processor 1501, the memory 1502, and the peripheral device interface 1503 may be connected by a bus or a signal line.
  • Each peripheral device can be connected to the peripheral device interface 1503 via a bus, signal line, or circuit board.
  • the peripheral device includes at least one of a radio frequency circuit 1504, a touch display screen 1505, a camera 1506, an audio circuit 1507, a positioning component 1508, and a power source 1509.
  • computer device 1500 also includes one or more sensors 1510.
  • the one or more sensors 1510 include, but are not limited to, an acceleration sensor 1511, a gyro sensor 1512, a pressure sensor 1513, a fingerprint sensor 1514, an optical sensor 1515, and a proximity sensor 1516.
  • the structure shown in FIG. 15 does not constitute a limitation to computer device 1500, which may include more or fewer components than illustrated, combine certain components, or adopt a different component arrangement.
  • a non-transitory computer readable storage medium comprising instructions, such as a memory comprising at least one instruction, at least one program, a code set, or an instruction set, which may be executed by the processor to perform all or part of the steps of the mark element display method in the virtual scene shown in the embodiment corresponding to FIG. 3 or FIG. 6 above.
  • Non-volatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM) or external cache memory.
  • RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

一种虚拟场景中的标记元素显示方法、装置、计算机设备及计算机可读存储介质,其中方法包括:获取用于指示目标虚拟物(43)的标记指示信息;根据标记指示信息获取标记元素(42a、43a)的图形数据;根据图形数据渲染得到标记元素(42a、43a);在虚拟场景(40)的显示界面中,位于目标虚拟物(43)周围的指定位置处显示标记元素(42a、43a)。

Description

虚拟场景中的标记元素显示方法、装置、计算机设备及计算机可读存储介质
相关申请的交叉引用
本申请要求于2018年05月18日提交中国专利局,申请号为2018104803962,发明名称为“虚拟场景中的标记元素显示方法、计算机设备及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及计算机应用技术领域,特别涉及一种虚拟场景中的标记元素显示方法、装置、计算机设备及计算机可读存储介质。
背景技术
在很多构建虚拟场景的应用程序(比如虚拟现实应用程序、三维地图程序、军事仿真程序、第一人称射击游戏、多人在线战术竞技游戏等)中,用户有对虚拟场景中的虚拟对象进行标记的需求。
在相关技术中,可以通过特定的界面对虚拟场景中目标虚拟物进行提示。比如,在某个虚拟场景中,当用户正在控制的虚拟对象(比如游戏人物)获取到虚拟道具“望远镜”时,可以触发打开望远镜界面,以望远镜的视角观察虚拟场景。并且,用户可以在以望远镜的视角观察虚拟场景的场景画面中通过快捷操作标记某一虚拟对象(比如某栋建筑物),此时,以望远镜的视角观察虚拟场景的场景画面中会对应该建筑物显示一个标记元素(比如一个箭头)。当用户后续再次打开望远镜界面,且以望远镜的视角观察虚拟场景的场景画面中存在该建筑物时,该箭头也对应该建筑物同步显示。
发明内容
根据本申请的各种实施例,提供一种虚拟场景中的标记元素显示方法、装置、 计算机设备及计算机可读存储介质。
一种虚拟场景中的标记元素显示方法,所述方法包括:
获取用于指示目标虚拟物的标记指示信息,所述目标虚拟物是用户账号对应的终端在所述虚拟场景中标记给同一队伍中的各个用户账号进行查看的虚拟物,所述用户账号是在所述虚拟场景中控制当前虚拟对象或同一队伍的其它虚拟对象的账号;
根据所述标记指示信息获取标记元素的图形数据,所述标记元素是用于向所述同一队伍中的各个用户账号指示所述目标虚拟物在所述虚拟场景中的位置的图形元素;
根据所述图形数据渲染得到所述标记元素;
在所述虚拟场景的显示界面中,位于所述目标虚拟物周围的指定位置处显示所述标记元素。
一种虚拟场景中的标记元素显示方法,所述方法包括:
显示虚拟场景的显示界面,所述显示界面用于显示以当前虚拟对象的视角方向观察所述虚拟场景时的画面;
控制所述虚拟对象在所述虚拟场景中运动,所述运动包括移动和转动中的至少一种;
当所述显示界面中存在目标虚拟物时,在显示界面中,位于所述目标虚拟物周围的指定位置处显示标记元素,所述目标虚拟物是用户账号对应的终端在所述虚拟场景中标记给同一队伍中的各个用户账号进行查看的虚拟物,所述用户账号是在所述虚拟场景中控制当前虚拟对象或同一队伍的其它虚拟对象的账号;所述标记元素是用于向所述同一队伍中的各个用户账号指示所述目标虚拟物在所述虚拟场景中的位置的图形元素。
一种虚拟场景中的标记元素显示装置,所述装置包括:
指示信息获取模块，用于获取标记指示信息，所述标记指示信息用于指示目标虚拟物；所述目标虚拟物是用户账号对应的终端在所述虚拟场景中标记给同一队伍中的各个用户账号进行查看的虚拟物，所述用户账号是在所述虚拟场景中控制当前虚拟对象或同一队伍的其它虚拟对象的账号；
图形数据获取模块,用于根据所述标记指示信息获取标记元素的图形数据,所述标记元素是用于向所述同一队伍中的各个用户账号指示所述目标虚拟物在所述虚拟场景中的位置的图形元素;
渲染模块,用于根据所述图形数据渲染得到所述标记元素;
标记元素显示模块,用于在所述虚拟场景的显示界面中,位于所述目标虚拟物周围的指定位置处显示所述标记元素。
一种计算机设备,所述计算机设备包含处理器和存储器,所述存储器中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由所述处理器加载并执行以实现上述虚拟场景中的标记元素显示方法。
一种计算机可读存储介质，所述存储介质中存储有至少一条指令、至少一段程序、代码集或指令集，所述至少一条指令、所述至少一段程序、所述代码集或指令集由处理器加载并执行以实现上述虚拟场景中的标记元素显示方法。
本申请的一个或多个实施例的细节在下面的附图和描述中提出。本申请的其它特征和优点将从说明书、附图以及权利要求书变得明显。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其它的附图。
图1a是本申请一个示例性实施例提供的一种虚拟场景中的标记元素显示方法的应用场景图;
图1b是本申请一个示例性的实施例提供的终端的结构示意图;
图2是本申请一个示例性实施例提供的虚拟场景的场景画面示意图;
图3是本申请一个示例性实施例提供的虚拟场景中的标记元素显示流程的示意图；
图4是图3所示实施例涉及的一种标记元素的显示示意图;
图5是图3所示实施例涉及的另一种标记元素的显示示意图;
图6是本申请一个示例性实施例提供的一种虚拟场景中的标记元素显示方法的流程图;
图7是图6所示实施例涉及的一种标记操作示意图;
图8是图6所示实施例涉及的一种标记元素类型选择示意图;
图9是图6所示实施例涉及的一种直线距离计算示意图;
图10是图6所示实施例涉及的距离信息显示示意图;
图11是图6所示实施例涉及的另一种距离信息显示示意图;
图12是图6所示实施例涉及的又一种距离信息显示示意图;
图13是本申请一示例性实施例提供的标记元素显示流程图;
图14是本申请一示例性实施例提供的虚拟场景中的标记元素显示装置的结构方框图;
图15是本申请一示例性实施例提供的计算机设备的结构框图。
具体实施方式
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处所描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
虚拟场景是指用计算机生成的一种虚拟的场景环境,它能够提供一个多媒体的虚拟世界,用户可通过操作设备或操作界面对虚拟场景中可操作的虚拟对象进行控制,以虚拟对象的视角观察虚拟场景中的物体、人物、风景等虚拟物,或通过虚拟对象和虚拟场景中的物体、人物、风景等虚拟物或者其它虚拟对象等进行互动,例如,通过操作一个虚拟士兵对目标敌军进行攻击等。
虚拟场景通常由终端等计算机设备中的应用程序生成，并基于终端中的硬件(如屏幕)进行展示。该终端可以是智能手机、平板电脑或者电子书阅读器等移动终端；或者，该终端也可以是笔记本电脑或者固定式计算机等个人计算机设备。
本申请提供的虚拟场景中的标记元素显示方法,可以应用于如图1a所示的应用环境中。其中,第一终端102、第二终端104与服务器106之间通过网络进行通信。第一终端102显示第一显示界面,在接收到标记操作时将标记操作对应的虚拟物确定为目标虚拟物,然后向服务器106发送标记请求,该标记请求中包含该目标虚拟物的标识。服务器106检测目标虚拟物距离当前虚拟对象的距离是否处于当前虚拟对象的可视距离内,当检测出目标虚拟物处于当前虚拟对象的可视距离内时,向第一终端102和/或第二终端104发送标记指示信息。第一终端102和/或第二终端104根据标记指示信息获取标记元素的图形数据,根据上述图形数据渲染得到该标记元素,然后在该虚拟场景的显示界面中,位于该目标虚拟物周围的指定位置处显示该标记元素。此外,第一终端102和/或第二终端104还获取距离信息,该距离信息用于指示该目标虚拟物与该当前虚拟对象之间的距离,在该虚拟场景的显示界面中的标记元素周围的指定位置处显示该距离信息。
其中，第二终端104可以是第一终端102对应用户的友方用户(如队友)所使用的终端。第一终端102和第二终端104可以但不限于是智能手机、平板电脑或者电子书阅读器等移动终端；或者，也可以是笔记本电脑或者固定式计算机等个人计算机设备。服务器106可以用独立的服务器或者多个服务器组成的服务器集群来实现。
请参考图1b,其示出了本申请一个示例性的实施例提供的终端的结构示意图。如图1b所示,该终端包括主板110、外部输出/输入设备120、存储器130、外部接口140、电容触控系统150以及电源160。
其中,主板110中集成有处理器和控制器等处理元件。
外部输出/输入设备120可以包括显示组件(比如显示屏)、声音播放组件(比如扬声器)、声音采集组件(比如麦克风)以及各类按键等。
存储器130中存储有程序代码和数据。
外部接口140可以包括耳机接口、充电接口以及数据接口等。
电容触控系统150可以集成在外部输出/输入设备120的显示组件或者按键中,电容触控系统150用于检测用户在显示组件或者按键上执行的触控操作。
电源160用于对终端中的其它各个部件进行供电。
在本申请实施例中,主板110中的处理器可以通过执行或者调用存储器中存储的程序代码和数据生成虚拟场景,并将生成的虚拟场景通过外部输出/输入设备120进行展示。在展示虚拟场景的过程中,可以通过电容触控系统150检测用户与虚拟场景进行交互时执行的触控操作。
其中,虚拟场景可以是三维虚拟场景,或者,虚拟场景也可以是二维虚拟场景。以虚拟场景是三维虚拟场景为例,请参考图2,其示出了本申请一个示例性的实施例提供的虚拟场景的场景画面示意图。如图2所示,虚拟场景的场景画面200包括虚拟对象210、三维虚拟场景的环境画面220、以及虚拟对象240。其中,虚拟对象210可以是终端对应用户的当前虚拟对象,而虚拟对象240可以是其它终端对应用户控制的虚拟对象,用户可以通过控制虚拟对象210与虚拟对象240进行交互,比如,控制虚拟对象210对虚拟对象240进行攻击。
在图2中,虚拟对象210与虚拟对象240是在三维虚拟场景中的三维模型,在场景画面200中显示的三维虚拟场景的环境画面为虚拟对象210的视角所观察到的物体,示例性的,如图2所示,在虚拟对象210的视角观察下,显示的三维虚拟场景的环境画面220为大地224、天空225、地平线223、小山221以及厂房222。
虚拟对象210可以在用户的控制下即时移动。比如,用户可以通过键盘、鼠标、游戏手柄等输入设备控制虚拟对象210在虚拟场景中移动(例如,以通过键盘和鼠标控制虚拟对象210移动为例,用户可以通过键盘中的W、A、S、D四个按键控制虚拟对象前后左右移动,并通过鼠标控制虚拟对象210面向的方向);或者,若终端的屏幕支持触控操作,且虚拟场景的场景画面200中包含虚拟控制按钮,则用户触控该虚拟控制按钮时,虚拟对象210可以在虚拟场景中,向触控点相对于虚拟控制按钮的中心的方向移动。
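上述通过W、A、S、D按键控制虚拟对象前后左右移动的逻辑，可以用如下Python片段示意(仅为说明性草图，按键与方向的映射、函数名均为示例假设，并非本申请的限定实现)：

```python
# 按键到水平面移动方向的示例映射：W/S 控制前后，A/D 控制左右
KEY_TO_DIRECTION = {"W": (0, 1), "S": (0, -1), "A": (-1, 0), "D": (1, 0)}

def move(position, key):
    """根据按下的按键，返回虚拟对象在水平面上移动一步后的新坐标。"""
    dx, dy = KEY_TO_DIRECTION[key]
    x, y = position
    return (x + dx, y + dy)
```

面向的方向(由鼠标控制)与位置移动相互独立，因此这里只示意位置部分。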
请参考图3，其示出了本申请一个示例性的实施例提供的虚拟场景中的标记元素显示流程的示意图。如图3所示，运行上述虚拟场景对应的应用程序的终端(比如上述图1b所示的终端)，可以通过执行以下步骤来显示虚拟场景中目标虚拟物对应的标记元素。
步骤31,获取用于指示目标虚拟物的标记指示信息,该目标虚拟物是用户账号对应的终端在虚拟场景中标记给同一队伍中的各个用户账号进行查看的虚拟物。
其中,上述用户账号可以是在虚拟场景中控制当前虚拟对象的账号和同一队伍的其它虚拟对象的账号中的一种。
在本申请实施例中,当前虚拟对象是指执行本方案的终端登录的用户账号当前控制的虚拟对象。比如,以虚拟场景是某射击类游戏场景为例,该当前虚拟对象可以是处于游戏场景中,且由当前终端对应的用户通过该终端进行控制的虚拟士兵。
上述目标虚拟物可以是虚拟场景中允许被标记的任意虚拟物。比如，上述目标虚拟物可以是虚拟场景中的场景对象，例如，该场景对象可以是一处地面、一堵墙面、一栋建筑、一块岩石或者一株树木；或者，上述目标虚拟物也可以是虚拟场景中的虚拟道具，例如，该虚拟道具可以是虚拟装备或者虚拟载具等；或者，上述目标虚拟物也可以是虚拟场景中被其他玩家控制的虚拟对象，例如，该虚拟对象可以是敌对或友方玩家控制的虚拟士兵；或者，上述目标虚拟物也可以是虚拟场景中被人工智能(Artificial Intelligence,AI)控制的虚拟物，例如，该被AI控制的虚拟物可以是虚拟场景中的非玩家控制角色(Non Player Character,NPC)或者怪物等。
步骤32,根据标记指示信息获取标记元素的图形数据。
其中,该标记元素是用于向同一队伍中的各个用户账号指示目标虚拟物在虚拟场景中的位置的图形元素。
步骤33,根据该图形数据渲染得到标记元素。
步骤34,在该虚拟场景的显示界面中,位于目标虚拟物周围的指定位置处显示该标记元素。
其中,上述虚拟场景的显示界面,可以用于显示以当前虚拟对象对应的视角方向观察虚拟场景时的画面。其中,上述视角方向可以是在虚拟环境中通过摄像机模型对虚拟对象进行观察时的方向。
可选地，摄像机模型在虚拟环境中对虚拟对象进行自动跟随，即，当虚拟对象在虚拟环境中的位置发生改变时，摄像机模型跟随虚拟对象在虚拟环境中的位置同时发生改变，且该摄像机模型在虚拟环境中始终处于虚拟对象的预设距离范围内。可选地，在自动跟随过程中，摄像机模型和虚拟对象的相对位置不发生变化。
以上述虚拟场景是三维虚拟场景为例,上述的摄像机模型是在虚拟环境中位于虚拟对象周围的三维模型,当采用第一人称视角时,该摄像机模型位于虚拟对象的头部附近或者位于虚拟对象的头部,当采用第三人称视角时,该摄像机模型可以位于虚拟对象的后方并与虚拟对象进行绑定,也可以位于与虚拟对象相距预设距离的任意位置,通过该摄像机模型可以从不同角度对位于三维虚拟环境中的虚拟对象进行观察,可选地,该第三人称视角为第一人称的过肩视角时,摄像机模型位于虚拟对象(比如虚拟人物的头肩部)的后方。比如,在图2所示的虚拟场景中,场景画面200即为以虚拟对象210的第三人称视角观察虚拟场景时的画面。可选地,该摄像机模型在三维虚拟环境中不会进行实际显示,即,在用户界面显示的三维虚拟环境中无法识别到该摄像机模型。
以该摄像机模型位于与虚拟对象相距预设距离的任意位置为例进行说明，可选地，一个虚拟对象对应一个摄像机模型，该摄像机模型可以以虚拟对象为旋转中心进行旋转，如：以虚拟对象的任意一点为旋转中心对摄像机模型进行旋转，摄像机模型在旋转过程中不仅在角度上有转动，还在位移上有偏移，旋转时摄像机模型与该旋转中心之间的距离保持不变，即，将摄像机模型在以该旋转中心作为球心的球体表面进行旋转，其中，虚拟对象的任意一点可以是虚拟对象的头部、躯干、或者虚拟对象周围的任意一点，本申请实施例对此不加以限定。可选地，摄像机模型在对虚拟对象进行观察时，该摄像机模型的视角方向为该摄像机模型所在球面的切面上的垂线指向虚拟对象的方向。
可选地,该摄像机模型还可以在虚拟对象的不同方向以预设的角度对虚拟对象进行观察。
通过图3所示的方案,控制当前虚拟对象的用户账号或者控制同一队伍的其它虚拟对象的用户账号,在虚拟场景中将一个目标虚拟物标记给同一队伍中的各个用户账号查看时,同一队伍中的各个用户账号对应的终端可以在各自展示的虚拟场景的显示界面中,在目标虚拟物周围显示该目标虚拟物的标记元素,也就是说,在本申请所示的方案中,被用户自己或者队友标记的虚拟物,其标记元素直接显示在虚拟场景的显示界面中,不需要用户打开特定的界面,使得虚拟对象的标记元素的显示更为直接,同时不影响用户在虚拟场景中的其它操作,从而提高了标记元素的显示效果。
在本申请实施例中,终端可以在上述目标虚拟物处于当前虚拟对象的可视距离内时,渲染并显示上述标记元素。
在一种可能的实现方式中,上述的当前虚拟对象的可视距离,可以是当前虚拟对象周围的,预先设置的某个距离(比如500m)。也就是说,只要目标虚拟物与当前虚拟对象之间的距离在500m以内,该目标虚拟物的标记元素即可以在终端生成的虚拟场景中显示。其中,上述预设的距离可以是开发人员或者运维人员预先设置的距离。
比如,请参考图4,其示出了本申请实施例涉及的一种标记元素的显示示意图。如图4所示,以当前虚拟对象的可视距离为500m为例,终端显示的虚拟场景40中包含有当前虚拟对象41,以及目标虚拟物42、目标虚拟物43以及目标虚拟物44,其中,目标虚拟物42和目标虚拟物43与当前虚拟对象41之间的距离小于500m,而目标虚拟物44与当前虚拟对象41之间的距离大于500m,且目标虚拟物42被房屋45遮挡,此时,终端可以对应目标虚拟物42显示标记元素42a,并对应目标虚拟物43显示标记元素43a,而对应目标虚拟物44则不显示标记元素。其中,被遮挡的目标虚拟物42的标记元素42a可以通过高亮轮廓进行显示。
或者，在另一种可能的实现方式中，上述的当前虚拟对象的可视距离，可以是当前虚拟对象周围的可观察的画面中，预先设置的某个距离(比如500m)。也就是说，当目标虚拟物与当前虚拟对象之间的距离在500m以内，且该目标虚拟物可以在显示界面中被直接观察到时，该目标虚拟物的标记元素才可以在终端生成的虚拟场景中显示。
比如,请参考图5,其示出了本申请实施例涉及的另一种标记元素的显示示意图。如图5所示,以当前虚拟对象的可视距离为可直接观察的500m距离为例,终端显示的虚拟场景50中包含有当前虚拟对象51,以及目标虚拟物52、目标虚拟物53以及目标虚拟物54,其中,目标虚拟物52和目标虚拟物53与当前虚拟对象51之间的距离小于500m,而目标虚拟物54与当前虚拟对象51之间的距离大于500m,且目标虚拟物52被房屋55遮挡,此时,终端可以对应目标虚拟物53显示标记元素53a,而对应目标虚拟物52和目标虚拟物54则不显示标记元素。
在一种可能的实现方式中,终端可以只对处于当前虚拟对象的可视距离内,且被标记的时长小于预设时长的虚拟物显示标记元素,当一个虚拟物被标记的时长大于某一预设时长(比如1分钟)时,终端可以在场景画面中移除该虚拟物对应的标记元素。
可选的,在本公开实施例中,终端还可以获取距离信息(该距离信息用于指示目标虚拟物与当前虚拟对象之间的距离),并在上述显示界面中的标记元素周围的指定位置处显示该距离信息。
以终端界面显示内容的角度来说,通过图3所示的方案,终端可以显示虚拟场景的显示界面,并控制虚拟对象在虚拟场景中运动,如移动和转动中的至少一种;当显示界面中存在目标虚拟物时,该终端在显示界面中,位于目标虚拟物周围的指定位置处显示标记元素。可选的,终端还可以在显示界面中的标记元素周围的指定位置处显示距离信息。
上述图3所示的方案中，目标虚拟物可以是控制当前虚拟对象的用户账号对应的用户标记的虚拟对象，或者，上述目标虚拟物也可以是由控制同一队伍中的其它虚拟对象的用户账号对应的用户(即队友)在虚拟场景中标记的虚拟对象。也就是说，当用户处于组队模式下时，用户在虚拟场景中标记的虚拟物可以共享给队友的终端进行标记元素的显示。
在一种可能的实现方式中,某个用户通过终端中标记了虚拟物之后,可以由服务器将标记的情况同步给同一队伍中的其他用户账号对应的终端。服务器在同步标记时,可以接收包含目标虚拟物的标识的标记请求,并确定同一队伍中的各个用户账号对应终端中的目标终端,该目标虚拟物处于目标终端对应的用户账号控制的虚拟对象的可视距离内;之后,服务器向目标终端发送标记指示信息,以指示目标终端获取标记元素的图形数据,根据图形数据渲染得到标记元素,并在虚拟场景的显示界面中,位于目标虚拟物周围的指定位置处显示该标记元素。
请参考图6,其示出了本申请一个示例性的实施例提供的一种虚拟场景中的标记元素显示方法的流程图,以三维虚拟场景中,在组队模式下,一个用户标记的虚拟对象同步给同队内各个用户对应的终端进行标记元素的显示为例,如图6所示,该虚拟场景中的标记元素显示方法可以包括如下几个步骤:
步骤601,第一终端显示第一显示界面。
其中,第一显示界面可以显示以第一虚拟对象对应的视角方向观察虚拟场景时的画面。该第一虚拟对象是第一终端对应的当前虚拟对象。
步骤602,第一终端接收到标记操作时,将标记操作对应的虚拟物确定为目标虚拟物。
在一种可能的实现方式中,第一终端可以在第一显示界面中显示准星图标,该准星图标用于指示第一终端对应的当前虚拟对象正对的方向;在接收到标记操作时,第一终端可以将准星图标对准的虚拟物确定为该目标虚拟物。
比如，请参考图7，其示出了本申请实施例涉及的一种标记操作示意图。如图7所示，终端显示的虚拟场景70中包含有当前虚拟对象71，以及当前虚拟对象71对应的准星图标72，该准星图标72指示当前虚拟对象71正对的方向，在射击游戏场景中，该准星图标72也可以指示当前虚拟对象71持有的武器所对准的方向，当用户想要标记某个虚拟对象，比如，标记某处地面时，可以调整角色的视角，使准星图标72对准该处地面后，执行某项快捷操作(即上述标记操作)，比如按下快捷Q键后，第一终端接收到用户对该处地面的标记操作。
在一种可能的实现方式中,第一终端还可以在接收到该标记操作时,展示标记类型选择界面,该标记类型选择界面中包含至少两个标记选项,每个该标记选项对应一种标记类型;接收到在该标记类型选择界面中执行的选择操作时,确定目标标记类型,该目标标记类型是该选择操作对应的标记选项的标记类型。
在本申请实施例中,标记者还可以选择目标虚拟物的标记元素的类型,比如,可以选择目标虚拟物的标记元素的颜色以及形状等。比如,请参考图8,其示出了本申请实施例涉及的一种标记元素类型选择示意图。如图8所示,终端显示的虚拟场景80,用户执行标记操作(比如按下快捷Q键)后,终端在虚拟场景80上层叠加显示标记类型选择界面81,该标记类型选择界面81中包含若干标记选项,每个标记选项对应一种标记元素的类型,比如,在图8中,标记类型选择界面81中包含的标记选项可以包括枪形状的标记元素对应的选项、手雷形状的标记元素对应的选项以及匕首形状的标记元素对应的选项等,用户通过选择操作,比如鼠标点击、触摸点击或者快捷键(比如数字键、或者按住Q键同时通过Tab键切换)等操作选择需要设置的标记元素的类型。终端接收到用户的选择操作后,确定该选择操作对应的类型为目标虚拟物的形状类型(即上述目标标记类型)。
步骤603,第一终端向服务器发送标记请求,该标记请求中包含该目标虚拟物的标识。
第一终端接收到标记操作后,可以向服务器发送包含目标虚拟物的标识的请求,其中,该目标虚拟物的标识可以是目标虚拟物在虚拟场景中的唯一身份标识(Identification,ID),或者,该目标虚拟物的标识也可以是该目标虚拟物当前在虚拟场景中的坐标。对于该目标虚拟物的标识的具体形式,本申请实施例不做限定。
可选的,当第一终端对应的用户在标记虚拟对象时,还选择了目标虚拟物的目标标记类型,则第一终端可以向该服务器发送包含该目标虚拟物的标识,以及该目标标记类型的该标记请求。
步骤604,服务器检测目标虚拟物距离当前虚拟对象的距离是否处于当前虚拟对象的可视距离内。
其中，服务器可以检测目标虚拟物距离第一虚拟对象(即第一终端对应的当前控制对象)的距离是否处于第一虚拟对象的可视距离内，并检测目标虚拟物距离第二虚拟对象(即第二终端对应的当前控制对象)的距离是否处于第二虚拟对象的可视距离内。
在本申请实施例中，第二终端可以是第一终端对应用户的友方用户(如队友)所使用的终端。服务器在接收到第一终端发送的标记请求时，可以先获取第一虚拟对象在虚拟场景中的坐标以及目标虚拟物在虚拟场景中的坐标，并根据两者的坐标计算第一虚拟对象与目标虚拟物在虚拟场景中的距离。在计算获得第一虚拟对象与目标虚拟物在虚拟场景中的距离后，服务器进一步将该第一虚拟对象与目标虚拟物在虚拟场景中的距离与第一虚拟对象的可视距离进行比较，以检测目标虚拟物距离第一虚拟对象的距离是否处于第一虚拟对象的可视距离内。
同时,服务器可以检测是否有第一终端对应用户的队友当前控制的虚拟对象(即第二虚拟对象)存活,若有第二虚拟对象存活,则服务器还获取存活的第二虚拟对象在虚拟场景中的坐标,并计算第二虚拟对象与目标虚拟物在虚拟场景中的距离。在计算获得第二虚拟对象与目标虚拟物在虚拟场景中的距离后,服务器进一步将该第二虚拟对象与目标虚拟物在虚拟场景中的距离与第二虚拟对象的可视距离进行比较,以检测目标虚拟物距离第二虚拟对象的距离是否处于第二虚拟对象的可视距离内。
其中，上述目标虚拟物与第一终端或者第二终端的当前虚拟对象之间的距离可以是虚拟场景中的直线距离(也可称为空间距离或者三维空间距离)。比如，请参考图9，其示出了本申请实施例涉及的一种直线距离计算示意图。如图9所示，假设当前虚拟对象在虚拟场景中的坐标为(x₁, y₁, z₁)，目标虚拟物在虚拟场景中的坐标为(x₂, y₂, z₂)，则在虚拟场景中，当前虚拟对象与目标虚拟物之间的距离可以表示为：
d = √[(x₁-x₂)² + (y₁-y₂)² + (z₁-z₂)²]
其中,上述d的单位可以为米(m)。
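上式对应的计算可以写成如下Python函数(示意实现，函数名为说明而设)：

```python
import math

def straight_line_distance(p1, p2):
    """计算虚拟场景中两个三维坐标之间的直线距离 d，单位与坐标一致(如米)。"""
    (x1, y1, z1), (x2, y2, z2) = p1, p2
    return math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2)
```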
通过上述步骤604，服务器可以确定同一队伍中的各个用户账号对应终端中的目标终端。比如，以同一队伍中的各个用户账号对应的终端包括上述第一终端和第二终端为例，当目标虚拟物处于第一终端对应的当前虚拟对象的可视距离内时，该第一终端为目标终端；当目标虚拟物处于第二终端对应的当前虚拟对象的可视距离内时，该第二终端为目标终端；若目标虚拟物既处于第一终端对应的当前虚拟对象的可视距离内，也处于第二终端对应的当前虚拟对象的可视距离内，则第一终端和第二终端都是目标终端。
步骤605,当检测出目标虚拟物处于当前虚拟对象的可视距离内时,服务器向对应的终端发送标记指示信息,第一终端和/或第二终端接收该标记指示信息。
其中,该标记指示信息用于指示该目标虚拟物。
其中,服务器可以向第一终端和第二终端中的目标终端发送标记指示信息。比如,当检测出目标虚拟物距离第一虚拟对象的距离处于第一虚拟对象的可视距离内时,服务器可以向第一终端发送第一标记指示信息;相应的,当检测出目标虚拟物距离第二虚拟对象的距离处于第二虚拟对象的可视距离内,服务器可以向第二终端发送第二标记指示信息。相应的,第一终端/第二终端(即上述目标终端)接收服务器发送的上述标记指示信息。
可选的,当第一终端向该服务器发送的标记请求中包含该目标虚拟物的目标标记类型时,服务器可以向目标终端发送包含该目标标记类型的标记指示信息。
或者,服务器也可以通过标记指示信息之外的其它指示信息向目标终端发送该目标标记类型。
可选的,服务器可以只向第二终端发送目标标记类型。也就是说,上述第二标记指示信息中包含上述目标标记类型,而上述第一标记指示信息中可以不包含上述目标标记类型。
步骤606,第一终端和/或第二终端根据标记指示信息获取标记元素的图形数据。
其中,该标记元素是用于向同一队伍中的各个用户账号指示目标虚拟物在虚拟场景中的位置的图形元素。
可选的,当标记指示信息中包含目标标记类型时,第一终端/第二终端获取到标记指示信息后,即可以根据目标标记类型获取标记元素的图形数据。
比如,若第一终端对应的用户在图8中所示的界面中选择了枪形状的标记元素对应的选项,则在此步骤中,第一终端和/或第二终端也会对应获取枪形状的标记元素。
可选的,上述标记指示信息中还可以包含对象指示信息,该对象指示信息用于指示标记目标虚拟物的终端(即上述第一终端)对应的用户账号控制的虚拟对象(即上述第一虚拟对象);第一终端/第二终端可以获取与该对象指示信息所指示的虚拟对象相对应的图形数据。
在本申请实施例中,对于同一队伍中不同的用户标记的目标虚拟物,各个终端可以显示不同的标记元素,比如,在一种可能的实现方式中,同一队伍中不同的用户标记的目标虚拟物的标记元素,可以通过不同的颜色进行区分,以便各个用户能够快速分辨出该目标虚拟物是由哪一个队友标记的。
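按标记者区分标记元素颜色的做法可以简单示意如下(配色表与函数名均为示例假设)：

```python
TEAMMATE_COLORS = ["red", "blue", "green", "yellow"]  # 示例配色表

def marker_color(team_accounts, marker_owner):
    """按标记者在队伍账号列表中的序号分配一种颜色，便于队友分辨标记来源。"""
    index = team_accounts.index(marker_owner)
    return TEAMMATE_COLORS[index % len(TEAMMATE_COLORS)]
```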
步骤607,第一终端和/或第二终端根据上述图形数据渲染得到该标记元素。
第一终端和/或第二终端可以分别在本地根据获取到的图形数据进行渲染,以获得目标虚拟物对应的标记元素。
步骤608,第一终端和/或第二终端在该虚拟场景的显示界面中,位于该目标虚拟物周围的指定位置处显示该标记元素。
在本申请实施例中,第一终端和/或第二终端渲染完成目标虚拟物对应的标记元素后,即可以在虚拟场景中,对应在目标虚拟物的位置显示该标记元素,比如,可以在目标虚拟物的上方,且贴近目标虚拟物显示该标记元素。
步骤609,第一终端和/或第二终端获取距离信息。
其中,该距离信息用于指示该目标虚拟物与该当前虚拟对象之间的距离。
在本申请实施例中，上述距离信息可以由服务器发送给第一终端和/或第二终端。比如，在上述步骤604中，服务器检测目标虚拟物是否处于第一终端和第二终端各自对应的当前虚拟对象的可视距离内时，需要计算目标虚拟物分别与第一终端和第二终端各自对应的当前虚拟对象之间的距离，在步骤605中，当服务器检测出目标虚拟物处于某一终端的当前虚拟对象的可视距离内时，服务器向对应的终端发送的标记元素显示指示中可以携带目标虚拟物与该终端对应的当前虚拟对象之间的距离信息，对应的终端可以从标记元素显示指示中获取该距离信息。
或者,在另一种可能的实现方式中,第一终端和/或第二终端获取距离信息时,也可以自行获取目标虚拟物在虚拟场景中的坐标,并在本地根据当前虚拟对象的坐标与目标虚拟物在虚拟场景中的坐标计算两者之间的距离信息。
步骤610,第一终端和/或第二终端在该虚拟场景的显示界面中的标记元素周围的指定位置处显示该距离信息。
其中,第一终端在第一显示界面中对应标记元素显示第一虚拟对象与目标虚拟物之间的距离信息;相应的,第二终端在第二显示界面中对应标记元素显示第二虚拟对象与目标虚拟物之间的距离信息。其中,第二显示界面可以显示以第二虚拟对象对应的视角方向观察虚拟场景时的画面。
在本申请实施例中，第一终端和/或第二终端可以在标记元素周围的指定位置处显示该距离信息。比如，第一终端和/或第二终端可以在目标虚拟物的标记元素周围的指定位置，以文本显示该距离信息。
其中,上述标记元素周围的指定位置,可以是紧贴标记元素的左侧、右侧、上方或者下方等等。
比如,请参考图10,其示出了本申请实施例涉及的一种距离信息显示示意图。如图10所示,终端显示的虚拟场景100中包含有当前虚拟对象1001,以及虚拟对象的标记元素1002(即图10中的倒三角形图标),紧贴标记元素1002的上方还显示有一个显示框1003,该显示框1003中显示有标记元素1002对应的虚拟对象与当前虚拟对象1001的距离的数值文本(图10中显示为85m)。
在另一种可能的实现方式中,终端对应标记元素显示距离信息时,可以在该虚拟场景的场景画面中,在该标记元素周围的指定位置处以图形的形式显示该距离信息。
比如，请参考图11，其示出了本申请实施例涉及的另一种距离信息显示示意图。如图11所示，终端显示的虚拟场景110中包含有当前虚拟对象1101，以及虚拟对象的标记元素1102(即图11中的倒三角形图标)，紧贴标记元素1102的右方还显示有一个距离指示图形1103，该距离指示图形1103由一至多条横杠组成，且该距离指示图形1103中的横杠的数量指示对应的虚拟对象与当前虚拟对象1101的距离的长短，比如，距离指示图形1103中的横杠的数量越多，表示虚拟对象与当前虚拟对象1101的距离越长。
可选的,该距离信息包括以下信息中的至少一种:该目标虚拟物与该当前虚拟对象之间的直线距离;该目标虚拟物与该当前虚拟对象之间的水平距离;以及,该目标虚拟物与该当前虚拟对象之间的高度差。
在本申请实施例中,当虚拟场景为三维虚拟场景时,终端显示的目标虚拟物与该当前虚拟对象之间的距离,可以是水平距离,也可以是三维空间距离,也可以是高度差,或者,也可以是上述三者中的任意两种或者全部三种。比如,在图10中,显示框1003中显示的距离数值可以是目标虚拟物与该当前虚拟对象之间的水平距离,或者,也可以是目标虚拟物与该当前虚拟对象之间的三维空间距离。
或者,请参考图12,其示出了本申请实施例涉及的又一种距离信息显示示意图。以终端显示的目标虚拟物与该当前虚拟对象之间的距离是水平距离和高度差为例,如图12所示,终端显示的虚拟场景120中包含有当前虚拟对象1201,以及虚拟对象的标记元素1202(即图12中的倒三角形图标),紧贴标记元素1202的上方还显示有一个显示框1203,该显示框1203中显示有标记元素1202对应的虚拟对象与当前虚拟对象1201的水平距离的数值文本1203a(图12中显示为85m),以及标记元素1202对应的虚拟对象与当前虚拟对象1201的高度差的数值文本1203b(图12中显示为+5m,+号表示标记元素1202对应的虚拟对象的高度高于当前虚拟对象1201的高度,相应的,如果标记元素1202对应的虚拟对象的高度低于当前虚拟对象1201的高度,则数值文本1203b的符号可以为-号)。
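图12所示的水平距离文本与带符号的高度差文本，可以按如下方式生成(示意草图，函数名为说明而设)：

```python
def format_distance_info(horizontal_m, height_diff_m):
    """生成水平距离文本与带符号的高度差文本，如 ("85m", "+5m")。

    高度差为正表示目标虚拟物高于当前虚拟对象，为负表示低于当前虚拟对象。"""
    sign = "+" if height_diff_m >= 0 else "-"
    return f"{round(horizontal_m)}m", f"{sign}{abs(round(height_diff_m))}m"
```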
可选的，在本申请实施例中，显示界面中还包括虚拟场景的缩略地图；第一终端和/或第二终端还对应该目标虚拟物在该虚拟场景的缩略地图中的位置，在该缩略地图中显示该目标虚拟物的标记图标。
比如,在图10中,终端显示的虚拟场景100中还包含有缩略地图1004,该缩略地图1004中包含当前虚拟对象1001对应的图标1004a,以及目标虚拟物的标记图标1004b。
可选的,在本申请实施例中,服务器可以在第一计时器的计时时长达到第一预设时长后,向之前接收了标记指示信息的第一终端和/或第二终端(即目标终端)发送标记取消信息;其中,该第一计时器是在目标虚拟物被标记的时刻启动,且计时时长为第一预设时长的计时器,第一终端和/或第二终端接收该标记取消信息后,即可以从该虚拟场景的显示界面中移除相应的标记元素。
或者,在另一种可能的实现方式中,目标终端可以在第二计时器的计时时长达到第二预设时长时,从虚拟场景的显示界面中移除该标记元素;其中,该第二计时器是目标终端开始显示该标记元素的时刻启动,且计时时长为第二预设时长的计时器。
比如,在本申请实施例中,第一终端和/或第二终端接收到服务器发送的标记指示信息,并根据标记指示信息显示标记元素后,即启动一个计时器开始计时,并在计时时长达到一定时长(比如2min)后,取消该标记元素的显示。
其中,终端启动的计时器可以由服务器通知终端启动,比如,上述标记指示信息中可以携带上述第二预设时长,终端接收到该标记指示信息并显示标记元素后,即根据该第二预设时长启动定时器。
上述第一预设时长或者第二预设时长可以是开发人员或者运维人员在系统中预先设置的时长。或者,上述第一预设时长或者第二预设时长也可以是目标虚拟物的标记者自行设置的时长。
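第二计时器的判断逻辑可以示意如下(时间单位为秒，类名与接口为示例假设)：

```python
class MarkerTimer:
    """从开始显示标记元素的时刻计时；计时时长达到预设时长后应移除标记元素。"""

    def __init__(self, start_time, preset_duration):
        self.start_time = start_time            # 开始显示标记元素的时刻
        self.preset_duration = preset_duration  # 预设时长，例如 120 秒

    def should_remove(self, now):
        """当计时时长达到预设时长时返回 True，表示应从显示界面移除标记元素。"""
        return now - self.start_time >= self.preset_duration
```

第一计时器(由服务器在目标虚拟物被标记的时刻启动)的逻辑与此相同，只是起点与触发动作不同。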
综上所述,通过本申请实施例所示的方案,在虚拟场景中,控制当前虚拟对象的用户账号的终端或者控制同一队伍的其它虚拟对象的用户账号的终端,在虚拟场景中将一个目标虚拟物标记给同一队伍中的各个用户账号查看时,同一队伍中的各个用户账号对应的终端可以在各自展示的虚拟场景的显示界面中, 在目标虚拟物周围显示该目标虚拟物的标记元素,也就是说,在本申请所示的方案中,被用户自己或者队友标记的虚拟物,其标记元素直接显示在虚拟场景的显示界面中,不需要用户打开特定的界面,使得虚拟对象的标记元素的显示更为直接,同时不影响用户在虚拟场景中的其它操作,从而提高了标记元素的显示效果。
以上述图3或图6所示的方案应用于某一游戏场景中为例，请参考图13，其示出了本申请一示例性实施例提供的标记元素显示流程图。如图13所示，在某个射击类的竞技对抗游戏场景中，对局中的任意玩家使用标记功能后，该玩家的终端向服务器发送该标记功能对应的标记对象(即目标虚拟物)的标识，服务器根据该标记对象的标识，计算标记对象与该玩家的游戏角色之间的三维距离，判断该三维距离是否处于该玩家的游戏角色的可视距离内，若是，则通知该玩家的终端，在虚拟场景的场景画面和地图用户界面(User Interface,UI)中显示标记对象的标记元素的标记图标。同时，服务器还计算标记对象与友方单位(即该玩家的队友的游戏角色)之间的三维距离，判断该三维距离是否处于友方单位的可视距离内，若是，则将该标记元素同步给友方单位，即通知友方单位对应的终端，在虚拟场景的场景画面和地图UI中显示标记对象的标记元素的标记图标。之后，服务器周期性判断该标记元素的持续时间是否不小于标记最大时间，如果判断该标记元素的持续时间不小于标记最大时间，则服务器指示各个终端取消显示该标记元素，反之，如果判断该标记元素的持续时间小于标记最大时间，则允许终端继续显示该标记元素。
应该理解的是，虽然图3、6的流程图中的各个步骤按照箭头的指示依次显示，但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明，这些步骤的执行并没有严格的顺序限制，这些步骤可以以其它的顺序执行。而且，图3、6中的至少一部分步骤可以包括多个子步骤或者多个阶段，这些子步骤或者阶段并不必然是在同一时刻执行完成，而是可以在不同的时刻执行，这些子步骤或者阶段的执行顺序也不必然是依次进行，而是可以与其它步骤或者其它步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
图14是根据一示例性实施例示出的一种虚拟场景中的标记元素显示装置的结构方框图。该虚拟场景中的标记元素显示装置可以用于终端中,以执行图3或图6对应实施例所示的方法中,由终端执行的全部或者部分步骤。该虚拟场景中的标记元素显示装置可以包括:
指示信息获取模块1401,用于获取用于指示目标虚拟物的标记指示信息,所述标记指示信息用于指示目标虚拟物;所述目标虚拟物是用户账号对应的终端在所述虚拟场景中标记给同一队伍中的各个用户账号进行查看的虚拟物,所述用户账号是在所述虚拟场景中控制当前虚拟对象或同一队伍的其它虚拟对象的账号;
图形数据获取模块1402,用于根据所述标记指示信息获取标记元素的图形数据,所述标记元素是用于向所述同一队伍中的各个用户账号指示所述目标虚拟物在所述虚拟场景中的位置的图形元素;
渲染模块1403,用于根据所述图形数据渲染得到所述标记元素;
标记元素显示模块1404,用于在所述虚拟场景的显示界面中,位于所述目标虚拟物周围的指定位置处显示所述标记元素。
可选的,所述装置还包括:
距离信息获取模块,用于获取距离信息,所述距离信息用于指示所述目标虚拟物与所述当前虚拟对象之间的距离;
距离信息显示模块,用于在所述显示界面中的所述标记元素周围的指定位置处显示所述距离信息。
可选的,所述距离信息包括以下信息中的至少一种:
所述目标虚拟物与所述当前虚拟对象之间的直线距离;
所述目标虚拟物与所述当前虚拟对象之间的水平距离;
所述目标虚拟物与所述当前虚拟对象之间的高度差。
可选的,所述装置还包括:
标记图标显示模块,用于对应所述目标虚拟物在所述缩略地图中的位置,在所述缩略地图中显示所述目标虚拟物的标记图标。
可选的,所述指示信息获取模块1401,具体用于,接收服务器在检测到所述目标虚拟物处于所述当前虚拟对象的可视距离内时发送的所述标记指示信息。
可选的,所述装置还包括:
准星图标显示模块,用于在所述显示界面中显示准星图标,所述准星图标用于指示所述当前虚拟对象正对的方向;
对象确定模块,用于当所述用户账号是在所述虚拟场景中控制当前虚拟对象的账号时,在接收到标记操作时,将所述准星图标对准的虚拟物确定为所述目标虚拟物;
请求发送模块,用于向所述服务器发送标记请求,所述标记请求中包含所述目标虚拟物的标识。
可选的,所述装置还包括:
界面展示模块,用于在接收到所述标记操作时,展示标记类型选择界面,所述标记类型选择界面中包含至少两个标记选项,每个所述标记选项对应一种标记类型;
类型确定模块,用于接收到在所述标记类型选择界面中执行的选择操作时,确定目标标记类型,所述目标标记类型是所述选择操作对应的标记选项的标记类型;
所述请求发送模块,具体用于向所述服务器发送包含所述目标虚拟物的标识,以及所述目标标记类型的所述标记请求。
可选的,所述标记指示信息中包含目标标记类型;
所述图形数据获取模块1402,具体用于根据所述目标标记类型获取所述标记元素的图形数据。
可选的,所述标记指示信息中包含对象指示信息,所述对象指示信息用于指示标记所述目标虚拟物的终端对应的用户账号控制的虚拟对象;
所述图形数据获取模块1402,具体用于获取与所述对象指示信息所指示的虚拟对象相对应的所述图形数据。
可选的,所述装置还包括:
取消信息接收模块,用于接收标记取消信息,所述标记取消信息是所述服务器在第一计时器的计时时长达到第一预设时长后发送的信息;所述第一计时器是在所述目标虚拟物被标记的时刻启动;
第一移除模块,用于从所述显示界面中移除所述标记元素。
可选的,所述装置还包括:
第二移除模块,用于在第二计时器的计时时长达到第二预设时长时,从所述虚拟场景的显示界面中移除所述标记元素;所述第二计时器是开始显示所述标记元素的时刻启动。
图15是根据一示例性实施例示出的计算机设备1500的结构框图。该计算机设备1500可以是用户终端,比如智能手机、平板电脑、MP3播放器(Moving Picture Experts Group Audio Layer III,动态影像专家压缩标准音频层面3)、MP4(Moving Picture Experts Group Audio Layer IV,动态影像专家压缩标准音频层面4)播放器、笔记本电脑或台式电脑。计算机设备1500还可能被称为用户设备、便携式终端、膝上型终端、台式终端等其他名称。
通常,计算机设备1500包括有:处理器1501和存储器1502。存储器中存储有至少一条指令、至少一段程序、代码集或指令集,至少一条指令、至少一段程序、代码集或指令集由处理器加载并执行以完成上述图3或图6对应实施例所示的虚拟场景中的标记元素显示方法的全部或者部分步骤。
其中,处理器1501可以包括一个或多个处理核心,比如4核心处理器、8核心处理器等。处理器1501可以采用DSP(Digital Signal Processing,数字信号处理)、FPGA(Field-Programmable Gate Array,现场可编程门阵列)、PLA(Programmable Logic Array,可编程逻辑阵列)中的至少一种硬件形式来实现。处理器1501也可以包括主处理器和协处理器,主处理器是用于对在唤醒状态下的数据进行处理的处理器,也称CPU(Central Processing Unit,中央处理器);协处理器是用于对在待机状态下的数据进行处理的低功耗处理器。在一些实施例中,处理器1501可以在集成有GPU(Graphics Processing Unit,图像处理器), GPU用于负责显示屏所需要显示的内容的渲染和绘制。一些实施例中,处理器1501还可以包括AI(Artificial Intelligence,人工智能)处理器,该AI处理器用于处理有关机器学习的计算操作。
存储器1502可以包括一个或多个计算机可读存储介质,该计算机可读存储介质可以是非暂态的。存储器1502还可包括高速随机存取存储器,以及非易失性存储器,比如一个或多个磁盘存储设备、闪存存储设备。在一些实施例中,存储器1502中的非暂态的计算机可读存储介质用于存储至少一个指令,该至少一个指令用于被处理器1501所执行以实现本申请中方法实施例提供的虚拟场景中的标记元素显示方法。
在一些实施例中,计算机设备1500还可选包括有:外围设备接口1503和至少一个外围设备。处理器1501、存储器1502和外围设备接口1503之间可以通过总线或信号线相连。各个外围设备可以通过总线、信号线或电路板与外围设备接口1503相连。具体地,外围设备包括:射频电路1504、触摸显示屏1505、摄像头1506、音频电路1507、定位组件1508和电源1509中的至少一种。
在一些实施例中,计算机设备1500还包括有一个或多个传感器1510。该一个或多个传感器1510包括但不限于:加速度传感器1511、陀螺仪传感器1512、压力传感器1513、指纹传感器1514、光学传感器1515以及接近传感器1516。
本领域技术人员可以理解,图15中示出的结构并不构成对计算机设备1500的限定,可以包括比图示更多或更少的组件,或者组合某些组件,或者采用不同的组件布置。
在一示例性实施例中,还提供了一种包括指令的非临时性计算机可读存储介质,例如包括至少一条指令、至少一段程序、代码集或指令集的存储器,上述至少一条指令、至少一段程序、代码集或指令集可由处理器执行以完成上述图3或图6对应实施例所示的虚拟场景中的标记元素显示方法的全部或者部分步骤。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过指令相关的硬件来完成,所述的程序可存储于一非易失性计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。其中, 本申请所提供的各实施例中所使用的对存储器、存储、数据库或其它介质的任何引用,均可包括非易失性和/或易失性存储器。非易失性存储器可包括只读存储器(ROM)、可编程ROM(PROM)、电可编程ROM(EPROM)、电可擦除可编程ROM(EEPROM)或闪存。易失性存储器可包括随机存取存储器(RAM)或者外部高速缓冲存储器。作为说明而非局限,RAM以多种形式可得,诸如静态RAM(SRAM)、动态RAM(DRAM)、同步DRAM(SDRAM)、双数据率SDRAM(DDRSDRAM)、增强型SDRAM(ESDRAM)、同步链路(Synchlink)DRAM(SLDRAM)、存储器总线(Rambus)直接RAM(RDRAM)、直接存储器总线动态RAM(DRDRAM)、以及存储器总线动态RAM(RDRAM)等。
本领域技术人员在考虑说明书及实践这里公开的发明后,将容易想到本申请的其它实施方案。本申请旨在涵盖本申请的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本申请的一般性原理并包括本申请未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本申请的真正范围和精神由下面的权利要求指出。
应当理解的是,本申请并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围进行各种修改和改变。本申请的范围仅由所附的权利要求来限制。

Claims (20)

  1. 一种虚拟场景中的标记元素显示方法,由终端执行,其特征在于,所述方法包括:
    获取用于指示目标虚拟物的标记指示信息,所述目标虚拟物是用户账号对应的终端在所述虚拟场景中标记给同一队伍中的各个用户账号进行查看的虚拟物,所述用户账号是在所述虚拟场景中控制当前虚拟对象的账号和同一队伍的其它虚拟对象的账号中的一种;
    根据所述标记指示信息获取标记元素的图形数据,所述标记元素是用于向所述同一队伍中的各个用户账号指示所述目标虚拟物在所述虚拟场景中的位置的图形元素;
    根据所述图形数据渲染得到所述标记元素;
    在所述虚拟场景的显示界面中,位于所述目标虚拟物周围的指定位置处显示所述标记元素。
  2. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    获取距离信息,所述距离信息用于指示所述目标虚拟物与所述当前虚拟对象之间的距离;
    在所述显示界面中的所述标记元素周围的指定位置处显示所述距离信息。
  3. 根据权利要求2所述的方法,其特征在于,所述距离信息包括以下信息中的至少一种:
    所述目标虚拟物与所述当前虚拟对象之间的直线距离;
    所述目标虚拟物与所述当前虚拟对象之间的水平距离;
    所述目标虚拟物与所述当前虚拟对象之间的高度差。
  4. 根据权利要求1所述的方法,其特征在于,所述显示界面中还包括所述虚拟场景的缩略地图;所述方法还包括:
    对应所述目标虚拟物在所述缩略地图中的位置,在所述缩略地图中显示所述目标虚拟物的标记图标。
  5. 根据权利要求1所述的方法,其特征在于,所述获取用于指示目标虚拟 物的标记指示信息,包括:
    接收服务器在检测到所述目标虚拟物处于所述当前虚拟对象的可视距离内时发送的所述标记指示信息。
  6. 根据权利要求5所述的方法,其特征在于,当所述用户账号是在所述虚拟场景中控制当前虚拟对象的账号时,所述方法还包括:
    在所述显示界面中显示准星图标,所述准星图标用于指示所述当前虚拟对象正对的方向;
    在接收到标记操作时,将所述准星图标对准的虚拟物确定为所述目标虚拟物;
    向所述服务器发送标记请求,所述标记请求中包含所述目标虚拟物的标识。
  7. 根据权利要求6所述的方法,其特征在于,所述方法还包括:
    在接收到所述标记操作时,展示标记类型选择界面,所述标记类型选择界面中包含至少两个标记选项,每个所述标记选项对应一种标记类型;
    接收到在所述标记类型选择界面中执行的选择操作时,确定目标标记类型,所述目标标记类型是所述选择操作对应的标记选项的标记类型;
    所述向所述服务器发送标记请求,包括:
    向所述服务器发送包含所述目标虚拟物的标识,以及所述目标标记类型的所述标记请求。
  8. 根据权利要求5所述的方法,其特征在于,所述标记指示信息中包含目标标记类型;所述根据所述标记指示信息获取所述目标虚拟物的标记元素的图形数据,包括:
    根据所述目标标记类型获取所述标记元素的图形数据。
  9. 根据权利要求5所述的方法,其特征在于,所述标记指示信息中包含对象指示信息,所述对象指示信息用于指示标记所述目标虚拟物的终端对应的用户账号控制的虚拟对象;所述根据所述标记指示信息获取所述目标虚拟物的标记元素的图形数据,包括:
    获取与所述对象指示信息所指示的虚拟对象相对应的所述图形数据。
  10. 根据权利要求1至9任一所述的方法,其特征在于,所述方法还包括:
    接收标记取消信息,所述标记取消信息是所述服务器在第一计时器的计时时长达到第一预设时长后发送的信息;所述第一计时器是在所述目标虚拟物被标记的时刻启动;
    从所述虚拟场景的显示界面中移除所述标记元素。
  11. 根据权利要求1至9任一所述的方法,其特征在于,所述方法还包括:
    在第二计时器的计时时长达到第二预设时长时,从所述虚拟场景的显示界面中移除所述标记元素;所述第二计时器是开始显示所述标记元素的时刻启动。
  12. 一种虚拟场景中的标记元素显示方法,由终端执行,其特征在于,所述方法包括:
    显示虚拟场景的显示界面,所述显示界面用于显示以当前虚拟对象的视角方向观察所述虚拟场景时的画面;
    控制所述当前虚拟对象在所述虚拟场景中运动,所述运动包括移动和转动中的至少一种;
    当所述显示界面中存在目标虚拟物时,在显示界面中,位于所述目标虚拟物周围的指定位置处显示标记元素,所述目标虚拟物是用户账号对应的终端在所述虚拟场景中标记给同一队伍中的各个用户账号进行查看的虚拟物,所述用户账号是在所述虚拟场景中控制当前虚拟对象的账号和同一队伍的其它虚拟对象的账号中的一种;所述标记元素是用于向所述同一队伍中的各个用户账号指示所述目标虚拟物在所述虚拟场景中的位置的图形元素。
  13. 根据权利要求12所述的方法,其特征在于,所述方法还包括:
    获取距离信息,所述距离信息用于指示所述目标虚拟物与所述当前虚拟对象之间的距离;
    在所述显示界面中的所述标记元素周围的指定位置处显示所述距离信息。
  14. 一种虚拟场景中的标记元素显示方法,由终端执行,其特征在于,所述方法包括:
    接收标记请求,所述标记请求中包含目标虚拟物的标识,所述目标虚拟物是 用户账号对应的终端在所述虚拟场景中标记给同一队伍中的各个用户账号进行查看的虚拟物,所述用户账号是在所述虚拟场景中控制虚拟对象的账号;
    确定所述同一队伍中的各个用户账号对应终端中的目标终端,所述目标虚拟物处于所述目标终端对应的用户账号控制的虚拟对象的可视距离内;
    向所述目标终端发送标记指示信息,所述标记指示信息用于指示所述目标终端获取标记元素的图形数据,根据所述图形数据渲染得到所述标记元素,并在所述虚拟场景的显示界面中,位于所述目标虚拟物周围的指定位置处显示所述标记元素,所述标记元素是用于向所述同一队伍中的各个用户账号指示所述目标虚拟物在所述虚拟场景中的位置的图形元素。
  15. 根据权利要求14所述的方法,其特征在于,所述接收标记请求,包括:
    接收包含所述目标虚拟物的标识以及目标标记类型的所述标记请求;
    所述向所述目标终端发送标记指示信息,包括:
    向所述目标终端发送包含所述目标标记类型的所述标记指示信息。
  16. 一种虚拟场景中的标记元素显示装置,其特征在于,所述装置包括:
    指示信息获取模块,用于获取用于指示目标虚拟物的标记指示信息,所述标记指示信息用于指示目标虚拟物;所述目标虚拟物是用户账号对应的终端在所述虚拟场景中标记给同一队伍中的各个用户账号进行查看的虚拟物,所述用户账号是在所述虚拟场景中控制当前虚拟对象或同一队伍的其它虚拟对象的账号;
    图形数据获取模块,用于根据所述标记指示信息获取标记元素的图形数据,所述标记元素是用于向所述同一队伍中的各个用户账号指示所述目标虚拟物在所述虚拟场景中的位置的图形元素;
    渲染模块,用于根据所述图形数据渲染得到所述标记元素;
    标记元素显示模块,用于在所述虚拟场景的显示界面中,位于所述目标虚拟物周围的指定位置处显示所述标记元素。
  17. 一种计算机设备,其特征在于,所述计算机设备包含处理器和存储器,所述存储器中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由所述处理器加载并执行时, 使得所述处理器执行以下步骤:获取用于指示目标虚拟物的标记指示信息,所述目标虚拟物是用户账号对应的终端在所述虚拟场景中标记给同一队伍中的各个用户账号进行查看的虚拟物,所述用户账号是在所述虚拟场景中控制当前虚拟对象或同一队伍的其它虚拟对象的账号;根据所述标记指示信息获取标记元素的图形数据,所述标记元素是用于向所述同一队伍中的各个用户账号指示所述目标虚拟物在所述虚拟场景中的位置的图形元素;根据所述图形数据渲染得到所述标记元素;在所述虚拟场景的显示界面中,位于所述目标虚拟物周围的指定位置处显示所述标记元素。
  18. 根据权利要求17所述的计算机设备,其特征在于,所述至少一条指令、所述至少一段程序、所述代码集或指令集由所述处理器加载并执行时,还使得所述处理器执行:获取距离信息,所述距离信息用于指示所述目标虚拟物与所述当前虚拟对象之间的距离;在所述显示界面中的所述标记元素周围的指定位置处显示所述距离信息。
  19. 一种计算机可读存储介质,其特征在于,所述存储介质中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由处理器加载并执行时,使得所述处理器执行:获取用于指示目标虚拟物的标记指示信息,所述目标虚拟物是用户账号对应的终端在所述虚拟场景中标记给同一队伍中的各个用户账号进行查看的虚拟物,所述用户账号是在所述虚拟场景中控制当前虚拟对象或同一队伍的其它虚拟对象的账号;根据所述标记指示信息获取标记元素的图形数据,所述标记元素是用于向所述同一队伍中的各个用户账号指示所述目标虚拟物在所述虚拟场景中的位置的图形元素;根据所述图形数据渲染得到所述标记元素;在所述虚拟场景的显示界面中,位于所述目标虚拟物周围的指定位置处显示所述标记元素。
  20. 根据权利要求19所述的计算机可读存储介质,其特征在于,所述至少一条指令、所述至少一段程序、所述代码集或指令集由处理器加载并执行时,还使得所述处理器执行:获取距离信息,所述距离信息用于指示所述目标虚拟物与所述当前虚拟对象之间的距离;在所述显示界面中的所述标记元素周围的指定 位置处显示所述距离信息。
PCT/CN2019/082200 2018-05-18 2019-04-11 虚拟场景中的标记元素显示方法、装置、计算机设备及计算机可读存储介质 WO2019218815A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/926,257 US11376501B2 (en) 2018-05-18 2020-07-10 Method and apparatus for displaying marker element in virtual scene, computer device, and computer-readable storage medium
US17/831,375 US11951395B2 (en) 2018-05-18 2022-06-02 Method and apparatus for displaying marker element in virtual scene, computer device, and computer-readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810480396.2A CN108671543A (zh) 2018-05-18 2018-05-18 Marker element display method in virtual scene, computer device and storage medium
CN201810480396.2 2018-05-18

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/926,257 Continuation US11376501B2 (en) 2018-05-18 2020-07-10 Method and apparatus for displaying marker element in virtual scene, computer device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2019218815A1 (zh) 2019-11-21

Family

ID=63806932

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/082200 WO2019218815A1 (zh) 2018-05-18 2019-04-11 Method and apparatus for displaying marker element in virtual scene, computer device, and computer-readable storage medium

Country Status (3)

Country Link
US (2) US11376501B2 (zh)
CN (1) CN108671543A (zh)
WO (1) WO2019218815A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114332417A (zh) * 2021-12-13 2022-04-12 亮风台(上海)信息科技有限公司 Method, device, storage medium and program product for multi-user scene interaction
CN115268751A (zh) * 2022-03-17 2022-11-01 绍兴埃瓦科技有限公司 Control method and device based on a virtual display plane

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108671543A (zh) 2018-05-18 2018-10-19 腾讯科技(深圳)有限公司 Marker element display method in virtual scene, computer device and storage medium
CN108744512A (zh) * 2018-06-01 2018-11-06 腾讯科技(深圳)有限公司 Information prompting method and apparatus, storage medium, and electronic apparatus
CN109847353A (zh) * 2019-03-20 2019-06-07 网易(杭州)网络有限公司 Display control method, apparatus, device and storage medium for game application
US11270367B2 (en) * 2019-04-19 2022-03-08 Apple Inc. Product comparison techniques using augmented reality
CN110075519B (zh) * 2019-05-06 2022-09-30 网易(杭州)网络有限公司 Information processing method and apparatus in virtual reality, storage medium, and electronic device
CN110124310B (zh) * 2019-05-20 2022-10-04 网易(杭州)网络有限公司 Method, apparatus and device for sharing virtual item information in a game
CN110115838B (zh) * 2019-05-30 2021-10-29 腾讯科技(深圳)有限公司 Method, apparatus, device and storage medium for generating mark information in virtual environment
CN110270098B (zh) * 2019-06-21 2023-06-23 腾讯科技(深圳)有限公司 Method, apparatus and medium for controlling a virtual object to mark a virtual item
CN110738738B (zh) * 2019-10-15 2023-03-10 腾讯科技(深圳)有限公司 Virtual object marking method, device and storage medium in three-dimensional virtual scene
CN110807826B (zh) * 2019-10-30 2023-04-07 腾讯科技(深圳)有限公司 Map display method, apparatus, device and storage medium in virtual scene
CN111013145A (zh) * 2019-12-18 2020-04-17 北京智明星通科技股份有限公司 Game object marking method, apparatus and server in team battle game
CN111589113B (zh) * 2020-04-28 2021-12-31 腾讯科技(深圳)有限公司 Virtual mark display method, apparatus, device and storage medium
CN111821691A (zh) * 2020-07-24 2020-10-27 腾讯科技(深圳)有限公司 Interface display method, apparatus, terminal and storage medium
CN111773705B (zh) * 2020-08-06 2024-06-04 网易(杭州)网络有限公司 Interaction method and apparatus in game scene
CN112099713B (zh) * 2020-09-18 2022-02-01 腾讯科技(深圳)有限公司 Virtual element display method and related apparatus
CN112380315B (zh) * 2020-12-04 2022-06-28 久瓴(江苏)数字智能科技有限公司 Digital supply chain inspection method, apparatus, storage medium and computer device
CN112612387B (zh) * 2020-12-18 2022-07-12 腾讯科技(深圳)有限公司 Method, apparatus, device and storage medium for displaying information
CN113181645A (zh) * 2021-05-28 2021-07-30 腾讯科技(成都)有限公司 Special effect display method, apparatus, electronic device and storage medium
CN113209617A (zh) * 2021-06-10 2021-08-06 腾讯科技(深圳)有限公司 Method and apparatus for marking virtual object
CN113769386A (zh) * 2021-09-17 2021-12-10 网易(杭州)网络有限公司 Display method and apparatus for virtual objects in game, and electronic terminal
US11797175B2 (en) 2021-11-04 2023-10-24 Microsoft Technology Licensing, Llc Intelligent keyboard attachment for mixed reality input
CN116983625A (zh) * 2022-09-26 2023-11-03 腾讯科技(成都)有限公司 Message display method, apparatus, device, medium and product based on social scene

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090132931A1 (en) * 2007-11-15 2009-05-21 International Business Machines Corporation Method, device and program for automatically generating reference mark in virtual shared space
US20150177950A1 (en) * 2013-12-16 2015-06-25 Tencent Technology (Shenzhen) Company Limited Method and device for adding indicative icon in interactive application
CN107029425A (zh) * 2016-02-04 2017-08-11 网易(杭州)网络有限公司 Control system, method and terminal for shooting game
CN107376339A (zh) * 2017-07-18 2017-11-24 网易(杭州)网络有限公司 Interaction method and apparatus for locking target in game
CN107694086A (zh) * 2017-10-13 2018-02-16 网易(杭州)网络有限公司 Information processing method and apparatus for game system, storage medium, and electronic device
CN108671543A (zh) * 2018-05-18 2018-10-19 腾讯科技(深圳)有限公司 Marker element display method in virtual scene, computer device and storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8277316B2 (en) * 2006-09-14 2012-10-02 Nintendo Co., Ltd. Method and apparatus for using a common pointing input to control 3D viewpoint and object targeting
JP5507893B2 (ja) * 2009-05-29 2014-05-28 株式会社バンダイナムコゲームス Program, information storage medium, and image generation system
US9251318B2 (en) * 2009-09-03 2016-02-02 International Business Machines Corporation System and method for the designation of items in a virtual universe
CN106453638B (zh) * 2016-11-24 2018-07-06 腾讯科技(深圳)有限公司 Method and system for information interaction within application service
CN107789837B (zh) * 2017-09-12 2021-05-11 网易(杭州)网络有限公司 Information processing method, apparatus, and computer-readable storage medium
US10807001B2 (en) * 2017-09-12 2020-10-20 Netease (Hangzhou) Network Co., Ltd. Information processing method, apparatus and computer readable storage medium
CN107812384B (zh) * 2017-09-12 2018-12-21 网易(杭州)网络有限公司 Information processing method, apparatus, and computer-readable storage medium
CN107899241B (zh) * 2017-11-22 2020-05-22 网易(杭州)网络有限公司 Information processing method and apparatus, storage medium, and electronic device



Also Published As

Publication number Publication date
CN108671543A (zh) 2018-10-19
US11376501B2 (en) 2022-07-05
US20220297007A1 (en) 2022-09-22
US20200338449A1 (en) 2020-10-29
US11951395B2 (en) 2024-04-09

Similar Documents

Publication Publication Date Title
WO2019218815A1 (zh) Method and apparatus for displaying marker element in virtual scene, computer device, and computer-readable storage medium
US11703993B2 (en) Method, apparatus and device for view switching of virtual environment, and storage medium
WO2019205838A1 (zh) Method for displaying distance information in virtual scene, terminal, and computer device
US11269581B2 (en) Private virtual object handling
JP2022517194A (ja) Method, apparatus, electronic device, and computer program for generating mark information in a virtual environment
WO2019205881A1 (zh) Method, apparatus, device and storage medium for displaying information in virtual environment
CN110917616B (zh) Orientation prompting method, apparatus, device and storage medium in virtual scene
US20230019749A1 (en) Object prompting method, apparatus, and device in virtual scene, and storage medium
CN108664231B (zh) Display method, apparatus, device and storage medium for 2.5-dimensional virtual environment
JP2024509064A (ja) Method and apparatus for displaying position mark, device, and computer program
US20220291791A1 (en) Method and apparatus for determining selected target, device, and storage medium
WO2022237076A1 (zh) Method and apparatus for controlling virtual object, device, and computer-readable storage medium
CN113289336A (zh) Method, apparatus, device and medium for marking items in virtual environment
KR102587645B1 (ko) System and method for precise positioning using touchscreen gestures
US11865449B2 (en) Virtual object control method, apparatus, device, and computer-readable storage medium
JP2019103815A (ja) Game program, method, and information processing device
CN113769397A (zh) Virtual object setting method, apparatus, device, medium, and program product
JP2019103616A (ja) Game program, method, and information processing device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19803848

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19803848

Country of ref document: EP

Kind code of ref document: A1