CN111760288B - Method, device, terminal and storage medium for displaying direction in virtual three-dimensional scene - Google Patents

Method, device, terminal and storage medium for displaying direction in virtual three-dimensional scene

Info

Publication number
CN111760288B
CN111760288B (Application CN202010525049.4A)
Authority
CN
China
Prior art keywords
virtual
spatial
space
orientation
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010525049.4A
Other languages
Chinese (zh)
Other versions
CN111760288A (en)
Inventor
刘同云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Shanghai Network Co ltd
Original Assignee
Netease Shanghai Network Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Shanghai Network Co ltd filed Critical Netease Shanghai Network Co ltd
Priority to CN202010525049.4A priority Critical patent/CN111760288B/en
Publication of CN111760288A publication Critical patent/CN111760288A/en
Application granted granted Critical
Publication of CN111760288B publication Critical patent/CN111760288B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50Controlling the output signals based on the game progress
    • A63F13/53Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/537Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
    • A63F13/5372Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50Controlling the output signals based on the game progress
    • A63F13/53Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/537Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
    • A63F13/5378Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for displaying an additional top view, e.g. radar screens or maps
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80Special adaptations for executing a specific game genre or game mode
    • A63F13/822Strategy games; Role-playing games

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the invention discloses a method, a device, a terminal and a storage medium for displaying a direction in a virtual three-dimensional scene. The embodiment can acquire the spatial three-dimensional position of a target object relative to the virtual character controlled by the user in the virtual three-dimensional scene, where the spatial three-dimensional position may include a horizontal position and a vertical position; display a first spatial orientation indicator, where the first spatial orientation indicator may include a first spatial orientation identifier that can be used to indicate at least the horizontal position; and, when a specified trigger instruction is detected, display a second spatial orientation identifier that can be used to indicate at least the vertical position. The first and second spatial orientation identifiers provided by the embodiment of the invention can accurately mark the three-dimensional orientation of the target object relative to the virtual character, and can therefore improve the accuracy of the orientation display method in the virtual three-dimensional scene.

Description

Method, device, terminal and storage medium for displaying direction in virtual three-dimensional scene
Technical Field
The present invention relates to the field of image processing, and in particular, to a method, an apparatus, a terminal, and a storage medium for displaying a direction in a virtual three-dimensional scene.
Background
The User Interface (UI) is the medium for human-computer interaction and information exchange between a system and its user: the UI presents system information in a form people can understand, so that the user can conveniently and effectively operate the computer and achieve bidirectional human-computer interaction.
In virtual electronic games and simulation scenarios, the UI often needs to mark the location of a virtual object in order to guide the user to it. For example, referring to FIG. 1a, in a role-playing electronic game a player may open a map UI and determine the position of a particular game object relative to the player from the player's current position icon and the object's position icon on the map UI.
However, because current display screens present the virtual three-dimensional scene as two-dimensional image frames, part of the three-dimensional position information is often lost when the UI displays the three-dimensional position of a virtual object, making the displayed orientation inaccurate. Current methods for indicating the orientation of a target object in the picture of a virtual three-dimensional scene therefore have low accuracy.
Disclosure of Invention
The embodiment of the invention provides a method, a device, a terminal and a storage medium for displaying a direction in a virtual three-dimensional scene, which can improve the accuracy of the direction display method in the virtual three-dimensional scene.
The embodiment of the invention provides a method for displaying a direction in a virtual three-dimensional scene, which comprises the following steps:
acquiring a spatial three-dimensional position of a target object in a virtual three-dimensional scene relative to a virtual character controlled by a user, wherein the spatial three-dimensional position comprises a horizontal position and a vertical position;
displaying a first spatial orientation indicator, the first spatial orientation indicator comprising a first spatial orientation identification, the first spatial orientation identification being used at least to indicate the horizontal position;
and when a specified trigger instruction is detected, displaying a second spatial orientation identifier, wherein the second spatial orientation identifier is at least used to indicate the vertical position.

The embodiment of the invention also provides a direction display device in a virtual three-dimensional scene, which comprises:
an acquisition unit, configured to acquire the spatial three-dimensional position of a target object in a virtual three-dimensional scene relative to a virtual character controlled by a user, wherein the spatial three-dimensional position comprises a horizontal position and a vertical position;
a first display unit configured to display a first spatial orientation indicator, the first spatial orientation indicator including a first spatial orientation identifier, the first spatial orientation identifier being at least used to indicate the horizontal position;
and a second display unit, configured to display a second spatial orientation identifier when a specified trigger instruction is detected, the second spatial orientation identifier being at least used to indicate the vertical position.

In some embodiments, the second display unit includes a second indicator subunit for displaying a second spatial orientation indicator including a second spatial orientation identifier.
In some embodiments, the spatial three-dimensional position of the target object in the virtual three-dimensional scene relative to the virtual character manipulated by the user comprises a relative spatial vector between the target object and the virtual character, the second indicator subunit comprising:
the mapping sub-module is used for mapping the relative space vector into a preset space coordinate system to obtain a space coordinate corresponding to the end point of the relative space vector in the preset space coordinate system;
the generation sub-module is used for generating a space orientation identifier corresponding to the target object on a preset second space orientation indicator based on the space coordinates to obtain a second space orientation indicator marking the space orientation of the target object;
The acquisition unit includes:
the three-dimensional position sub-module is used for acquiring the spatial three-dimensional position of the target object in the virtual three-dimensional scene and the spatial three-dimensional position of the virtual character controlled by the user in the virtual three-dimensional scene, wherein the spatial three-dimensional position comprises a horizontal position and a vertical position;
and the vector sub-module is used for determining a relative space vector between the target object and the virtual character according to the space three-dimensional position of the target object in the virtual three-dimensional scene and the space three-dimensional position of the virtual character controlled by the user in the virtual three-dimensional scene.
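The patent gives no code for the mapping sub-module; the following Python sketch (all names and the choice of angles as indicator coordinates are this editor's assumptions, not the patent's) illustrates one way a relative space vector could be mapped into a preset spatial coordinate system to place the second spatial orientation identifier:

```python
import math

def second_indicator_coords(rel):
    """Map a relative space vector (dx, dy, dz) into a preset spatial
    coordinate system as (azimuth, elevation) angles in degrees.
    Using angles as the indicator coordinates is an assumption."""
    dx, dy, dz = rel
    horizontal = math.hypot(dx, dy)                       # length of the horizontal component
    azimuth = math.degrees(math.atan2(dx, dy))            # 0 deg = +y ("ahead"), clockwise
    elevation = math.degrees(math.atan2(dz, horizontal))  # > 0: target above the character
    return azimuth, elevation

az, el = second_indicator_coords((3.0, 4.0, 5.0))
print(round(az, 1), round(el, 1))  # 36.9 45.0
```

A positive elevation would place the identifier in the "above" region of the second indicator, a negative one in the "below" region.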
In some embodiments, the three-dimensional location sub-module is further to:
determining a scene area name corresponding to the spatial three-dimensional position of the target object in the virtual three-dimensional scene according to the spatial three-dimensional position of the target object in the virtual three-dimensional scene;
and displaying the scene area name.
In some embodiments, the second display unit is configured to:
displaying an azimuth scale;
acquiring the visual field orientation of the virtual character controlled by the user;
and controlling the second spatial orientation indicator to be displayed on the orientation scale according to the visual field orientation and the relative horizontal vector between the target object and the virtual character, so that the second spatial orientation indicator indicates the horizontal position of the target object relative to the control virtual character in the virtual three-dimensional scene on the orientation scale.
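As an illustrative sketch of the control step above (Python; the function name, the pixel-based scale, and the `degrees_per_px` parameter are hypothetical, not from the patent), the indicator's offset on an azimuth scale centred on the character's view orientation might be computed from the signed angle between the view direction and the relative horizontal vector:

```python
import math

def scale_offset(view_dir, rel_h, degrees_per_px=0.5):
    """Horizontal offset (in pixels) of the orientation identifier on an
    azimuth scale centred on the character's view direction.
    degrees_per_px is an illustrative scale factor."""
    vx, vy = view_dir
    dx, dy = rel_h
    view_deg = math.degrees(math.atan2(vx, vy))      # bearing of the view direction
    target_deg = math.degrees(math.atan2(dx, dy))    # bearing of the target
    # Signed difference wrapped into (-180, 180] so the marker takes the short way round.
    delta = (target_deg - view_deg + 180.0) % 360.0 - 180.0
    return delta / degrees_per_px

# Character looks along +y; target is due right, i.e. 90 degrees clockwise.
print(scale_offset((0.0, 1.0), (5.0, 0.0)))  # 180.0
```

As the character turns, `view_dir` changes and the identifier slides along the scale accordingly.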
In some embodiments, the second spatial orientation indicator comprises a plurality of second spatial indicator regions, the second display unit further configured to:
determining a second spatial indicator region to which the second spatial orientation indication identification belongs on the second spatial orientation indicator as a visual enhancement region;
modifying color parameters of the visual enhancement region.
In some embodiments, the first spatial orientation indicator comprises an orientation wheel;
the first display unit is used for determining the position of the first space azimuth mark in the azimuth wheel disc according to the horizontal position.
In some embodiments, the horizontal position of the target object in the virtual three-dimensional scene relative to the virtual character manipulated by the user comprises a relative horizontal vector, and the first display unit is configured to:
mapping the relative horizontal vector into a preset horizontal coordinate system to obtain a horizontal coordinate corresponding to an end point of the relative horizontal vector in the preset horizontal coordinate system;
determining a location of the first spatial azimuth identification in the azimuth wheel based on the horizontal coordinate.
In some embodiments, the first spatial orientation indicator includes a first spatial orientation reference identifier and a second spatial orientation reference identifier, which are used to indicate the horizontal orientation of the target object and provide an orientation reference for the user.
In some embodiments, the first spatial orientation indicator comprises a plurality of first spatial indicator regions, the first display unit further configured to:
determining a first spatial indicator region to which the first spatial orientation indication identification belongs on the first spatial orientation indicator as a target region;
determining a preset o'clock direction parameter corresponding to the target area;
and displaying the o'clock direction parameter in the first spatial orientation indicator.
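A minimal Python sketch of the o'clock mapping described above (the function name and the convention that 12 o'clock is straight ahead are assumptions):

```python
import math

def oclock_direction(rel_h):
    """Convert a relative horizontal vector into a 1-12 o'clock direction,
    with 12 o'clock straight ahead (+y) and 3 o'clock due right."""
    dx, dy = rel_h
    bearing = math.degrees(math.atan2(dx, dy)) % 360.0  # clockwise from straight ahead
    hour = round(bearing / 30.0) % 12                   # each hour spans 30 degrees
    return 12 if hour == 0 else hour

print(oclock_direction((0.0, 1.0)))   # straight ahead -> 12
print(oclock_direction((1.0, 0.0)))   # due right -> 3
print(oclock_direction((0.0, -1.0)))  # behind -> 6
```

Each of the twelve 30-degree sectors corresponds to one first spatial indicator region, so the same computation can identify the target region.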
In some embodiments, the specified trigger instruction comprises a conditional trigger instruction, the second display unit comprising:
a relative distance subunit, configured to calculate a relative distance between the target object and the virtual character according to the spatial three-dimensional position;
and the triggering subunit is used for triggering a conditional triggering instruction to display a second space orientation mark when the relative distance belongs to a preset distance range.
In some embodiments, the trigger subunit is further configured to:
and when the relative distance does not belong to the preset distance range, canceling to display the second space orientation mark.
In some embodiments, the triggering subunit is further configured to display a preset distance prompt identifier.
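The conditional trigger described above can be sketched in Python as follows (the function name and the numeric distance bounds are hypothetical; the patent only states that a preset distance range is used):

```python
import math

def should_show_second_identifier(char_pos, target_pos, near=2.0, far=30.0):
    """Conditional trigger: show the second spatial orientation identifier
    only while the relative distance lies in a preset range.
    near/far bounds are illustrative assumptions."""
    dist = math.dist(char_pos, target_pos)  # Euclidean distance in 3D
    return near <= dist <= far

print(should_show_second_identifier((0, 0, 0), (3, 4, 0)))   # distance 5 -> True
print(should_show_second_identifier((0, 0, 0), (40, 0, 0)))  # distance 40 -> False
```

When the function flips from True to False, the display would be cancelled, matching the behaviour described for the triggering subunit.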
In some embodiments, the specified trigger instruction includes an operation trigger instruction, and the second display unit includes a second identification subunit configured to display a second spatial orientation identification when detecting the operation trigger instruction triggered by the user for the first spatial orientation indicator.
In some embodiments, the first spatial orientation indicator comprises an object identification of the target object, and the second identification subunit is configured to display a second spatial orientation identification when detecting an operation trigger instruction triggered by a user for the object identification.
In some embodiments, the second identification subunit is further configured to cancel displaying the second spatial orientation indicator when detecting a spatial orientation hiding instruction triggered by the user for the first spatial orientation indicator.
The embodiment of the invention also provides a terminal, which comprises a processor and a memory, wherein the memory stores a plurality of instructions; the processor loads the instructions from the memory to execute the steps in any of the methods for displaying directions in a virtual three-dimensional scene provided by the embodiment of the invention.
The embodiment of the invention also provides a computer-readable storage medium, which stores a plurality of instructions suitable for being loaded by a processor to execute the steps in any of the methods for displaying the direction in a virtual three-dimensional scene provided by the embodiment of the invention.
Because current display devices map three-dimensional objects in the virtual three-dimensional scene onto a two-dimensional screen, presenting the picture of the virtual three-dimensional scene to the user as a two-dimensional image, they often lose position information in certain dimensions when displaying orientation in the virtual three-dimensional scene; the displayed orientation is therefore inaccurate, and the display is neither concise nor intuitive.
For example, as shown in fig. 1a, in some three-dimensional role-playing electronic games a player can only see the player's own position, the player's own orientation, and the position of a specific game item on the game map. To find that item, the player must rotate the view angle left and right until the player's character identifier on the map points at the item's identifier, and then move forward so that the distance between the two identifiers keeps shrinking, until the player finds the item in the virtual three-dimensional scene under the map's guidance.
However, when the player is too far from a specific game prop, or the map is too large, the map UI may cover too much of the game screen, so the direction indication is not concise and the user experience suffers. In addition, the game map can only show the positions of the player and the specific game prop on the horizontal plane; when the prop is above, below, or diagonally above or below the player character, the map cannot accurately convey the prop's vertical-dimension position information to the player.
Referring to fig. 1b, the embodiment of the present invention may acquire a spatial three-dimensional position of a target object in a virtual three-dimensional scene relative to a virtual character manipulated by a user, where the spatial three-dimensional position includes a horizontal position and a vertical position; display a first spatial orientation indicator, the first spatial orientation indicator comprising a first spatial orientation identifier at least used to indicate the horizontal position; and, when a specified trigger instruction is detected, display a second spatial orientation identifier at least used to indicate the vertical position.
According to the embodiment of the invention, the horizontal positional relation between the user-controlled virtual character and the target object is mapped into the first spatial orientation identifier, and the vertical positional relation between them is mapped into the second spatial orientation identifier, so that the two identifiers point in different directions as the relative spatial position between the character and the target object changes, thereby marking the three-dimensional spatial orientation of the target object.
Compared with the current azimuth display method, the method and the device can display the relative horizontal position between the target object and the virtual character and the relative vertical position between the target object and the virtual character, so that the relative three-dimensional azimuth between the target object and the virtual character can be accurately and simply displayed in the virtual three-dimensional scene, and the accuracy of the azimuth display method in the virtual three-dimensional scene can be improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1a is a schematic diagram of a map UI provided by an embodiment of the invention;
fig. 1b is a schematic view of a method for displaying an azimuth in a virtual three-dimensional scene according to an embodiment of the present invention;
FIG. 1c is a schematic flow chart of a method for displaying an azimuth in a virtual three-dimensional scene according to an embodiment of the present invention;
FIG. 1d is a schematic diagram of a second spatial orientation indicator of a method for displaying orientation in a virtual three-dimensional scene according to an embodiment of the present invention;
FIG. 1e is a schematic diagram of a first spatial orientation indicator of a method for displaying orientation in a virtual three-dimensional scene according to an embodiment of the present invention;
FIG. 1f is a schematic diagram of another first spatial orientation indicator of a method for displaying an orientation in a virtual three-dimensional scene according to an embodiment of the present invention;
fig. 1g is a schematic diagram of a first spatial orientation reference identifier of a method for displaying an orientation in a virtual three-dimensional scene according to an embodiment of the present invention;
FIG. 1h is a schematic diagram of spatial orientation identification of an orientation display method in a virtual three-dimensional scene according to an embodiment of the present invention;
FIG. 1i is a schematic diagram of an azimuth scale and a first spatial azimuth indicator of an azimuth display method in a virtual three-dimensional scene provided by an embodiment of the present invention;
fig. 1j is a schematic view showing a visual enhancement area name and a scene area name of a direction display method in a virtual three-dimensional scene according to an embodiment of the present invention;
fig. 2a is a schematic diagram of a first frame of a virtual three-dimensional scene in which the azimuth display method according to the embodiment of the present invention is applied;
fig. 2b is a schematic diagram of a second picture of an application of the azimuth display method in a virtual three-dimensional scene in the game scene according to the embodiment of the present invention;
fig. 2c is a schematic diagram of a third picture of an application of the azimuth display method in a virtual three-dimensional scene in the game scene according to the embodiment of the present invention;
fig. 2d is a schematic flow chart of a method for displaying an azimuth in a virtual three-dimensional scene, which is applied to a cloud game;
fig. 3 is a schematic structural diagram of an azimuth display device in a virtual three-dimensional scene according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to fall within the scope of the invention.
The embodiment of the invention provides a method, a device, a terminal and a storage medium for displaying directions in a virtual three-dimensional scene.
The azimuth display device in the virtual three-dimensional scene can be integrated in electronic equipment, and the electronic equipment can be a terminal, a server and other equipment. The terminal can be a mobile phone, a tablet computer, an intelligent Bluetooth device, a notebook computer, a personal computer (Personal Computer, PC) or the like; the server may be a single server or a server cluster composed of a plurality of servers.
In some embodiments, the position display device in the virtual three-dimensional scene may be integrated in a plurality of electronic devices, for example, the position display device in the virtual three-dimensional scene may be integrated in a plurality of servers, and the position display method in the virtual three-dimensional scene of the present invention is implemented by the plurality of servers.
For example, in some embodiments, the orientation display device in the virtual three-dimensional scene may be integrated in a terminal and server cluster, thereby implementing Cloud Gaming (Cloud Gaming); the server can render the picture by the method for displaying the position in the virtual three-dimensional scene, and send the rendered picture to the terminal through the network so as to play the rendered picture at the terminal, thereby reducing the consumption of computing resources of the terminal and improving the picture quality of the picture displayed by the terminal.
In some embodiments, the server may also be implemented in the form of a terminal, for example, a personal computer may be configured as a server to integrate the orientation display device in the virtual three-dimensional scene.
For example, the electronic device may be a mobile terminal that may acquire, through a network, a spatial three-dimensional position of a target object in a virtual three-dimensional scene relative to a virtual character manipulated by a user, where the spatial three-dimensional position may include a horizontal position and a vertical position; displaying a first spatial orientation indicator, the first spatial orientation indicator comprising a first spatial orientation identifier, the first spatial orientation identifier being used at least to indicate a horizontal position; and when the specified trigger instruction is detected to trigger, displaying a second space orientation mark, wherein the second space orientation mark is at least used for indicating the vertical position.
The following will describe in detail. The numbers of the following examples are not intended to limit the preferred order of the examples.
In this embodiment, a method for displaying a direction in a virtual three-dimensional scene is provided, as shown in fig. 1c, the specific flow of the method for displaying a direction in a virtual three-dimensional scene may be as follows:
101. Acquire the spatial three-dimensional position of the target object in the virtual three-dimensional scene relative to the virtual character manipulated by the user, wherein the spatial three-dimensional position comprises a horizontal position and a vertical position.
The virtual three-dimensional scene is a digitized environment scene which is fictitious in a computer, and may be composed of a plurality of virtual three-dimensional models, for example, a virtual sky box (SkyBox), a tree model, a house model, a particle model, a character model, and the like.
The virtual three-dimensional scene can be a game scene of a three-dimensional electronic game, a simulation scene of a three-dimensional simulation application, and the like.
The target object is a virtual object that a user finds in a virtual three-dimensional scene, and the target object may be a virtual object, a virtual scene location, a virtual scene area, a virtual character, etc. in the virtual three-dimensional scene.
For example, the target object may be a virtual model of a tree in the virtual three-dimensional scene, e.g., the target object may also be a "balcony" region in the virtual three-dimensional scene, e.g., the target object may also be a particular virtual persona in the virtual three-dimensional scene, etc.
The spatial three-dimensional position may include a horizontal position and a vertical position. The spatial three-dimensional position may be expressed in a three-dimensional coordinate system formed by adding a third coordinate axis (the Z-axis), following the right-hand rule, to a two-dimensional Cartesian coordinate system; for example, the coordinates of a point in the space may be expressed as (x, y, z).
Wherein horizontal position refers to a position in a certain horizontal plane of the virtual three-dimensional scene, such as a position (x, y) on horizontal plane z=0; the vertical position refers to a position in a certain vertical plane of the virtual three-dimensional scene, which is dimension information including a height, for example, a position (z) on the vertical plane x=y=0, further for example, a position (y, z) on the vertical plane x=0, and so on.
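As an illustrative sketch of the decomposition above (Python; the class and method names are this editor's assumptions, not the patent's), a spatial three-dimensional position can be split into its horizontal and vertical components:

```python
from dataclasses import dataclass

@dataclass
class SpatialPosition:
    """A point in the virtual three-dimensional scene (hypothetical names)."""
    x: float
    y: float
    z: float  # the z component carries the vertical (height) dimension

    def horizontal(self):
        """Position projected onto the horizontal plane z = 0."""
        return (self.x, self.y)

    def vertical(self):
        """Position along the vertical axis."""
        return self.z

p = SpatialPosition(3.0, 4.0, 2.5)
print(p.horizontal())  # (3.0, 4.0)
print(p.vertical())    # 2.5
```

The first spatial orientation identifier would be driven by `horizontal()`, the second by `vertical()`.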
There are various methods for acquiring the spatial three-dimensional position, for example, acquiring from a server through a network, reading in a local memory, formulating by a user input, formulating by a technician input, and the like.
For example, in some embodiments, when the virtual three-dimensional scene is a game scene of an electronic game, in order to improve user experience and game freedom, a player may select a game item in a game item list as a target object, and the terminal may send information of the target object to the server through the network, so that the server queries a spatial three-dimensional position of the target object, and returns the spatial three-dimensional position of the target object to the terminal, and the terminal calculates the spatial three-dimensional position of the target object in the virtual three-dimensional scene relative to a virtual character manipulated by the user.
In some embodiments, the relative spatial vector between the target object and the virtual character may be determined from the spatial three-dimensional position of the target object in the virtual three-dimensional scene and the spatial three-dimensional position of the virtual character manipulated by the user in the virtual three-dimensional scene.
The relative space vector may be composed of information such as a relative distance, a direction, and an angle between the target object and the virtual character.
For example, in some embodiments, the relative spatial vector B − A of the target object in the virtual three-dimensional scene relative to the virtual character manipulated by the user may be calculated from the coordinates A of the spatial three-dimensional position of the virtual character manipulated by the user in the virtual three-dimensional scene and the coordinates B of the spatial three-dimensional position of the target object in the virtual three-dimensional scene.
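As a minimal sketch of this computation (function names are illustrative, not taken from the patent), the relative spatial vector B − A is the component-wise difference of the two positions, and the relative distance is its Euclidean length:

```python
def relative_spatial_vector(character_pos, target_pos):
    """Return the vector B - A from the character at A to the target at B.

    Both positions are (x, y, z) tuples in the scene's world coordinates.
    """
    ax, ay, az = character_pos
    bx, by, bz = target_pos
    return (bx - ax, by - ay, bz - az)


def relative_distance(vector):
    """Euclidean length of a relative vector (the relative distance)."""
    return sum(c * c for c in vector) ** 0.5
```

The direction and angle information mentioned above can then be derived from this vector as needed.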
102. A first spatial orientation indicator is displayed, the first spatial orientation indicator comprising a first spatial orientation identification, the first spatial orientation identification being at least for indicating a horizontal position.
In some embodiments, to be able to indicate the horizontal position within the spatial position, a first spatial orientation identification may be displayed in the picture of the virtual three-dimensional scene, and this first spatial orientation identification may be used at least to indicate a horizontal position.
In some embodiments, the first spatial orientation identification may also be used to indicate a vertical position.
The first spatial orientation indicator may be a graphical user interface control formed by an image, text, a three-dimensional geometric body, a two-dimensional geometric image, a number, etc., for example, the appearance of the preset first spatial orientation indicator may be an image of a two-dimensional ring.
The first spatial orientation indicator is a control for indicating orientation and may be composed of a first spatial orientation reference mark, a direction wheel disc, and an object identification of the target object.
The first spatial orientation reference mark is an identification control that visually represents the horizontal orientation of the target object relative to the virtual character; the object identification of the target object may be represented in the form of characters, images, numbers, and the like.
The first spatial orientation reference mark can slide on the direction wheel disc.
For example, in some embodiments, the horizontal position of the target object in the virtual three-dimensional scene and the horizontal position of the virtual character manipulated by the user may be acquired in step 101, and step 102 may then comprise the following specific steps:
A. determining a relative horizontal vector between the target object and the virtual character according to the horizontal position of the target object in the virtual three-dimensional scene and the horizontal position of the virtual character controlled by the user in the virtual three-dimensional scene;
B. mapping the relative horizontal vector into a preset horizontal coordinate system to obtain a horizontal coordinate corresponding to the end point of the relative horizontal vector in the preset horizontal coordinate system;
C. generating a first spatial orientation reference mark corresponding to the target object on a preset first spatial orientation indicator based on the horizontal coordinate to obtain a first spatial orientation indicator marked with the horizontal orientation of the target object;
D. a first spatial orientation indicator is displayed in a picture of the virtual three-dimensional scene.
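Steps A and B above can be sketched as follows; the function name, the clamping behaviour, and the parameter values (indicator_radius, max_range) are illustrative assumptions, not taken from the patent:

```python
import math


def horizontal_mark_position(character_pos, target_pos,
                             indicator_radius=1.0, max_range=100.0):
    """Steps A-B: form the relative horizontal vector on the z = 0 plane,
    then map its end point into the indicator's own 2-D coordinate system
    (origin at the indicator centre).

    Targets beyond max_range are clamped to the indicator edge.
    """
    dx = target_pos[0] - character_pos[0]
    dy = target_pos[1] - character_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return (0.0, 0.0)  # target directly above/below the character
    scale = indicator_radius * min(dist, max_range) / (max_range * dist)
    return (dx * scale, dy * scale)
```

Step C would then draw the first spatial orientation reference mark at the returned coordinate, and step D composites the indicator into the scene picture.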
In some embodiments, the first spatial orientation indicator may be a two-dimensional geometric figure and may include a first spatial orientation reference identifier, which may be used to provide an orientation reference to the user when the first spatial orientation reference mark indicates the horizontal orientation in which the target object is located.
For example, referring to fig. 1e, the preset first spatial orientation indicator may be a two-dimensional image, and the first spatial orientation reference mark P may be a two-dimensional planar image in a droplet shape, and the first spatial orientation reference mark may indicate a horizontal orientation of the target object on the first spatial orientation indicator.
Wherein the first spatial orientation indicator may comprise a first spatial orientation reference identifier, which may be a circular image including the 3, 6, 9 and 12 o'clock directions, thereby providing an orientation reference in o'clock directions (the 3 o'clock, 6 o'clock, 9 o'clock and 12 o'clock directions).
In some embodiments, to further improve the indication effect of the horizontal direction and improve the user experience, the first spatial direction indicator may be divided into a plurality of first spatial indicator areas, each of the first spatial indicator areas corresponding to a preset o' clock direction parameter, and when the first spatial direction indicator is displayed in the screen of the virtual three-dimensional scene, the method further includes the following steps:
determining a first space indicator region to which the horizontal coordinate belongs on the first space orientation indicator as a target region;
Determining an o' clock direction parameter corresponding to the target area;
the o' clock direction parameter is displayed in the first spatial orientation indicator.
The first spatial direction indicator may be divided evenly into a plurality of first spatial indicator areas, each corresponding to a preset o'clock direction parameter. This parameter may be expressed in the form of numerals, text, symbols, etc., and is a direction expressed in terms of the 12 hour points of a clock face, i.e., an "X o'clock" direction.
For example, referring to fig. 1e, the first spatial direction indicator may be divided into 12 areas according to the clock face, the areas corresponding to 1, 2, 3, ..., 12 o'clock. In fig. 1e, the first spatial indicator area to which the first spatial orientation reference mark belongs is the 2 o'clock area, so the o'clock direction parameter "2" corresponding to the 2 o'clock area may be displayed in the first spatial direction indicator.
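The sector lookup described here might be sketched as follows; the clock-face convention (+y as 12 o'clock, +x as 3 o'clock) is an assumption, since the patent does not fix the axes:

```python
import math


def o_clock_sector(horizontal_coord):
    """Map a horizontal coordinate (x, y) to the clock sector 1..12 it
    falls in, with +y as 12 o'clock and +x as 3 o'clock.

    Each sector spans 30 degrees centred on its hour mark.
    """
    x, y = horizontal_coord
    # Clockwise angle from the 12 o'clock direction, in degrees [0, 360).
    angle = math.degrees(math.atan2(x, y)) % 360.0
    hour = int((angle + 15.0) // 30.0) % 12
    return 12 if hour == 0 else hour
```

The returned number is the o'clock direction parameter displayed in the first spatial direction indicator.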
In some embodiments, to reduce the computational resources consumed by image rendering, the first spatial orientation indicator may also be a two-dimensional image, and the first spatial orientation reference identifier may be presented as text (the "two o'clock above" text in fig. 1f) that indicates the horizontal position of the target object in the virtual three-dimensional scene relative to the virtual character manipulated by the user.
In some embodiments, in order to simplify the effect of the indication of the horizontal direction, a first spatial direction reference identifier corresponding to the target object may be generated at an edge of the preset first spatial direction indicator based on the horizontal coordinate, so as to obtain the first spatial direction indicator indicating the horizontal direction in which the target object is located.
In some embodiments, in order to represent the absolute distance between the target object and the character manipulated by the user on the horizontal plane, indicate the azimuth more accurately, and convey the relative distance between the target object and the virtual character, a first spatial azimuth reference identifier corresponding to the target object may be generated inside or outside the preset first spatial azimuth indicator based on the horizontal coordinate, so as to obtain the first spatial azimuth indicator indicating the horizontal azimuth of the target object.
For example, referring to fig. 1g, in some embodiments, the first spatial orientation indicator is a two-dimensional geometry, which may be a two-dimensional ring, and referring to the left and right portions of fig. 1g, a relative horizontal vector oP between the target object and the virtual character may be mapped onto a horizontal plane of z=0 of a preset reference coordinate system, resulting in a horizontal coordinate P' of the relative horizontal vector oP projected onto the horizontal plane of z=0.
Based on the horizontal coordinate P', a first spatial orientation reference identifier corresponding to the target object can be generated on the preset first spatial orientation indicators shown in the left part and the right part of fig. 1g, so as to obtain a first spatial orientation indicator indicating the horizontal orientation of the target object.
For example, referring to the left part of fig. 1g, the first spatial orientation reference identifier may be displayed directly at the horizontal coordinate P' on the preset first spatial orientation indicator, so as to obtain a first spatial orientation indicator indicating the horizontal orientation in which the target object is located.
Referring to the right part of fig. 1g, a ray may be drawn from a preset center point of the preset first spatial orientation indicator through the horizontal coordinate P', so that the ray intersects the edge of the preset first spatial orientation indicator at a target intersection point Q; a first spatial orientation reference mark is then displayed at the target intersection point Q, yielding a first spatial orientation indicator indicating the horizontal orientation of the target object.
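The ray-to-edge construction of point Q reduces to normalizing the vector from the centre to P' onto the ring's radius; the centre and radius values below are illustrative:

```python
import math


def edge_mark_position(horizontal_coord, center=(0.0, 0.0), radius=1.0):
    """Intersect the ray from the indicator centre through the horizontal
    coordinate P' with the indicator's circular edge, giving point Q.
    """
    px = horizontal_coord[0] - center[0]
    py = horizontal_coord[1] - center[1]
    length = math.hypot(px, py)
    if length == 0:
        return center  # degenerate: P' coincides with the centre
    return (center[0] + px / length * radius,
            center[1] + py / length * radius)
```

The first spatial orientation reference mark is then drawn at the returned point Q on the ring's edge.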
In some embodiments, to further simplify the effect of the orientation indication, and improve the user experience, the present solution provides an orientation scale on which the first spatial orientation indicator can be displayed in a moving manner.
The azimuth scale is a scale-shaped azimuth indication control in the graphical user interface and may include azimuth graduations. In some embodiments, to reduce the difficulty of understanding the azimuth scale and improve the user experience, the azimuth scale may also include textual references such as the numbers 105, 120 and 135, or the words south and southwest.
In some embodiments, because the limited screen size prevents the full 360-degree azimuth scale from being shown at once, only a portion of the azimuth scale may be displayed.
It should be noted that the azimuth scale may be in the form of a straight scale, or in the form of a compass, an arc scale, an annular scale, a spherical compass, or the like.
For example, in some embodiments, the azimuth scale may take the form of a ruler, and step 103 may include the steps of:
displaying an azimuth scale in a picture of the virtual three-dimensional scene;
acquiring the visual field orientation of the virtual character controlled by the user;
and controlling the first spatial orientation indicator to be displayed on the orientation scale according to the visual field orientation and the relative horizontal vector between the target object and the virtual character, so that the first spatial orientation indicator indicates the horizontal position of the target object relative to the control virtual character in the virtual three-dimensional scene on the orientation scale.
The visual field orientation may be replaced by a visual field range, and the visual field range may include information such as the field of view (FOV) of the virtual character manipulated by the user and the visual field orientation.
For example, referring to FIG. 1i, as the field of view orientation changes, a first spatial position indicator on the position scale may move left and right, pointing at the horizontal position of the target object in the virtual three-dimensional scene at all times on the position scale.
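The left-right movement on the azimuth scale can be sketched as a heading difference converted into a screen offset; the heading convention (clockwise from +y) and the pixels_per_degree value are illustrative assumptions:

```python
import math


def scale_offset(view_heading_deg, relative_horizontal_vector,
                 pixels_per_degree=4.0):
    """Horizontal offset (in pixels) of the first spatial orientation
    indicator from the centre of the azimuth scale; 0 means the target
    is straight ahead of the character's view.

    Headings are measured clockwise from north (+y).
    """
    dx, dy = relative_horizontal_vector
    target_heading = math.degrees(math.atan2(dx, dy)) % 360.0
    # Signed smallest angle difference, in (-180, 180].
    diff = (target_heading - view_heading_deg + 180.0) % 360.0 - 180.0
    return diff * pixels_per_degree
```

As the view orientation changes, the offset changes sign and magnitude, moving the indicator left or right along the scale.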
In some embodiments, to reduce the computation consumed by image rendering, keep the user interface simpler, and improve both the accuracy of the azimuth indication and the user experience, the first spatial azimuth indicator may be divided into an object identifier of the target object and an azimuth wheel: only the object identifier of the target object is displayed when the virtual character is far from the target object, and the azimuth wheel of the first spatial azimuth indicator is additionally displayed when the virtual character is close to the target object.
Wherein the object identification of the target object may be an identification indicating a horizontal position of the target object with respect to the virtual character manipulated by the user.
For example, in some embodiments, the step of controlling the first spatial orientation indicator to be displayed on the orientation scale according to the visual field orientation and the relative horizontal vector between the target object and the virtual character may comprise the steps of:
Calculating the relative horizontal distance between the target object and the virtual character according to the horizontal position of the target object in the virtual three-dimensional scene and the horizontal position of the virtual character controlled by the user in the virtual three-dimensional scene;
when the relative horizontal distance does not belong to the preset distance range, according to the visual field orientation and the relative horizontal vector between the target object and the virtual character, controlling the object mark of the target object to be displayed on the azimuth scale and hiding the azimuth wheel disc;
when the relative horizontal distance belongs to a preset distance range, the object identification of the target object and the azimuth wheel disc are controlled to be displayed on the azimuth scale according to the visual field orientation and the relative horizontal vector between the target object and the virtual character.
Wherein the step of controlling the first spatial orientation indicator to be displayed on the orientation scale according to the visual field orientation and the relative horizontal vector between the target object and the virtual character may further comprise the steps of:
calculating the relative spatial distance between the target object and the virtual character according to the spatial three-dimensional position of the target object in the virtual three-dimensional scene and the spatial three-dimensional position of the virtual character controlled by the user in the virtual three-dimensional scene;
when the relative spatial distance does not belong to the preset distance range, according to the visual field orientation and the relative horizontal vector between the target object and the virtual character, controlling the object mark of the target object to be displayed on the azimuth scale and hiding the azimuth wheel disc;
When the relative spatial distance belongs to a preset distance range, the object identification of the target object and the azimuth wheel disc are controlled to be displayed on the azimuth scale according to the visual field orientation and the relative horizontal vector between the target object and the virtual character.
The preset distance range can be formulated by technicians according to requirements and can also be adjusted by users according to actual use requirements; the specific numerical values of the preset distance range are not limited here.
The object identifier of the target object and the representation form and content of the azimuth wheel can likewise be set by a technician and/or a user, and are not limited here.
For example, referring to FIG. 1i, the object identifier of the target object of the first spatial orientation indicator is a diamond-like two-dimensional image, and the orientation wheel comprises a circular ring-like two-dimensional geometric image around the diamond, a first spatial orientation reference mark in the form of a water drop, and o'clock references comprising the numbers 3, 6, 9, 12, and the like. When the horizontal distance between the virtual character and the target object is large, only the object identifier of the target object of the first spatial orientation indicator may be displayed; the object identifier of the target object and the orientation wheel are not displayed together until the horizontal distance between the virtual character and the target object is less than 30 meters.
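The distance-gated display described above might be sketched as follows; the function name is illustrative, and the 30-metre threshold is taken from the example:

```python
def wheel_visible(character_pos, target_pos, threshold=30.0):
    """Show the orientation wheel only when the horizontal distance
    between the character and the target falls below the threshold;
    otherwise only the object identifier is drawn.
    """
    dx = target_pos[0] - character_pos[0]
    dy = target_pos[1] - character_pos[1]
    return (dx * dx + dy * dy) ** 0.5 < threshold
```

The same shape of check applies to the relative spatial distance variant, using the full (x, y, z) difference instead of the horizontal one.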
103. And when the specified trigger instruction is detected to trigger, displaying a second space orientation mark, wherein the second space orientation mark is at least used for indicating the vertical position.
Wherein, the specified trigger instruction may include a conditional trigger instruction and an operation trigger instruction; the condition triggering instruction is an instruction generated after a certain condition is achieved, and the operation triggering instruction is an instruction generated after a certain operation triggered by a user is detected.
For example, the conditional trigger instruction may be generated by a distance trigger between the target object and the virtual character; for example, the operation trigger instruction may be triggered by a user input through a mouse, or the like.
Wherein the second spatial orientation identifier is an identification control for indicating an orientation; in some embodiments, the second spatial orientation identifier may be included in a second spatial orientation indicator.
The second spatial orientation mark is an identification control for representing the spatial orientation of the target object relative to the virtual character in visual effect, and can be represented in the form of characters, images, three-dimensional images and the like.
For example, referring to FIG. 1d, the second spatial orientation indicator is a three-dimensional sphere and the second spatial orientation identifier is a three-dimensional image in the form of a drop of water; the second spatial orientation identifier may indicate, on the second spatial orientation indicator, the vertical orientation in which the target object is located.
In addition, in some embodiments, in order to make the user interface more concise and efficient, the second spatial orientation indicator may be divided into a plurality of spatial orientation indication parts, and the second spatial orientation indicator may control some of the spatial orientation indication parts to be displayed or hidden according to time, a position of a virtual character manipulated by a user in the virtual three-dimensional scene, an instruction issued by the user, and the like.
In some embodiments, to enhance the user experience and the effect of the orientation indication, the second spatial orientation indicator may include a second spatial orientation reference identifier, where the second spatial orientation reference identifier is a reference for providing a vertical orientation to the user when the second spatial orientation identifier indicates the spatial orientation in which the target object is located, and the second spatial orientation reference identifier may be represented in the form of text, an image, a three-dimensional image, or the like.
For example, referring to FIG. 1d, in some embodiments, the second spatial orientation indicator may be a three-dimensional sphere and the second spatial orientation indicator may include an orientation reference identifier, wherein the second spatial orientation reference identifier may be text content including east, south, west, north, up, down, for providing a vertical orientation reference to the user when the second spatial orientation identifier indicates the spatial orientation in which the target object is located.
In some embodiments, the second spatial orientation reference identification may be text content containing latitude and longitude information.
In some embodiments, to further improve the labeling effect of the spatial orientation and improve the user experience, the second spatial orientation indicator may be divided into a plurality of second spatial indicator areas. When generating, based on the spatial coordinates, a second spatial orientation identifier corresponding to the target object on the second spatial orientation indicator, the following steps may be performed in addition to obtaining the second spatial orientation indicator labeling the spatial orientation in which the target object is located:
determining a second spatial indicator region to which the spatial coordinates belong on the second spatial orientation indicator as a visual enhancement region;
the color parameters of the visual enhancement region are modified.
Converting the three-dimensional space into a two-dimensional picture visible on the screen may cause distortion and loss of dimensional information; by modifying its color and the like, the visual enhancement area can improve the image effect of the spatial orientation identifier when labeling the spatial orientation, thereby improving the user experience.
For example, referring to fig. 1d, the second spatial orientation indicator may be a three-dimensional sphere whose surface may be equally divided into 8 second spatial indicator regions: an upper northeast region, an upper northwest region, an upper southeast region, an upper southwest region, a lower northeast region, a lower northwest region, a lower southeast region, and a lower southwest region. When the spatial orientation identifier of fig. 1d is in the upper northeast region, the color of the upper northeast region is modified to be highlighted.
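The octant classification could be sketched as follows, assuming +x = east, +y = north, +z = up (an assumption; the patent does not fix the axes):

```python
def visual_enhancement_region(space_coord):
    """Classify a relative space coordinate (x, y, z) into one of the 8
    sphere octants, assuming +x = east, +y = north, +z = up.
    """
    x, y, z = space_coord
    vertical = "upper" if z >= 0 else "lower"
    ns = "north" if y >= 0 else "south"
    ew = "east" if x >= 0 else "west"
    return f"{vertical} {ns}{ew}"
```

The region returned for the current spatial coordinate is the one whose color parameters are modified for highlighting.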
This effectively solves the problem that, because converting the three-dimensional space into a two-dimensional picture visible on the screen loses X-axis information, a user may misread a spatial orientation identifier located on the upper northeast side as being located on the upper southeast side.
In addition, in some embodiments, to further improve the image effect of the spatial orientation mark when the spatial orientation is indicated, the visual enhancement region name corresponding to the visual enhancement region may also be displayed inside, outside, around, or elsewhere in the screen of the second spatial orientation indicator.
The visual enhancement area names corresponding to the visual enhancement areas may be formulated by a technician, and different visual enhancement areas may correspond to different visual enhancement area names, or may correspond to the same visual enhancement area names, for example, in the second spatial orientation indicator of fig. 1d, the visual enhancement areas corresponding to the upper northwest area and the lower northwest area are "northwest", the visual enhancement areas corresponding to the upper northeast area and the lower northeast area are "northeast", the visual enhancement areas corresponding to the upper southwest area and the lower southwest area are "southwest", and so on.
In addition, in some embodiments, to further enhance the direction indication effect, the scene area name where the target object is currently located in the virtual three-dimensional scene may also be displayed inside, outside, around, or other positions in the screen of the second spatial direction indicator.
Wherein the scene area name may correspond to a certain scene area in the virtual three-dimensional scene, and the scene area name and its relation to the scene area may be set by a technician. For example, a balcony area in the virtual three-dimensional scene may be named "balcony"; in the world coordinate system of the virtual three-dimensional scene the balcony area is a rectangular parallelepiped space area, and the 8 vertex coordinates of the rectangular parallelepiped may represent it, as in [(0, 0, 0), (2, 0, 0), (0, 1, 0), (2, 1, 0), (0, 0, 2), (2, 0, 2), (0, 1, 2), (2, 1, 2)].
When the spatial three-dimensional position of the target object in the virtual three-dimensional scene is in the balcony region, a name of the balcony region may be displayed around the second spatial orientation indicator.
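A scene-area lookup over such axis-aligned cuboid regions might look like this; the helper name and data layout are illustrative, not from the patent:

```python
def scene_area_name(position, areas):
    """Return the name of the first axis-aligned cuboid area containing
    the given (x, y, z) position, or None if no area contains it.

    `areas` maps a name to ((min_x, min_y, min_z), (max_x, max_y, max_z)).
    """
    x, y, z = position
    for name, ((x0, y0, z0), (x1, y1, z1)) in areas.items():
        if x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1:
            return name
    return None
```

For the balcony example above, `areas` would be `{"balcony": ((0, 0, 0), (2, 1, 2))}`, and the returned name is displayed around the second spatial orientation indicator.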
In some embodiments, to further improve the image effect of the second spatial orientation mark when the vertical orientation is indicated, the visual enhancement region name corresponding to the visual enhancement region may also be displayed inside, outside, around, or other positions in the screen of the second spatial orientation indicator, and the scene region name where the target object is currently located in the virtual three-dimensional scene.
For example, after the step of "determining the second spatial indicator region to which the spatial coordinates belong on the second spatial orientation indicator as the visual enhancement region", the following steps may be further performed:
determining a visual enhancement region name corresponding to the visual enhancement region;
in addition to the step of displaying the second spatial orientation indicator in the picture of the virtual three-dimensional scene, the following steps may be performed:
determining a scene area name corresponding to the spatial three-dimensional position of the target object in the virtual three-dimensional scene according to the spatial three-dimensional position of the target object in the virtual three-dimensional scene;
the visual enhancement region name and the scene region name are displayed in a second spatial orientation indicator.
For example, referring to fig. 1j, a visual enhancement region name and a scene region name may be displayed below the second spatial orientation indicator, and fig. 1j illustrates four scenarios in which a visual enhancement region name and a scene region name are displayed below the second spatial orientation indicator.
In some embodiments, particularly when displaying the second spatial orientation indicator, the display may be performed according to the following steps:
mapping the relative space vector to a preset space coordinate system to obtain a space coordinate corresponding to the end point of the relative space vector in the preset space coordinate system;
Based on the space coordinates, generating a space orientation mark corresponding to the target object on a preset second space orientation indicator template to obtain a second space orientation indicator marking the space orientation of the target object.
Wherein the relative spatial vector may be obtained in step 101 as follows:
acquiring a spatial three-dimensional position of a target object in a virtual three-dimensional scene and a spatial three-dimensional position of a virtual character controlled by a user in the virtual three-dimensional scene, wherein the spatial three-dimensional position comprises a horizontal position and a vertical position;
and determining a relative space vector between the target object and the virtual character according to the space three-dimensional position of the target object in the virtual three-dimensional scene and the space three-dimensional position of the virtual character controlled by the user in the virtual three-dimensional scene.
The spatial three-dimensional position of the target object in the virtual three-dimensional scene relative to the virtual character manipulated by the user can be calculated from the spatial three-dimensional position of the target object in the virtual three-dimensional scene and the spatial three-dimensional position of the virtual character manipulated by the user in the virtual three-dimensional scene.
In some embodiments, referring to fig. 1f, to reduce computational resources consumed by image rendering, the second spatial orientation indicator may be a two-dimensional image, and the second spatial orientation identification may be presented as text (the "two o'clock above" text in fig. 1f) that identifies the spatial three-dimensional position of the target object in the virtual three-dimensional scene relative to the virtual character manipulated by the user.
Similarly, in some embodiments, to simplify the effect of the indication of the spatial orientation, a second spatial orientation identifier corresponding to the target object may be generated on the surface of the second spatial orientation indicator based on the spatial coordinates, so as to obtain a second spatial orientation indicator indicating the vertical orientation in which the target object is located.
In some embodiments, in order to represent the absolute distance between the target object and the character manipulated by the user in space, indicate the orientation more accurately, and convey the relative distance between the target object and the virtual character, a spatial orientation identifier corresponding to the target object may be generated inside or outside the preset second spatial orientation indicator based on the spatial coordinates, so as to obtain the second spatial orientation indicator indicating the spatial orientation of the target object.
In some embodiments, the preset spatial coordinate system may be a parameter of the preset second spatial azimuth indicator, so that a point corresponding to the spatial coordinate may be directly found in the second spatial azimuth indicator, and a second spatial azimuth identifier corresponding to the target object is generated on the point, so as to obtain the second spatial azimuth indicator indicating the spatial azimuth of the target object.
For example, the location of the spatial coordinate α(B1 − A1, B2 − A2, B3 − A3) may be determined directly on the second spatial orientation indicator, and the second spatial orientation identifier generated at that location.
The second spatial orientation indicator may be a graphical user interface control formed by an image, text, a three-dimensional geometric body, a two-dimensional geometric image, a number, etc., for example, the appearance of the preset second spatial orientation indicator may be a three-dimensional spherical image.
The preset space coordinate system can be set by a technician, namely, the mapping parameters can be manually formulated, and the coordinate system can be a three-dimensional coordinate system or a two-dimensional coordinate system.
For example, in some embodiments, the preset spatial coordinate system is a three-dimensional cartesian coordinate system.
For example, in some embodiments the relative space vector B − A = (B1 − A1, B2 − A2, B3 − A3) may be mapped into a preset space coordinate system α, so as to obtain the space coordinate α(B1 − A1, B2 − A2, B3 − A3) corresponding to the end point of the relative space vector B − A in the preset space coordinate system α.
For example, referring to the left and right parts of fig. 1h, the mapping method is similar to that of the first spatial orientation indicator and will not be described again here.
In some embodiments, the second spatial orientation indicator may be displayed in a picture of the virtual three-dimensional scene.
It should be noted that the second spatial orientation indicator and the first spatial orientation indicator may be displayed simultaneously in the picture of the virtual three-dimensional scene, for example adjacently or in an overlapping manner; alternatively, only one of them may be displayed at certain times, for example only the second spatial orientation indicator may be displayed and the first hidden, or only the first displayed and the second hidden, or the display may switch between the two, and so on.
In some embodiments, to preserve the simplicity of the screen, the second spatial orientation indicator may be displayed in the picture of the virtual three-dimensional scene only when an operation trigger instruction, triggered by the user on the first spatial orientation indicator, is detected, and its display may be canceled when a spatial orientation concealment instruction, triggered by the user on the first spatial orientation indicator, is detected.
The display instruction and the hide instruction can be triggered by operations such as the user clicking, touching, selecting, or double-clicking the first spatial orientation reference mark.
For example, only the first spatial orientation reference mark may be displayed in the picture; when the user is detected clicking the first spatial orientation reference mark, a second spatial orientation indicator pops up below it; when the user clicks the first spatial orientation reference mark again, the second spatial orientation indicator is no longer displayed, and only the first spatial orientation reference mark remains in the picture.
The display instruction and the hide instruction may also be triggered by a fixed time, the character state of the virtual character controlled by the user, an applied game skill, a spatial position, or the like.
For example, in some embodiments, to keep the picture uncluttered while maintaining accurate direction indication, and thereby improve the user experience, only the first spatial orientation indicator may initially be displayed in the picture of the virtual three-dimensional scene. When the relative horizontal distance between the target object and the virtual character is detected to be close, the first spatial orientation indicator is hidden and the second spatial orientation indicator is displayed in its place; when the relative horizontal distance between the target object and the virtual character is detected to be far, the second spatial orientation indicator is hidden and the first spatial orientation indicator is displayed in its place.
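A minimal sketch of this distance-based switching logic, assuming (x, y, z) coordinates with y as the vertical axis and an illustrative 30-metre threshold (the figure used later in the embodiment); the function and parameter names are hypothetical:

```python
def select_indicator(target_pos, character_pos, near_threshold=30.0):
    """Choose which spatial orientation indicator to display.

    Returns "second" (the vertical-orientation indicator) when the target
    object is horizontally close to the virtual character, "first" otherwise.
    """
    dx = target_pos[0] - character_pos[0]
    dz = target_pos[2] - character_pos[2]  # y (index 1) is assumed vertical
    horizontal_distance = (dx * dx + dz * dz) ** 0.5
    return "second" if horizontal_distance < near_threshold else "first"
```

When the returned value changes between frames, the currently shown indicator is hidden and the other is displayed at its position.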
In some embodiments, to further streamline the orientation indication and improve the user experience, the present solution provides an azimuth scale on which the second spatial orientation indicator and/or the first spatial orientation indicator may be displayed and moved.
As another example, similarly, step 103 may include the steps of:
displaying an azimuth scale in a picture of the virtual three-dimensional scene;
acquiring the visual field orientation of the virtual character controlled by the user;
and controlling the second spatial orientation indicator to be displayed on the orientation scale according to the visual field orientation and the relative horizontal vector between the target object and the virtual character, so that the second spatial orientation indicator indicates, on the orientation scale, the horizontal position of the target object relative to the controlled virtual character in the virtual three-dimensional scene.
Wherein the step of controlling the second spatial orientation indicator to be displayed on the orientation scale according to the visual field orientation and the relative horizontal vector between the target object and the virtual character may comprise the steps of:
calculating the relative horizontal distance between the target object and the virtual character according to the horizontal position of the target object in the virtual three-dimensional scene and the horizontal position of the virtual character controlled by the user in the virtual three-dimensional scene;
and when the relative horizontal distance belongs to the preset distance range, controlling the second spatial orientation indicator to be displayed on the orientation scale according to the visual field orientation and the relative horizontal vector between the target object and the virtual character.
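The placement of an indicator on the azimuth scale from the visual field orientation and the relative horizontal vector could be sketched as follows; the field-of-view width and the scale width in pixels are assumed UI parameters, and clamping the mark to the scale's edge when the target lies outside the field of view is an illustrative design choice:

```python
import math

def scale_offset(view_dir, rel_vec, fov_degrees=90.0, scale_width=400.0):
    """Pixel offset of an orientation mark from the azimuth scale's centre.

    view_dir and rel_vec are (x, z) horizontal vectors: the view direction
    of the virtual character and the relative horizontal vector to the
    target object, respectively.
    """
    view_angle = math.atan2(view_dir[0], view_dir[1])
    target_angle = math.atan2(rel_vec[0], rel_vec[1])
    # Signed angle from the view direction to the target, wrapped to [-pi, pi]
    delta = (target_angle - view_angle + math.pi) % (2 * math.pi) - math.pi
    half_fov = math.radians(fov_degrees) / 2.0
    # Clamp so the mark sticks to the scale's edge when the target is off-screen
    delta = max(-half_fov, min(half_fov, delta))
    return delta / half_fov * (scale_width / 2.0)
```

With these assumed parameters, a target straight ahead sits at the centre of the scale, and a target 45° to the right of the view direction lands at the scale's right edge.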
In some embodiments, to attract the attention of the user and remind the user to watch the second spatial orientation indicator, the method may further include the following steps after determining the relative distance between the target object and the virtual character:
and when the relative distance is smaller than a preset distance threshold value, displaying a preset distance prompt mark in the azimuth scale.
The preset distance prompt mark can be an image, text, a particle special effect, a screen flicker, or the like, with a prompting effect.
For example, the preset distance cue identification may be a highlighted lightning-like image.
In some embodiments, the step of displaying the preset distance cue identification in the azimuth scale may include the steps of:
displaying a preset distance prompt identifier on an azimuth scale;
the distance prompt identifier is controlled to move on the azimuth scale while its position relative to the first spatial azimuth reference identifier remains unchanged.
From the above, the embodiment of the invention can obtain the spatial three-dimensional position of the target object in the virtual three-dimensional scene relative to the virtual character operated by the user, where the spatial three-dimensional position includes a horizontal position and a vertical position; display a first spatial orientation indicator that includes a first spatial orientation identifier used at least to indicate the horizontal position; and, when a specified trigger instruction is detected, display a second spatial orientation identifier used at least to indicate the vertical position.
Therefore, through this mapping approach, the first spatial orientation indicator and the second spatial orientation indicator can be displayed on the picture: the first spatial orientation identifier in the first spatial orientation indicator can concisely indicate the horizontal position of the target object in the virtual three-dimensional scene relative to the virtual character operated by the user, and the second spatial orientation identifier in the second spatial orientation indicator can concisely indicate the corresponding vertical position.
The method described in the above embodiments will be described in further detail below.
In this embodiment, a method for displaying a direction in a virtual three-dimensional scene based on a cloud game will be taken as an example, and the method in the embodiment of the present invention will be described in detail.
The player can control a game character to search for a target game object in the three-dimensional game scene of the cloud game. Referring to fig. 2a, an azimuth scale can be displayed in the game picture, and the player can control the game character at the client to move, rotate the view angle, shoot, aim, and so on. When the player controls the game character to rotate left and right, the direction of the player's view angle also rotates, and the azimuth marks on the azimuth scale, as well as the first spatial azimuth reference mark, azimuth characters, azimuth numbers, and the like of the target game object, can all move, be displayed, or be canceled from display on the azimuth scale according to the direction of the player's view angle.
When the object identifier of the first spatial orientation indicator corresponding to the target game object appears in the azimuth scale of the game picture, the player can move forward in the direction of that object identifier until the orientation wheel shown in fig. 2b appears in the azimuth scale of the game picture, together with a distance prompt identifier (a highlighted lightning image) indicating that the player character has approached the target game object.
Referring to fig. 2c, when the user clicks the first spatial orientation indicator, a second spatial orientation indicator may pop up below the first spatial orientation indicator; the second spatial orientation indicator may include a second spatial orientation reference identifier and a spatial orientation identifier, and the player may, guided by the second spatial orientation indicator, search for the target game object on a different vertical level.
When the user clicks on the first spatial orientation indicator again or the game character is too far from the target game object, the second spatial orientation indicator may be hidden and only the first spatial orientation indicator may be displayed in the user interface.
As shown in fig. 2d, a specific flow of a method for displaying a direction in a virtual three-dimensional scene based on a cloud game is as follows:
201. When receiving a movement instruction of a game character of a player, the client transmits the movement instruction to the server.
202. When the server receives the movement instruction, it modifies the current three-dimensional coordinates of the player character in the three-dimensional game scene of the cloud game according to the movement instruction.
203. The server determines three-dimensional coordinates of the player character currently in the three-dimensional game scene of the cloud game and three-dimensional coordinates of the target game object in the three-dimensional game scene of the cloud game.
204. The server determines a spatial three-dimensional position of the target game object relative to the player character based on the three-dimensional coordinates of the player character and the three-dimensional coordinates of the target game object, the spatial three-dimensional position including a horizontal position and a vertical position.
205. The server generates a first spatial orientation reference mark corresponding to the target game object on the first spatial orientation indicator based on the horizontal position, so as to obtain the first spatial orientation indicator marking the horizontal orientation of the target game object, and generates a second spatial orientation mark corresponding to the target game object on the preset second spatial orientation indicator based on the vertical position, so as to obtain the second spatial orientation indicator marking the vertical orientation of the target game object.
206. The server renders an azimuth scale in the game picture of the virtual three-dimensional scene of the cloud game, renders the first spatial orientation indicator in the azimuth scale based on the horizontal position, calculates the horizontal distance between the player character and the target game object, and, when the horizontal distance is less than 30 meters, renders the second spatial orientation indicator below the first spatial orientation indicator, so as to obtain the rendered game picture.
207. The server sends the rendered game picture to the client so that the client displays the rendered game picture.
208. When the client receives the rendered game picture, it plays the rendered game picture.
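Steps 201-208 above could be sketched on the server side as follows; the class name, method names, and coordinate conventions are hypothetical, and the 30-metre threshold comes from the example in step 206:

```python
import math

class CloudGameServer:
    """Illustrative sketch of steps 202-206; not the actual implementation."""

    NEAR_THRESHOLD = 30.0  # metres, per the example in step 206

    def __init__(self, player_pos, target_pos):
        self.player_pos = list(player_pos)
        self.target_pos = list(target_pos)

    def on_move(self, delta):
        # Step 202: apply the movement instruction received from the client
        self.player_pos = [p + d for p, d in zip(self.player_pos, delta)]

    def frame_state(self):
        # Steps 203-206: relative position, horizontal distance, indicator choice
        rel = [t - p for t, p in zip(self.target_pos, self.player_pos)]
        horizontal = math.hypot(rel[0], rel[2])  # y (index 1) assumed vertical
        return {
            "relative_vector": rel,
            "show_second_indicator": horizontal < self.NEAR_THRESHOLD,
        }

# Step 201: the client forwards a movement instruction to the server
server = CloudGameServer(player_pos=(0, 0, 0), target_pos=(40, 5, 0))
server.on_move((20, 0, 0))
state = server.frame_state()  # used in steps 206-208 to render the game picture
```

In steps 207-208 the server would then send the rendered game picture to the client for playback.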
As can be seen from the above, in the embodiment of the present invention, when receiving a movement instruction for a player's game character, the client may send the movement instruction to the server; when the server receives the movement instruction, it can modify the three-dimensional coordinates of the player character in the three-dimensional game scene of the cloud game according to the movement instruction; the server obtains the three-dimensional coordinates of the player character in the three-dimensional game scene of the cloud game and the three-dimensional coordinates of the target game object in that scene; the server determines the spatial three-dimensional position of the target game object relative to the player character according to the three-dimensional coordinates of the player character and of the target game object, where the spatial three-dimensional position includes a horizontal position and a vertical position; the server generates a first spatial orientation reference mark corresponding to the target game object on a preset first spatial orientation indicator based on the horizontal position, to obtain a first spatial orientation indicator marking the horizontal orientation of the target game object, and generates a spatial orientation mark corresponding to the target game object on a preset second spatial orientation indicator based on the vertical position, to obtain a second spatial orientation indicator marking the vertical orientation of the target game object; the server renders an azimuth scale in the game picture of the virtual three-dimensional scene of the cloud game, renders the first spatial orientation indicator in the azimuth scale based on the horizontal position, calculates the horizontal distance between the player character and the target game object, and, when the horizontal distance is less than 30 meters, renders the second spatial orientation indicator below the first spatial orientation indicator, so as to obtain the rendered game picture; the server sends the rendered game picture to the client so that the client displays it; and when the client receives the rendered game picture, it plays the rendered game picture.
The azimuth display of this solution in a three-dimensional game scene is more concise and accurate. For example, some current games indicate the direction of a target object in the game scene by displaying a direction indication model (such as a stereoscopic arrow model or a particle effect) around the character model; however, when there are too many target objects or the battle is intense, this indication method can seriously interfere with the player's vision and affect the player's judgment of the battle. As another example, some games indicate the direction of the target object on the game's user interface, but the accuracy of direction indication through the UI is low, especially for games running on small-screen terminals such as mobile phones: the size of the game screen is limited, and when a map UI is used to display the horizontal position of the target game object relative to the player character, the map UI may cover a large area of the game screen, and the height information cannot be represented intuitively and accurately.
According to the technical scheme, first spatial orientation indicators can be displayed on the UI interface to indicate the horizontal orientation of target game objects relative to the player character; when there are many target game objects, the first spatial orientation indicators corresponding to them can be arranged simply and orderly on the azimuth scale UI, improving display efficiency. In addition, for the vertical direction in the three-dimensional game scene, the second spatial orientation indicator of this scheme can be displayed to the player more accurately and concisely; for example, in some game scenes the player needs to search for game props across multiple floors, and the second spatial orientation indicator can effectively and accurately mark whether a game prop is upstairs, downstairs, or on the current floor, helping the user make a better cognitive judgment of its orientation and improving the user experience. Further, to improve the simplicity of the UI interface, in some embodiments, when the player character is not close to the target game object, the UI interface may hide the second spatial orientation indicator, which prevents the UI interface from obscuring too much of the game scene picture and interfering with the player's line of sight.
Therefore, the embodiment of the scheme can improve the accuracy of the direction display method in the virtual three-dimensional scene.
In order to better implement the method, the embodiment of the invention also provides a position display device in the virtual three-dimensional scene, which can be integrated in electronic equipment, such as computer equipment, and the computer equipment can be equipment such as a terminal, a server and the like.
The terminal can be a mobile phone, a tablet personal computer, an intelligent Bluetooth device, a notebook computer, a personal computer and other devices; the server may be a single server or a server cluster composed of a plurality of servers.
For example, in this embodiment, the method of the embodiment of the present invention will be described in detail by taking as an example the orientation display device in a virtual three-dimensional scene being specifically integrated into a smartphone.
For example, as shown in fig. 3, the orientation display device in the virtual three-dimensional scene may include an acquisition unit 301, a first display unit 302, and a second display unit 303, as follows:
(One) the acquisition unit 301.
The acquisition unit 301 may be configured to acquire a spatial three-dimensional position of the target object in the virtual three-dimensional scene with respect to the virtual character manipulated by the user, the spatial three-dimensional position including a horizontal position and a vertical position.
The acquisition unit 301 may include a three-dimensional position sub-module and a vector sub-module as follows:
(1) Three-dimensional position sub-module:
the three-dimensional position sub-module may be configured to obtain a spatial three-dimensional position of the target object in the virtual three-dimensional scene, and a spatial three-dimensional position of the virtual character manipulated by the user in the virtual three-dimensional scene, where the spatial three-dimensional position may include a horizontal position and a vertical position.
In some embodiments, the three-dimensional location submodule may also be used to:
determining a scene area name corresponding to the spatial three-dimensional position of the target object in the virtual three-dimensional scene according to the spatial three-dimensional position of the target object in the virtual three-dimensional scene;
and displaying the scene area name.
(2) The vector sub-module:
the vector sub-module may be configured to determine a relative spatial vector between the target object and the virtual character based on the spatial three-dimensional position of the target object in the virtual three-dimensional scene and the spatial three-dimensional position of the virtual character manipulated by the user in the virtual three-dimensional scene.
(Two) the first display unit 302.
The first display unit 302 may be configured to display a first spatial orientation indicator comprising a first spatial orientation identification, the first spatial orientation identification being configured to indicate at least a horizontal position.
In some embodiments, the first spatial orientation indicator may comprise an orientation wheel and the first display unit 302 may be configured to determine a position of the first spatial orientation marker in the orientation wheel based on the horizontal position.
In some embodiments, the horizontal position of the target object in the virtual three-dimensional scene relative to the virtual character manipulated by the user may include a relative horizontal vector, and the first display unit 302 may be configured to:
mapping the relative horizontal vector into a preset horizontal coordinate system to obtain a horizontal coordinate corresponding to the end point of the relative horizontal vector in the preset horizontal coordinate system;
a position of the first spatial azimuth identification in the azimuth wheel is determined based on the horizontal coordinates.
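A minimal sketch of how the first display unit might place the first spatial azimuth identification inside a circular orientation wheel from the mapped horizontal coordinate; placing the mark along the target's bearing with a range-proportional, rim-clamped radius is an assumed design, and the radius and range values are illustrative:

```python
import math

def wheel_position(horizontal_coord, wheel_radius=50.0, max_range=100.0):
    """Position of the first spatial azimuth mark inside the orientation wheel.

    horizontal_coord is the mapped (x, z) endpoint of the relative
    horizontal vector. The mark's direction from the wheel's centre
    follows the target's bearing; its distance from the centre grows
    with range and is clamped to the wheel's rim.
    """
    x, z = horizontal_coord
    distance = math.hypot(x, z)
    if distance == 0:
        return (0.0, 0.0)  # target directly at the character: centre of the wheel
    radius = wheel_radius * min(distance / max_range, 1.0)
    return (x / distance * radius, z / distance * radius)
```

A target halfway to the assumed maximum range appears halfway between the centre and the rim, and any target beyond that range pins to the rim, preserving its bearing.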
In some embodiments, the first spatial orientation indicator may include a horizontal orientation reference identifier therein, which may be used to indicate the horizontal orientation in which the target object is located, providing an orientation reference for the user.
In some embodiments, the first spatial orientation indicator may comprise a plurality of horizontal indicator areas, and the first display unit 302 may also be used to:
determining a horizontal indicator region to which the first spatial orientation indication mark belongs on the first spatial orientation indicator as a target region;
determining a preset o'clock direction parameter corresponding to the target area;
and displaying the o'clock direction parameter in the first spatial orientation indicator.
(Three) the second display unit 303.
The second display unit 303 may be configured to display a second spatial orientation identifier when the specified triggering instruction trigger is detected, where the second spatial orientation identifier may be at least used to indicate a vertical position.
In some embodiments, the second display unit 303 may be configured to:
displaying an azimuth scale;
acquiring the visual field orientation of the virtual character controlled by the user;
and controlling the second spatial orientation indicator to be displayed on the orientation scale according to the visual field orientation and the relative horizontal vector between the target object and the virtual character, so that the second spatial orientation indicator indicates, on the orientation scale, the horizontal position of the target object relative to the controlled virtual character in the virtual three-dimensional scene.
In some embodiments, the second spatial orientation indicator may comprise a plurality of second spatial indicator areas, and the second display unit 303 may further be used to:
determining a second spatial indicator region to which the second spatial orientation indication identifier belongs on the second spatial orientation indicator as a visual enhancement region;
the color parameters of the visual enhancement region are modified.
In some embodiments, the second display unit 303 may include a second indicator subunit that may be used to display a second spatial orientation indicator, which may include a second spatial orientation identification.
In some embodiments, the spatial three-dimensional position of the target object in the virtual three-dimensional scene relative to the virtual character manipulated by the user may include a relative spatial vector between the target object and the virtual character, and the second indicator subunit may include a mapping sub-module and a generating sub-module as follows:
(1) Mapping submodule:
the mapping sub-module may be configured to map the relative spatial vector to a preset spatial coordinate system, so as to obtain a spatial coordinate corresponding to an endpoint of the relative spatial vector in the preset spatial coordinate system.
(2) Generating a submodule:
the generating sub-module may be configured to generate, on the basis of the spatial coordinates, a spatial orientation identifier corresponding to the target object on a preset second spatial orientation indicator template, so as to obtain a second spatial orientation indicator that indicates a spatial orientation where the target object is located.
In some embodiments, the specified trigger instruction may include a conditional trigger instruction, and the second display unit 303 may include a relative distance subunit and a trigger subunit, as follows:
(1) Relative distance subunit:
the relative distance subunit may be configured to calculate a relative distance between the target object and the virtual character based on the spatial three-dimensional position.
(2) A trigger subunit:
the triggering subunit may be configured to trigger a conditional triggering instruction to display the second spatial orientation identifier when the relative distance belongs to a preset distance range.
In some embodiments, the trigger subunit may also be configured to:
and when the relative distance does not belong to the preset distance range, canceling to display the second space orientation mark.
In some embodiments, the trigger subunit may be further configured to display a preset distance hint identifier.
In some embodiments, the specified triggering instruction may include an operation triggering instruction, and the second display unit 303 may include a second identification subunit that may be configured to display a second spatial orientation identification when detecting the operation triggering instruction triggered by the user for the first spatial orientation indicator.
In some embodiments, the first spatial orientation indicator may comprise an object identification of the target object, and the second identification subunit may be configured to display the second spatial orientation identification when detecting an operation triggering instruction triggered by the user for the object identification.
In some embodiments, the second identification subunit may be further configured to cancel display of the second spatial orientation indicator when a spatial orientation concealment instruction triggered by the user for the first spatial orientation indicator is detected.
As can be seen from the above, the azimuth display device in the virtual three-dimensional scene of the present embodiment may obtain, by the obtaining unit 301, a spatial three-dimensional position of the target object in the virtual three-dimensional scene relative to the virtual character manipulated by the user, where the spatial three-dimensional position includes a horizontal position and a vertical position; displaying, by the first display unit 302, a first spatial orientation indicator comprising a first spatial orientation identification for indicating at least a horizontal position; when the second display unit 303 detects that the specified trigger instruction triggers, a second spatial orientation identifier is displayed, and the second spatial orientation identifier is at least used for indicating the vertical position. Therefore, the method and the device for displaying the direction in the virtual three-dimensional scene can improve accuracy of the direction display method in the virtual three-dimensional scene.
Correspondingly, the embodiment of the application also provides a computer device, which can be a terminal or a server, where the terminal can be a terminal device such as a smartphone, a tablet computer, a notebook computer, a touch screen, a game console, a personal computer, or a personal digital assistant (Personal Digital Assistant, PDA). As shown in fig. 4, fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present application. The computer device 400 includes a processor 401 having one or more processing cores, a memory 402 having one or more computer-readable storage media, and a computer program stored on the memory 402 and executable on the processor. The processor 401 is electrically connected to the memory 402. It will be appreciated by those skilled in the art that the computer device structure shown in the figure does not limit the computer device, which may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
Processor 401 is a control center of computer device 400 and connects the various portions of the entire computer device 400 using various interfaces and lines to perform various functions of computer device 400 and process data by running or loading software programs and/or modules stored in memory 402 and invoking data stored in memory 402, thereby performing overall monitoring of computer device 400.
In the embodiment of the present application, the processor 401 in the computer device 400 loads the instructions corresponding to the processes of one or more application programs into the memory 402 according to the following steps, and the processor 401 executes the application programs stored in the memory 402, so as to implement various functions:
acquiring a space three-dimensional position of a target object in a virtual three-dimensional scene and a space three-dimensional position of a virtual character controlled by a user in the virtual three-dimensional scene;
determining a relative space vector between the target object and the virtual character according to the space three-dimensional position of the target object in the virtual three-dimensional scene and the space three-dimensional position of the virtual character controlled by the user in the virtual three-dimensional scene;
mapping the relative space vector to a preset space coordinate system to obtain a space coordinate corresponding to the end point of the relative space vector in the preset space coordinate system;
generating, based on the space coordinates, a space orientation mark corresponding to the target object on a preset second space orientation indicator, to obtain a second space orientation indicator marking the space orientation of the target object;
a second spatial orientation indicator is displayed in a picture of the virtual three-dimensional scene.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
Optionally, as shown in fig. 4, the computer device 400 further includes: a touch display 403, a radio frequency circuit 404, an audio circuit 405, an input unit 406, and a power supply 407. The processor 401 is electrically connected to the touch display 403, the radio frequency circuit 404, the audio circuit 405, the input unit 406, and the power supply 407, respectively. Those skilled in the art will appreciate that the computer device structure shown in FIG. 4 is not limiting of the computer device and may include more or fewer components than shown, or may be combined with certain components, or a different arrangement of components.
The touch display 403 may be used to display a graphical user interface and receive operation instructions generated by a user acting on the graphical user interface. The touch display screen 403 may include a display panel and a touch panel. The display panel may be used to display information entered by or provided to the user, as well as various graphical user interfaces of the computer device, which may be composed of graphics, text, icons, video, and any combination thereof. Optionally, the display panel may be configured in the form of a liquid crystal display (LCD, Liquid Crystal Display), an organic light-emitting diode (OLED), or the like. The touch panel may be used to collect touch operations by the user on or near it (such as operations performed on or near the touch panel using a finger, a stylus, or any other suitable object or accessory) and generate corresponding operation instructions, according to which the corresponding programs are executed. Optionally, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends the touch point coordinates to the processor 401; it can also receive and execute commands sent from the processor 401. The touch panel may overlay the display panel; when the touch panel detects a touch operation on or near it, the operation is passed to the processor 401 to determine the type of touch event, and the processor 401 then provides a corresponding visual output on the display panel according to the type of touch event.
In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display screen 403 to implement the input and output functions. In some embodiments, however, the touch panel and the display panel may be implemented as two separate components to perform the input and output functions respectively. That is, the touch display screen 403 may also implement an input function as part of the input unit 406.
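As an illustrative sketch (not part of the patent's disclosure), the division of labor described above — the touch detection device reports a raw sensing position, and the touch controller converts it into touch point coordinates for the processor — can be modeled as a simple coordinate conversion; the function name and the sensing-grid and display resolutions below are all hypothetical:

```python
def to_touch_point(raw_row: int, raw_col: int,
                   panel_rows: int, panel_cols: int,
                   disp_w: int, disp_h: int) -> tuple[float, float]:
    """Touch-controller role: convert the raw sensing cell reported by the
    touch detection device into display (touch point) coordinates that the
    processor can act on."""
    x = raw_col / panel_cols * disp_w
    y = raw_row / panel_rows * disp_h
    return x, y

# A touch in the middle of a hypothetical 64x128 sensing grid maps to the
# center of a 1920x1080 display.
x, y = to_touch_point(raw_row=32, raw_col=64,
                      panel_rows=64, panel_cols=128,
                      disp_w=1920, disp_h=1080)
# x = 960.0, y = 540.0
```

The processor would then classify the resulting touch point stream into event types (tap, move, release) and update the display panel accordingly.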
In this embodiment of the present application, the processor 401 executes the game application program to generate a picture of the virtual three-dimensional scene on the touch display screen 403. The picture includes a graphical user interface, and the graphical user interface includes a second spatial orientation indicator on which a spatial orientation identifier corresponding to the target object is displayed; the spatial orientation identifier is used to indicate the orientation in which the target object is located.
The touch display screen 403 may be used to present the picture of the virtual three-dimensional scene and the graphical user interface, and to receive operation instructions generated by a user acting on the graphical user interface.
The radio frequency circuit 404 may be used to transmit and receive radio frequency signals, so as to establish wireless communication with a network device or another computer device.
The audio circuit 405 may be used to provide an audio interface between a user and the computer device through a speaker, a microphone, and the like. On one hand, the audio circuit 405 may convert received audio data into an electrical signal and transmit it to the speaker, which converts the electrical signal into a sound signal for output; on the other hand, the microphone converts a collected sound signal into an electrical signal, which is received by the audio circuit 405 and converted into audio data; the audio data is output to the processor 401 for processing and then sent via the radio frequency circuit 404 to, for example, another computer device, or output to the memory 402 for further processing. The audio circuit 405 may also include an earphone jack to provide communication between a peripheral earphone and the computer device.
The input unit 406 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 407 is used to power the various components of the computer device 400. Optionally, the power supply 407 may be logically connected to the processor 401 through a power management system, so that charging, discharging, and power consumption are managed through the power management system. The power supply 407 may also include one or more of a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown in fig. 4, the computer device 400 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., and will not be described herein.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
As can be seen from the above, the computer device provided in this embodiment can improve the accuracy of direction display in a virtual three-dimensional scene.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be completed by instructions, or by instructions controlling associated hardware; the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present application provide a computer readable storage medium having stored therein a plurality of computer programs that can be loaded by a processor to perform steps in any of the methods for displaying a direction in a virtual three-dimensional scene provided by embodiments of the present application. For example, the computer program may perform the steps of:
acquiring a spatial three-dimensional position of a target object in a virtual three-dimensional scene and a spatial three-dimensional position of a virtual character controlled by a user in the virtual three-dimensional scene;
determining a relative space vector between the target object and the virtual character according to the spatial three-dimensional position of the target object in the virtual three-dimensional scene and the spatial three-dimensional position of the virtual character controlled by the user in the virtual three-dimensional scene;
mapping the relative space vector into a preset spatial coordinate system to obtain a spatial coordinate corresponding to the end point of the relative space vector in the preset spatial coordinate system;
generating, based on the spatial coordinate, a spatial orientation identifier corresponding to the target object on a preset second spatial orientation indicator, to obtain a second spatial orientation indicator marking the spatial orientation of the target object; and
displaying the second spatial orientation indicator in a picture of the virtual three-dimensional scene.
For the specific implementation of each of the above operations, reference may be made to the previous embodiments; details are not repeated herein.
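The five steps above can be sketched as follows. This is a hedged illustration, not the patent's actual implementation: it assumes a Y-up coordinate system with +Z as the character's forward direction, and maps the relative space vector's end point into a preset spherical coordinate system whose azimuth positions the marker horizontally and whose elevation positions it vertically; all function names are illustrative:

```python
import math

def relative_space_vector(target_pos, character_pos):
    """Steps 1-2: relative space vector from the user-controlled virtual
    character to the target object, as (x, y, z) with y vertical."""
    return tuple(t - c for t, c in zip(target_pos, character_pos))

def map_to_preset_coordinates(vec):
    """Step 3: map the vector's end point into a preset (spherical)
    coordinate system: a horizontal azimuth in [0, 360) degrees and a
    vertical elevation in [-90, 90] degrees."""
    x, y, z = vec
    azimuth = math.degrees(math.atan2(x, z)) % 360.0           # horizontal bearing
    elevation = math.degrees(math.atan2(y, math.hypot(x, z)))  # vertical angle
    return azimuth, elevation

def spatial_orientation_identifier(target_pos, character_pos):
    """Steps 4-5: the (azimuth, elevation) pair that would place the
    spatial orientation identifier on the second spatial orientation
    indicator before the indicator is drawn into the scene picture."""
    return map_to_preset_coordinates(
        relative_space_vector(target_pos, character_pos))

# A target 10 units ahead of and 10 units above the character sits straight
# ahead horizontally (azimuth 0) and 45 degrees up (elevation 45).
azi, ele = spatial_orientation_identifier((0.0, 10.0, 10.0), (0.0, 0.0, 0.0))
```

The point of separating step 3 from steps 4-5 is that the same mapped coordinate can drive both the horizontal marker on a first indicator (azimuth only) and the vertical marker on a second indicator (elevation), matching the two-indicator design in the claims.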
The storage medium may include: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
Because the computer program stored in the storage medium can execute the steps in any method for displaying a direction in a virtual three-dimensional scene provided in the embodiments of the present application, it can achieve the beneficial effects achievable by any such method, which are detailed in the previous embodiments and are not described herein again.
The method, apparatus, storage medium, and computer device for displaying an orientation in a virtual three-dimensional scene provided by the embodiments of the present application are described in detail above. Specific examples are applied herein to describe the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and core ideas of the present application. Meanwhile, those skilled in the art may make changes to the specific implementations and application scope in light of the ideas of the present application. In view of the above, the content of this description should not be construed as limiting the present application.

Claims (16)

1. A method for displaying a direction in a virtual three-dimensional scene, characterized by comprising the following steps:
acquiring a space three-dimensional position of a target object in a virtual three-dimensional scene and a space three-dimensional position of a virtual character controlled by a user in the virtual three-dimensional scene, wherein the space three-dimensional position comprises a horizontal position and a vertical position;
determining a relative space vector between the target object and the virtual character according to the space three-dimensional position of the target object in the virtual three-dimensional scene and the space three-dimensional position of the virtual character controlled by the user in the virtual three-dimensional scene;
displaying a first spatial orientation indicator, the first spatial orientation indicator comprising a first spatial orientation identification, the first spatial orientation identification being used at least to indicate the horizontal position;
displaying a second spatial orientation identifier when triggering of a specified trigger instruction is detected, wherein the second spatial orientation identifier is at least used to indicate the vertical position;
the displaying the second spatial orientation identifier includes:
mapping the relative space vector into a preset space coordinate system to obtain a space coordinate corresponding to an end point of the relative space vector in the preset space coordinate system;
and generating, based on the spatial coordinates, a spatial orientation identifier corresponding to the target object on a preset second spatial orientation indicator, to obtain a second spatial orientation indicator marking the spatial orientation of the target object, wherein the second spatial orientation indicator comprises the second spatial orientation identifier.
2. The method for displaying a position in a virtual three-dimensional scene according to claim 1, wherein after the step of obtaining the spatial three-dimensional position of the target object in the virtual three-dimensional scene and the spatial three-dimensional position of the virtual character manipulated by the user in the virtual three-dimensional scene, the method further comprises:
determining a scene area name corresponding to the spatial three-dimensional position of the target object in the virtual three-dimensional scene according to the spatial three-dimensional position of the target object in the virtual three-dimensional scene;
and displaying the scene area name.
3. The method of displaying an orientation in a virtual three-dimensional scene of claim 1, wherein displaying the second spatial orientation indicator comprises:
displaying an azimuth scale;
acquiring the visual field orientation of the virtual character controlled by the user;
and controlling the second spatial orientation indicator to be displayed on the azimuth scale according to the visual field orientation and the relative horizontal vector between the target object and the virtual character, so that the second spatial orientation indicator indicates, on the azimuth scale, the horizontal position of the target object in the virtual three-dimensional scene relative to the user-controlled virtual character.
4. The method of orientation display in a virtual three-dimensional scene of claim 1, wherein the second spatial orientation indicator comprises a plurality of second spatial indicator regions, and the displaying the second spatial orientation identifier further comprises:
determining, as a visual enhancement region, the second spatial indicator region to which the second spatial orientation identifier belongs on the second spatial orientation indicator; and
modifying color parameters of the visual enhancement region.
5. The method of displaying orientation in a virtual three-dimensional scene of claim 1, wherein the first spatial orientation indicator comprises an orientation wheel; and
the position of the first spatial orientation identifier in the orientation wheel is determined according to the horizontal position.
6. The method of claim 5, wherein the horizontal position of the target object in the virtual three-dimensional scene relative to the virtual character manipulated by the user comprises a relative horizontal vector;
the determining the position of the first spatial orientation identifier in the orientation wheel according to the horizontal position comprises:
mapping the relative horizontal vector into a preset horizontal coordinate system to obtain a horizontal coordinate corresponding to an end point of the relative horizontal vector in the preset horizontal coordinate system;
and determining, based on the horizontal coordinate, the position of the first spatial orientation identifier in the orientation wheel.
7. The method of displaying directions in a virtual three-dimensional scene as recited in claim 5, wherein the first spatial orientation indicator comprises a plurality of first spatial indicator regions, and the displaying the first spatial orientation indicator further comprises:
determining, as a target region, the first spatial indicator region to which the first spatial orientation identifier belongs on the first spatial orientation indicator;
determining a preset o'clock direction parameter corresponding to the target region; and
displaying the o'clock direction parameter in the first spatial orientation indicator.
8. The method for displaying a direction in a virtual three-dimensional scene according to claim 1, wherein the specified trigger instruction comprises a conditional trigger instruction;
and the displaying a second spatial orientation identifier when triggering of the specified trigger instruction is detected comprises:
calculating the relative distance between the target object and the virtual character according to the space three-dimensional position;
and when the relative distance falls within a preset distance range, triggering the conditional trigger instruction to display the second spatial orientation identifier.
9. The method for displaying a position in a virtual three-dimensional scene according to claim 8, further comprising, after displaying the second spatial position identifier:
and when the relative distance does not fall within the preset distance range, canceling display of the second spatial orientation identifier.
10. The method for displaying a direction in a virtual three-dimensional scene according to claim 8, wherein when the relative distance falls within the preset distance range, the triggering a conditional trigger instruction to display a second spatial orientation identifier further comprises:
and displaying a preset distance prompt identifier.
11. The method for displaying a direction in a virtual three-dimensional scene according to claim 1, wherein the specified trigger instruction includes an operation trigger instruction;
and the displaying a second spatial orientation identifier when triggering of the specified trigger instruction is detected comprises:
displaying the second spatial orientation identifier when an operation trigger instruction triggered by the user with respect to the first spatial orientation indicator is detected.
12. The method of claim 11, wherein the first spatial orientation indicator comprises an object identification of the target object;
wherein the displaying the second spatial orientation identifier when an operation trigger instruction triggered by the user with respect to the first spatial orientation indicator is detected comprises:
displaying the second spatial orientation identifier when an operation trigger instruction triggered by the user with respect to the object identification is detected.
13. The method for displaying directions in a virtual three-dimensional scene according to claim 12, wherein after displaying the second spatial direction indicator, further comprising:
and canceling display of the second spatial orientation identifier when a spatial orientation hiding instruction triggered by the user with respect to the first spatial orientation indicator is detected.
14. A directional display device in a virtual three-dimensional scene, comprising:
an acquisition unit, configured to acquire a spatial three-dimensional position of a target object relative to a user-controlled virtual character in a virtual three-dimensional scene, wherein the spatial three-dimensional position comprises a horizontal position and a vertical position, and the spatial three-dimensional position of the target object relative to the user-controlled virtual character in the virtual three-dimensional scene comprises a relative space vector between the target object and the virtual character;
a first display unit configured to display a first spatial orientation indicator, the first spatial orientation indicator including a first spatial orientation identifier, the first spatial orientation identifier being at least used to indicate the horizontal position;
a second display unit, configured to display a second spatial orientation identifier when triggering of a specified trigger instruction is detected, the second spatial orientation identifier being at least used to indicate the vertical position;
the second display unit includes a second indicator subunit for displaying a second spatial orientation indicator including a second spatial orientation identification;
the second indicator subunit comprises:
the mapping sub-module is used for mapping the relative space vector into a preset space coordinate system to obtain a space coordinate corresponding to the end point of the relative space vector in the preset space coordinate system;
the generation sub-module is used for generating a space orientation identifier corresponding to the target object on a preset second space orientation indicator based on the space coordinates to obtain a second space orientation indicator marking the space orientation of the target object;
the acquisition unit includes:
the three-dimensional position sub-module is used for acquiring the spatial three-dimensional position of the target object in the virtual three-dimensional scene and the spatial three-dimensional position of the virtual character controlled by the user in the virtual three-dimensional scene, wherein the spatial three-dimensional position comprises a horizontal position and a vertical position;
And the vector sub-module is used for determining a relative space vector between the target object and the virtual character according to the space three-dimensional position of the target object in the virtual three-dimensional scene and the space three-dimensional position of the virtual character controlled by the user in the virtual three-dimensional scene.
15. A terminal comprising a processor and a memory, the memory storing a plurality of instructions; the processor loads instructions from the memory to perform the steps in the method for displaying a direction in a virtual three-dimensional scene as claimed in any one of claims 1 to 13.
16. A computer readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps of the method of displaying a direction in a virtual three-dimensional scene according to any one of claims 1 to 13.
CN202010525049.4A 2020-06-10 2020-06-10 Method, device, terminal and storage medium for displaying direction in virtual three-dimensional scene Active CN111760288B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010525049.4A CN111760288B (en) 2020-06-10 2020-06-10 Method, device, terminal and storage medium for displaying direction in virtual three-dimensional scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010525049.4A CN111760288B (en) 2020-06-10 2020-06-10 Method, device, terminal and storage medium for displaying direction in virtual three-dimensional scene

Publications (2)

Publication Number Publication Date
CN111760288A CN111760288A (en) 2020-10-13
CN111760288B true CN111760288B (en) 2024-03-12

Family

ID=72720587

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010525049.4A Active CN111760288B (en) 2020-06-10 2020-06-10 Method, device, terminal and storage medium for displaying direction in virtual three-dimensional scene

Country Status (1)

Country Link
CN (1) CN111760288B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112631575A (en) * 2020-12-30 2021-04-09 深圳市大富网络技术有限公司 Method, system and device for testing functions of image block and computer storage medium
CN113262490B (en) * 2021-05-06 2024-06-25 网易(杭州)网络有限公司 Virtual object marking method and device, processor and electronic device
CN113144602B (en) * 2021-05-25 2024-04-26 网易(杭州)网络有限公司 Position indication method, position indication device, electronic equipment and storage medium
CN113546419B (en) * 2021-07-30 2024-04-30 网易(杭州)网络有限公司 Game map display method, game map display device, terminal and storage medium
CN113908546A (en) * 2021-09-13 2022-01-11 网易(杭州)网络有限公司 Interactive method and device for determining orientation area and transmitting orientation information in game
CN114832388A (en) * 2022-03-17 2022-08-02 网易(杭州)网络有限公司 Information processing method and device in game, electronic equipment and storage medium
CN116999806A (en) * 2022-04-29 2023-11-07 腾讯科技(深圳)有限公司 Virtual object display method, device, equipment and storage medium
CN114931752A (en) * 2022-05-23 2022-08-23 网易(杭州)网络有限公司 In-game display method, device, terminal device and storage medium
CN115400429A (en) * 2022-07-20 2022-11-29 网易(杭州)网络有限公司 Display method and device of position information and electronic equipment
CN115317912A (en) * 2022-08-12 2022-11-11 网易(杭州)网络有限公司 Game control method and device, electronic equipment and storage medium
CN117695643A (en) * 2022-09-08 2024-03-15 腾讯科技(深圳)有限公司 Method and device for displaying azimuth prompt information, storage medium and electronic equipment
CN115761122B (en) * 2022-11-11 2023-07-14 贝壳找房(北京)科技有限公司 Method, device, equipment and medium for realizing three-dimensional auxiliary ruler

Citations (4)

Publication number Priority date Publication date Assignee Title
CN108465240A (en) * 2018-03-22 2018-08-31 腾讯科技(深圳)有限公司 Mark point position display method, device, terminal and computer readable storage medium
CN109966738A (en) * 2019-02-22 2019-07-05 网易(杭州)网络有限公司 Information processing method, processing unit, electronic equipment and storage medium
CN111185004A (en) * 2019-12-30 2020-05-22 网易(杭州)网络有限公司 Game control display method, electronic device, and storage medium
US10661172B2 (en) * 2017-09-30 2020-05-26 Netease (Hangzhou) Networks Co., Ltd. Visual display method and apparatus for compensating sound information, storage medium and device

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US10807001B2 (en) * 2017-09-12 2020-10-20 Netease (Hangzhou) Network Co., Ltd. Information processing method, apparatus and computer readable storage medium

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
US10661172B2 (en) * 2017-09-30 2020-05-26 Netease (Hangzhou) Networks Co., Ltd. Visual display method and apparatus for compensating sound information, storage medium and device
CN108465240A (en) * 2018-03-22 2018-08-31 腾讯科技(深圳)有限公司 Mark point position display method, device, terminal and computer readable storage medium
CN109966738A (en) * 2019-02-22 2019-07-05 网易(杭州)网络有限公司 Information processing method, processing unit, electronic equipment and storage medium
CN111185004A (en) * 2019-12-30 2020-05-22 网易(杭州)网络有限公司 Game control display method, electronic device, and storage medium

Non-Patent Citations (1)

Title
Zhang Yijiang; Qin Xueying; Julien Pettré; Peng Qunsheng. Online real-time fusion of virtual crowds and dynamic video scenes. Journal of Computer-Aided Design & Computer Graphics. 2011, (01), full text. *

Also Published As

Publication number Publication date
CN111760288A (en) 2020-10-13

Similar Documents

Publication Publication Date Title
CN111760288B (en) Method, device, terminal and storage medium for displaying direction in virtual three-dimensional scene
US11221726B2 (en) Marker point location display method, electronic device, and computer-readable storage medium
CN108619721B (en) Distance information display method and device in virtual scene and computer equipment
WO2019153750A1 (en) Method, apparatus and device for view switching of virtual environment, and storage medium
WO2019153824A1 (en) Virtual object control method, device, computer apparatus, and storage medium
US20190126151A1 (en) Visual display method for compensating sound information, computer readable storage medium and electronic device
JP2022527686A (en) Shadow rendering methods, devices, computer devices and computer programs
CN108694073B (en) Control method, device and equipment of virtual scene and storage medium
CN110917616B (en) Orientation prompting method, device, equipment and storage medium in virtual scene
US11798223B2 (en) Potentially visible set determining method and apparatus, device, and storage medium
CN112884873B (en) Method, device, equipment and medium for rendering virtual object in virtual environment
JP7186901B2 (en) HOTSPOT MAP DISPLAY METHOD, DEVICE, COMPUTER DEVICE AND READABLE STORAGE MEDIUM
CN113082707B (en) Virtual object prompting method and device, storage medium and computer equipment
US20160266661A1 (en) Spatial motion-based user interactivity
WO2022257690A1 (en) Method and apparatus for marking article in virtual environment, and device and storage medium
CN113487662B (en) Picture display method and device, electronic equipment and storage medium
CN118135081A (en) Model generation method, device, computer equipment and computer readable storage medium
CN111124128A (en) Position prompting method and related product
CN115920385A (en) Game signal feedback method and device, electronic equipment and readable storage medium
CN113101664B (en) Path finding indication method, device, terminal and storage medium
CN113350792B (en) Contour processing method and device for virtual model, computer equipment and storage medium
CN114404953A (en) Virtual model processing method and device, computer equipment and storage medium
CN115193042A (en) Display control method, display control device, electronic equipment and storage medium
CN112308766B (en) Image data display method and device, electronic equipment and storage medium
CN111445439B (en) Image analysis method, device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230109

Address after: Room 2075, Zone A, Floor 2, No. 2, Lane 99, Jiajie Road, Zhaoxiang Town, Qingpu District, Shanghai, 200000

Applicant after: Netease (Shanghai) Network Co.,Ltd.

Address before: 310052 Building No. 599, Changhe Street Network Business Road, Binjiang District, Hangzhou City, Zhejiang Province, 4, 7 stories

Applicant before: NETEASE (HANGZHOU) NETWORK Co.,Ltd.

GR01 Patent grant