CN113082707B - Virtual object prompting method and device, storage medium and computer equipment


Info

Publication number
CN113082707B
Authority
CN
China
Prior art keywords
target
virtual object
sub
detection area
determining
Prior art date
Legal status
Active
Application number
CN202110351275.XA
Other languages
Chinese (zh)
Other versions
CN113082707A (en)
Inventor
吴佳波
胡志鹏
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202110351275.XA
Publication of CN113082707A
Application granted
Publication of CN113082707B
Legal status: Active


Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/537 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
    • A63F13/5372 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/54 Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/837 Shooting of targets
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8076 Shooting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Acoustics & Sound (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application disclose a prompting method and apparatus, a storage medium, and a computer device for a virtual object. The method includes: displaying a user interface that contains an environment picture presented when a first virtual object observes a three-dimensional virtual environment; in response to a sound emitted by a second virtual object in the three-dimensional virtual environment, acquiring a first position of the first virtual object and a second position of the second virtual object in the three-dimensional virtual environment; determining a target azimuth of the second virtual object relative to the first virtual object based on the first position and the second position; and displaying prompt information containing the target azimuth, including a textual description of the target azimuth, in a partial display area of the user interface. Displaying the prompt information in the partial display area prevents the user from missing it and improves its prompting effect.

Description

Virtual object prompting method and device, storage medium and computer equipment
Technical Field
The present invention relates to the field of computers, and in particular, to a method and apparatus for prompting a virtual object, a computer readable storage medium, and a computer device.
Background
In recent years, with the development and popularization of computer device technology, more and more applications provide three-dimensional virtual environments, such as virtual reality applications, three-dimensional map programs, military simulation programs, first-person shooter (FPS) games, and multiplayer online battle arena (MOBA) games.
In the prior art, taking an FPS game as an example, when another sound-emitting virtual object appears around the virtual object controlled by the user, an identifier of that object is displayed on a mini-map in the user interface to prompt the user to check its position.
Disclosure of Invention
During research and practice on the prior art, the inventors of the present application found that, because the mini-map's display area is usually small and located at the edge of the user interface while the user's attention is focused on its center, users often miss the identifiers of other virtual objects on the mini-map, allowing those objects to spot and attack the player first.
Embodiments of the present application provide a prompting method and apparatus for a virtual object that prevent the player from missing prompt information and improve its prompting effect.
To solve the above technical problem, embodiments of the present application provide the following technical solutions:
a method of prompting a virtual object, comprising:
displaying a user interface, wherein the user interface comprises an environment picture presented when a first virtual object observes a three-dimensional virtual environment;
in response to a sound emitted by a second virtual object in the three-dimensional virtual environment, acquiring a first position of the first virtual object in the three-dimensional virtual environment and a second position of the second virtual object in the three-dimensional virtual environment;
determining a target azimuth of the second virtual object relative to the first virtual object based on the first position and the second position; and
displaying prompt information containing the target azimuth in a partial display area of the user interface, the prompt information including a textual description of the target azimuth.
A prompting apparatus for a virtual object, comprising:
the display module is used for displaying a user interface, wherein the user interface comprises an environment picture which is presented when a first virtual object observes a three-dimensional virtual environment;
a first obtaining module, configured to obtain, in response to a sound emitted by a second virtual object in the three-dimensional virtual environment, a first position of the first virtual object in the three-dimensional virtual environment and a second position of the second virtual object in the three-dimensional virtual environment;
a first determining module, configured to determine a target azimuth of the second virtual object relative to the first virtual object based on the first position and the second position; and
a prompting module, configured to display prompt information containing the target azimuth in a partial display area of the user interface, the prompt information including a textual description of the target azimuth.
In some embodiments, the first determining module includes:
the first determining submodule is used for determining an area in a preset range as a detection area by taking the first position as a center;
the sub-dividing module is used for dividing the detection area according to a preset angle to obtain a plurality of sub-detection areas;
and a second determination sub-module, configured to determine a target azimuth of the second virtual object relative to the first virtual object based on the plurality of sub-detection areas and the second position.
In some embodiments, the second determining sub-module comprises:
the first identification unit is used for identifying each sub-detection area to generate a first mark so as to obtain a plurality of sub-detection areas with different first marks;
a first determining unit, configured to determine, when it is detected that a second position of the second virtual object is in the detection area, a sub-detection area in which the second position is located as a target sub-detection area;
and a second determining unit, configured to determine a first target mark corresponding to the target sub-detection area from the first marks, and determine the sub-detection area corresponding to the first target mark as the target azimuth.
In some embodiments, the second determining sub-module comprises:
the second identification unit is used for identifying the directions between adjacent sub-detection areas in the plurality of sub-detection areas to generate a second mark so as to obtain a plurality of candidate directions with different second marks;
a judging unit configured to judge whether a second position of the second virtual object is located in one of the plurality of candidate directions when the second position is detected to be located in the detection area;
a third determining unit configured to determine, as a target direction, a candidate direction in which the second position is located if the second position is located in one of the plurality of candidate directions;
and a fourth determining unit, configured to determine a second target mark corresponding to the target direction from the second marks, and determine the target direction of the second target mark as a target azimuth.
In some embodiments, the second determination submodule further includes:
a first obtaining unit, configured to obtain the sub-detection area in which the second position is located if the second position is not located in any of the plurality of candidate directions;
the dividing unit is used for equally dividing the sub-detection area where the second position is located into a first sub-detection area and a second sub-detection area;
a second obtaining unit, configured to obtain a first candidate direction adjacent to the first sub-detection area, and obtain a second candidate direction adjacent to the second sub-detection area;
and a fifth determining unit configured to determine the target direction from the first candidate direction and the second candidate direction based on a positional relationship between the second position and the first sub-detection area or a positional relationship between the second position and the second sub-detection area, and return to perform the step of determining a second target mark corresponding to the target direction from the second marks.
In some embodiments, the fifth determining unit is configured to:
if the second position is located in the first sub-detection area, determining the first candidate direction as the target direction;
and if the second position is located in the second sub-detection area, determining the second candidate direction as the target direction.
In some embodiments, the apparatus further comprises:
the second determining module is used for determining a target sound source type corresponding to the sound emitted by the second virtual object according to a preset mapping relation;
the prompt module is further configured to display, in a partial display area of the user interface, prompt information including the target azimuth and the target sound source type, where the prompt information includes a text description of the target azimuth and the target sound source type.
In some embodiments, the apparatus further comprises:
a third determining module for determining a target distance value between the first location and the second location;
the prompt module is further configured to display, in a partial display area of the user interface, prompt information including the target azimuth, the target sound source type, and the target distance value, where the prompt information includes text descriptions of the target azimuth, the target sound source type, and the target distance value.
In some embodiments, the apparatus further comprises:
a fourth determining module, configured to determine whether the second virtual object is a virtual weapon according to the target sound source type;
a fifth determining module, configured to determine, if the second virtual object is a virtual weapon, a target virtual weapon corresponding to the target sound source type;
a second acquisition module, configured to acquire a first attack distance value of the target virtual weapon and a second attack distance value corresponding to the virtual weapon of the first virtual object;
and the first control module is used for controlling the first virtual object to move according to the first attack distance value, the second attack distance value and the target distance value.
In some embodiments, the first control module comprises:
the judging submodule is used for judging whether the target distance value is larger than the second attack distance value or not when the second attack distance value is larger than the first attack distance value;
and the control sub-module is used for controlling the first virtual object to move towards the direction close to the second virtual object if the target distance value is larger than the second attack distance value until the distance between the second virtual object and the first virtual object is the second attack distance value.
In some embodiments, the apparatus further comprises:
the generating module is used for generating a target control and judging whether the target control receives an operation instruction in a preset time;
and the second control module is used for controlling the first virtual object to move along the direction opposite to the target azimuth if the target control receives the operation instruction within the preset time.
In some embodiments, the prompting module includes:
and the broadcasting sub-module is used for displaying prompt information containing the target azimuth in a part of display areas of the user interface and broadcasting the prompt information in a voice way.
A computer readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps in the method of prompting a virtual object described above.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps in the method of prompting a virtual object as described above when the program is executed.
In the embodiments of the present application, a user interface is displayed, the user interface containing an environment picture presented when a first virtual object observes a three-dimensional virtual environment; in response to a sound emitted by a second virtual object in the three-dimensional virtual environment, a first position of the first virtual object and a second position of the second virtual object in the three-dimensional virtual environment are acquired; a target azimuth of the second virtual object relative to the first virtual object is determined based on the first position and the second position; and prompt information containing the target azimuth, including a textual description of the target azimuth, is displayed in a partial display area of the user interface. Displaying the prompt information in the partial display area thus prevents the user from missing it and improves its prompting effect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1a is a schematic system diagram of a virtual object prompting system according to an embodiment of the present application.
Fig. 1b is a schematic flow chart of a first method for prompting a virtual object according to an embodiment of the present application.
Fig. 1c is a schematic diagram of a first user interface according to an embodiment of the present application.
Fig. 1d is a schematic diagram of a world coordinate system in a three-dimensional virtual environment according to an embodiment of the present application.
Fig. 1e is a first schematic diagram of a detection area according to an embodiment of the present application.
Fig. 1f is a second schematic diagram of a detection area according to an embodiment of the present application.
Fig. 1g is a third schematic diagram of a detection area according to an embodiment of the present application.
Fig. 1h is a schematic diagram of a second user interface according to an embodiment of the present application.
Fig. 1i is a schematic diagram of a third user interface according to an embodiment of the present application.
Fig. 1j is a schematic diagram of a fourth user interface according to an embodiment of the present application.
Fig. 1k is a schematic diagram of controlling movement of a first virtual object according to an embodiment of the present application.
Fig. 2a is a second flowchart of a method for prompting a virtual object according to an embodiment of the present application.
Fig. 2b is a third flowchart of a method for prompting a virtual object according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of a device for prompting a virtual object according to an embodiment of the present application.
fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The embodiments of the present application provide a prompting method and apparatus for a virtual object, a storage medium, and a computer device. Specifically, the prompting method of the embodiments may be executed by a computer device, which may be a terminal, a server, or the like. The terminal may be a smart phone, a tablet computer, a notebook computer, a touch screen, a game console, a personal computer (PC), a personal digital assistant (PDA), or another terminal device, and may further run a client, which may be a game application client, a browser client carrying a game program, an instant messaging client, or the like. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, big data, and artificial intelligence platforms.
For example, when the prompting method of the virtual object is run on the terminal, the terminal device stores a game application program and presents a part of game scenes in the game through the display component. The terminal device is used for interacting with a user through a graphical user interface, for example, the terminal device downloads and installs a game application program and runs the game application program. The way in which the terminal device presents the graphical user interface to the user may include a variety of ways, for example, the graphical user interface may be rendered for display on a display screen of the terminal device, or presented by holographic projection. For example, the terminal device may include a touch display screen for presenting a graphical user interface including game screens and receiving operation instructions generated by a user acting on the graphical user interface, and a processor for running the game, generating the graphical user interface, responding to the operation instructions, and controlling the display of the graphical user interface on the touch display screen.
For example, when the prompting method of the virtual object runs on a server, it may be a cloud game. Cloud gaming refers to a game mode based on cloud computing. In the cloud-game mode, the execution of the game application and the presentation of the game picture are separated: the storage and execution of the virtual object prompting method are completed on the cloud game server, while the game picture is presented at a cloud game client. The cloud game client is mainly used for receiving and sending game data and presenting game pictures; for example, it may be a display device with a data transmission function near the user side, such as a mobile terminal, a television, a computer, a palmtop computer, or a personal digital assistant, but the device executing the virtual object prompting method is the cloud game server in the cloud. When playing, the user operates the cloud game client to send operation instructions to the cloud game server; the cloud game server runs the game according to the operation instructions, encodes and compresses data such as game pictures, and returns them to the cloud game client through the network, where the data are finally decoded and the game pictures are output.
Referring to fig. 1a, fig. 1a is a schematic system diagram of a virtual object prompting system according to an embodiment of the present application. The system may include at least one terminal 1000, at least one server 2000, at least one database 3000, and a network 4000. A terminal 1000 held by a user may be connected to the servers of different games through the network 4000. Terminal 1000 can be any device having computing hardware capable of supporting and executing software products corresponding to a game. In addition, terminal 1000 can have one or more multi-touch-sensitive screens for sensing and obtaining input from a user through touch or slide operations performed at multiple points on the screen. In addition, when the system includes a plurality of terminals 1000, a plurality of servers 2000, and a plurality of networks 4000, different terminals 1000 may be connected to each other through different networks 4000 and different servers 2000. The network 4000 may be a wireless network or a wired network, such as a wireless local area network (WLAN), a local area network (LAN), a cellular network, a 2G network, a 3G network, a 4G network, or a 5G network. In addition, different terminals 1000 may be connected to other terminals or to a server using their own Bluetooth network or hotspot network. For example, multiple users may be online through different terminals 1000, connected via an appropriate network and synchronized with each other to support multiplayer gaming. In addition, the system may include a plurality of databases 3000 coupled to different servers 2000, and information related to the game environment may be continuously stored in the databases 3000 while different users play the multiplayer game online.
The embodiment of the application provides a prompting method of a virtual object, which can be executed by a terminal or a server. The embodiment of the application is described by taking the method for prompting the virtual object as an example executed by the terminal. The terminal comprises a display component and a processor, wherein the display component is used for presenting a graphical user interface and receiving operation instructions generated by a user acting on the display component. When a user operates the graphical user interface through the display component, the graphical user interface can control the local content of the terminal through responding to the received operation instruction, and can also control the content of the opposite-end server through responding to the received operation instruction. For example, the user-generated operational instructions for the graphical user interface include instructions for launching the gaming application, and the processor is configured to launch the gaming application after receiving the user-provided instructions for launching the gaming application. Further, the processor is configured to render and draw a graphical user interface associated with the game on the touch-sensitive display screen. A touch display screen is a multi-touch-sensitive screen capable of sensing touch or slide operations performed simultaneously by a plurality of points on the screen. The user performs touch operation on the graphical user interface by using a finger, and when the graphical user interface detects the touch operation, the graphical user interface controls different virtual objects in the graphical user interface of the game to perform actions corresponding to the touch operation. For example, the game may be any one of a leisure game, an action game, a role playing game, a strategy game, a sports game, an educational game, a first person shooter game (First person shooting game, FPS), and the like. Wherein the game may comprise a virtual scene of the game drawn on a graphical user interface. Further, one or more virtual objects, such as virtual characters, controlled by a user (or player) may be included in the virtual scene of the game. In addition, one or more obstacles, such as rails, ravines, walls, etc., may also be included in the virtual scene of the game to limit movement of the virtual object, e.g., to limit movement of the one or more objects to a particular area within the virtual scene. Optionally, the virtual scene of the game also includes one or more elements, such as skills, scores, character health status, energy, etc., to provide assistance to the player, provide virtual services, increase scores related to the player's performance, etc. In addition, the graphical user interface may also present one or more indicators to provide indication information to the player. For example, a game may include a player controlled virtual object and one or more other virtual objects (such as enemy characters). In one embodiment, one or more other virtual objects are controlled by other players of the game. For example, one or more other virtual objects may be computer controlled, such as a robot using an Artificial Intelligence (AI) algorithm, implementing a human-machine engagement mode. For example, virtual objects possess various skills or capabilities that a game player uses to achieve a goal. For example, the virtual object may possess one or more weapons, props, tools, etc. 
Such weapons, props, or tools may be used to eliminate other objects from the game. Such skills or capabilities may be activated by the player using one of a plurality of preset touch operations on the touch display screen of the terminal. The processor may be configured to present a corresponding game screen in response to an operation instruction generated by the user's touch operation.
It should be noted that, the system schematic diagram of the virtual object prompting system shown in fig. 1a is only an example, and the virtual object prompting system and the scenario described in the embodiments of the present application are for more clearly describing the technical solution of the embodiments of the present application, and do not constitute a limitation on the technical solution provided by the embodiments of the present application, and as one of ordinary skill in the art can know, along with the evolution of the virtual object prompting system and the appearance of the new service scenario, the technical solution provided by the embodiments of the present application is also applicable to similar technical problems.
In this embodiment, description is given from the perspective of a prompting apparatus for a virtual object, which may be integrated in a computer device that has a storage unit, is fitted with a microprocessor, and has computing capability.
Referring to fig. 1b, fig. 1b is a schematic flow chart of a first method for prompting a virtual object according to an embodiment of the present application. The prompting method of the virtual object comprises the following steps:
in step 101, a user interface is displayed, the user interface comprising an environment screen presented when the first virtual object observes the three-dimensional virtual environment.
The user interface displays a three-dimensional virtual environment of the game scene. The three-dimensional virtual environment is provided when the application program runs on the terminal and may be a simulation of the real world, a semi-simulated and semi-fictional environment, or a purely fictional environment. The first virtual object is the virtual character controlled by the user in the game scene through the terminal, and it observes the three-dimensional virtual environment through a camera model. In an FPS game, for example, the camera model is located at the head or neck of the first virtual object in the first-person view, and behind the first virtual object in the third-person view. The user interface is the environment picture presented by observing the three-dimensional virtual environment through the camera model at a certain viewing angle.
Specifically, referring to fig. 1c, fig. 1c is a schematic diagram of a first user interface according to an embodiment of the present application. The user interface is presented on the screen of the client and includes a first virtual object 100 controlled by the user, an attack control 200 for controlling the first virtual object 100 to perform attack operations in the three-dimensional virtual environment, a direction control 300 for prompting the user with the current direction information of the first virtual object 100, a movement control 400 for controlling the first virtual object 100 to move in the three-dimensional virtual environment, an aiming control 500 that the first virtual object 100 can use when attacking, a map control 600 for prompting the user with the position of the first virtual object 100 in the three-dimensional virtual environment, and the like. An indication control 301 is further disposed in the direction control 300 and indicates the orientation of the first virtual object 100 within the direction control 300.
In step 102, a first location of a first virtual object in the three-dimensional virtual environment and a second location of a second virtual object in the three-dimensional virtual environment are acquired in response to sound made by the second virtual object in the three-dimensional virtual environment.
Referring to fig. 1d, fig. 1d is a schematic diagram of a world coordinate system in a three-dimensional virtual environment according to an embodiment of the present application. The three-dimensional virtual environment has a world coordinate system constructed by an X-axis, a Y-axis and a Z-axis, so that a first virtual object located in the three-dimensional virtual environment has its corresponding coordinates (X1, Y1, Z1) and a second virtual object manipulated by other users also has its corresponding coordinates (X2, Y2, Z2).
Specifically, the first position may be a first coordinate corresponding to the first virtual object in the world coordinate system, and the second position may be a second coordinate corresponding to the second virtual object in the world coordinate system.
While each computer device runs the game, it sends position information (such as coordinates) of its virtual object to the server at intervals. The first coordinate corresponding to the first virtual object in the world coordinate system can be obtained directly on the user's own computer device, whereas the second coordinate corresponding to the second virtual object must be forwarded by the server to the computer device controlling the first virtual object.
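To make the computation concrete, here is a minimal sketch, not code from the patent, of how the two world-coordinate positions could be represented and reduced to a straight-line distance and a horizontal bearing. It assumes Python, a Y-up coordinate system, and bearings measured clockwise from the first object's reference direction; all names are illustrative.

```python
import math
from dataclasses import dataclass

@dataclass
class Position:
    """A point in the world coordinate system of the three-dimensional virtual environment."""
    x: float
    y: float
    z: float

def distance(first: Position, second: Position) -> float:
    """Straight-line distance between the first and second positions."""
    return math.dist((first.x, first.y, first.z), (second.x, second.y, second.z))

def horizontal_bearing_deg(first: Position, second: Position) -> float:
    """Bearing of the second position relative to the first, in degrees,
    measured clockwise in the horizontal plane and normalized to [0, 360)."""
    dx = second.x - first.x
    dz = second.z - first.z
    return math.degrees(math.atan2(dx, dz)) % 360.0

# Example: second object ahead-right of and slightly above the first.
p1, p2 = Position(0.0, 0.0, 0.0), Position(30.0, 5.0, 40.0)
print(round(distance(p1, p2), 1), round(horizontal_bearing_deg(p1, p2), 1))  # 50.2 36.9
```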
In step 103, a target azimuth of the second virtual object relative to the first virtual object is determined based on the first position and the second position.
The target azimuth specifically includes the direction of the second virtual object relative to the first virtual object, or the area in which the second virtual object is located relative to the first virtual object. In either case, the target azimuth of the second virtual object relative to the first virtual object can be determined from the first position and the second position acquired in step 102.
In some embodiments, the step of determining a target azimuth of the second virtual object relative to the first virtual object based on the first position and the second position includes:
(1) determining an area within a preset range, centered on the first position, as a detection area;
(2) dividing the detection area according to a preset angle to obtain a plurality of sub-detection areas;
(3) determining a target azimuth of the second virtual object relative to the first virtual object based on the plurality of sub-detection areas and the second position.
Because the target azimuth of the second virtual object relative to the first virtual object needs to be determined, the developer sets a detection area for the virtual object, i.e., an area within a preset range centered on it. When another virtual object (such as the second virtual object) is inside this detection area, it can be detected by the virtual object (such as the first virtual object) and prompted on the user interface.
Specifically, referring to fig. 1e, fig. 1e is a first schematic diagram of a detection area according to an embodiment of the present application. The detection area may be divided into a plurality of sub-detection areas, including sub-detection area 111, sub-detection area 112, sub-detection area 113, sub-detection area 114, sub-detection area 115, sub-detection area 116, sub-detection area 117, and sub-detection area 118. The sub-areas may be divided equally or arbitrarily, which is not limited herein. The target azimuth of the second virtual object relative to the first virtual object is then determined from the second position and the divided sub-detection areas, as sketched below.
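As a rough illustration of this step (an assumption-laden sketch, not the patent's implementation): with a preset angle of 45 degrees the detection area splits into the eight sub-detection areas of fig. 1e, each given a first mark, and the second position is reduced to a bearing and distance as in the previous sketch. The radius and the letter marks are assumed values.

```python
from typing import Optional

DETECTION_RADIUS = 100.0  # preset range of the detection area (assumed value)
PRESET_ANGLE = 45.0       # degrees per sub-detection area: 360/45 gives the 8 areas of fig. 1e
FIRST_MARKS = "ABCDEFGH"  # one distinct first mark per sub-detection area

def target_sub_area(bearing_deg: float, distance_m: float) -> Optional[str]:
    """Return the first target mark of the sub-detection area containing the
    second position, or None when the second position lies outside the
    detection area and no prompt is generated."""
    if distance_m > DETECTION_RADIUS:
        return None
    sector = int((bearing_deg % 360.0) // PRESET_ANGLE)
    return FIRST_MARKS[sector]

print(target_sub_area(100.0, 50.0))   # bearing 100 degrees falls in the third area -> "C"
print(target_sub_area(100.0, 250.0))  # outside the detection area -> None
```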
In some embodiments, the step of determining the target azimuth of the second virtual object relative to the first virtual object based on the plurality of sub-detection areas and the second position includes:
(1.1) marking each sub-detection area to generate a first mark so as to obtain a plurality of sub-detection areas with different first marks;
(1.2) when the second position of the second virtual object is detected to be in the detection area, determining a sub-detection area in which the second position is located as a target sub-detection area;
and (1.3) determining a first target mark corresponding to the target sub-detection area from the first marks, and determining the sub-detection area corresponding to the first target mark as a target azimuth.
The target azimuth is determined differently depending on whether it is expressed as an area or as a direction.
Specifically, when an area serves as the target azimuth, each sub-detection area can be labeled so that it corresponds to a first mark distinct from those of the other sub-detection areas. It is then determined whether the second position lies within the detection area of the first virtual object; if so, the sub-detection area containing the second position is determined and taken as the target sub-detection area. The first mark corresponding to the target sub-detection area is determined as the first target mark, and the sub-detection area corresponding to the first target mark is determined as the target azimuth.
For example, the sub-detection region 111, the sub-detection region 112, the sub-detection region 113, the sub-detection region 114, the sub-detection region 115, the sub-detection region 116, the sub-detection region 117, and the sub-detection region 118 are respectively identified such that the first mark corresponding to the sub-detection region 111 is a, the first mark corresponding to the sub-detection region 112 is B, the first mark corresponding to the sub-detection region 113 is C, the first mark corresponding to the sub-detection region 114 is D, the first mark corresponding to the sub-detection region 115 is E, the first mark corresponding to the sub-detection region 116 is F, the first mark corresponding to the sub-detection region 117 is G, and the first mark corresponding to the sub-detection region 118 is H. If the second position is in the sub-detection area 113, the sub-detection area 113 is determined as a target sub-detection area, a first mark C corresponding to the target sub-detection area is acquired, the first mark C is determined as a first target mark, and the sub-detection area 113 corresponding to the first target mark is determined as a target azimuth.
In some embodiments, the step of determining the target azimuth of the second virtual object relative to the first virtual object based on the plurality of sub-detection areas and the second position includes:
(1.1) identifying directions between adjacent sub-detection areas in the plurality of sub-detection areas to generate a second mark so as to obtain a plurality of candidate directions with different second marks;
(1.2) when the second position of the second virtual object is detected to be in the detection area, judging whether the second position is located in one candidate direction of the plurality of candidate directions;
(1.3) if the second location is located in one of the plurality of candidate directions, determining the candidate direction in which the second location is located as a target direction;
(1.4) determining a second target mark corresponding to the target direction from the second marks, and determining the target direction of the second target mark as the target azimuth.
When a direction serves as the target azimuth, the candidate directions between adjacent sub-detection areas can be labeled so that each corresponds to a second mark distinct from those of the other candidate directions. It is then determined whether the second position lies in one of the candidate directions; if so, that candidate direction is determined and taken as the target direction. The second mark corresponding to the target direction is determined as the second target mark, and the target direction bearing the second target mark is determined as the target azimuth.
For example, as shown in fig. 1f, which is a second schematic diagram of a detection area provided in an embodiment of the present application, fig. 1f marks a candidate direction 211 between sub-detection area 111 and sub-detection area 112, a candidate direction 212 between sub-detection area 112 and sub-detection area 113, a candidate direction 213 between sub-detection area 113 and sub-detection area 114, a candidate direction 214 between sub-detection area 114 and sub-detection area 115, a candidate direction 215 between sub-detection area 115 and sub-detection area 116, a candidate direction 216 between sub-detection area 116 and sub-detection area 117, a candidate direction 217 between sub-detection area 117 and sub-detection area 118, and a candidate direction 218 between sub-detection area 118 and sub-detection area 111. Each candidate direction is labeled, giving a second mark a for candidate direction 211, b for candidate direction 212, c for candidate direction 213, d for candidate direction 214, e for candidate direction 215, f for candidate direction 216, g for candidate direction 217, and h for candidate direction 218. When the second position is located in candidate direction 214, candidate direction 214 is determined as the target direction, its second target mark is determined as d, and the target direction bearing that mark is determined as the target azimuth.
Specifically, each candidate direction may also be identified using a clock dial: for example, the detection area is divided into 12 equal parts to obtain 12 candidate directions, which are labeled in turn as the 1 o'clock direction, 2 o'clock direction, 3 o'clock direction, 4 o'clock direction, 5 o'clock direction, 6 o'clock direction, 7 o'clock direction, 8 o'clock direction, 9 o'clock direction, 10 o'clock direction, 11 o'clock direction, and 12 o'clock direction. The specific labeling manner is not limited here.
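A small sketch of the clock-dial labeling described above; the tolerance and the choice of 12 o'clock as straight ahead of the first virtual object are assumptions, not details from the patent.

```python
from typing import Optional

NUM_DIRECTIONS = 12                # 12 equal parts -> 12 candidate directions
STEP_DEG = 360.0 / NUM_DIRECTIONS  # 30 degrees between adjacent candidate directions
TOLERANCE_DEG = 1.0                # how close a bearing must be to count as "located in"
                                   # a candidate direction (assumed value)

def clock_label(index: int) -> str:
    """Second mark of a candidate direction as a clock dial: index 0 -> 12 o'clock."""
    return f"{12 if index == 0 else index} o'clock direction"

def candidate_direction(bearing_deg: float) -> Optional[str]:
    """Return the clock-dial label when the bearing lies (within tolerance) on a
    candidate direction between two adjacent sub-detection areas, else None."""
    bearing = bearing_deg % 360.0
    nearest = round(bearing / STEP_DEG) % NUM_DIRECTIONS
    gap = abs(bearing - nearest * STEP_DEG)
    if min(gap, 360.0 - gap) <= TOLERANCE_DEG:
        return clock_label(nearest)
    return None

print(candidate_direction(180.4))  # -> "6 o'clock direction"
print(candidate_direction(100.0))  # inside a sub-detection area -> None
```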
In some embodiments, the method further comprises:
(1.1) if the second position is not located in one candidate direction of the plurality of candidate directions, acquiring a sub-detection area in which the second position is located;
(1.2) dividing the sub-detection area where the second position is located into a first sub-detection area and a second sub-detection area on average;
(1.3) acquiring a first candidate direction adjacent to the first sub-detection area, and acquiring a second candidate direction adjacent to the second sub-detection area;
(1.4) determining the target direction from the first candidate direction and the second candidate direction based on the positional relationship between the second position and the first sub-detection area or the positional relationship between the second position and the second sub-detection area, and returning to execute the step of determining a second target mark corresponding to the target direction from the second marks.
When a direction serves as the target azimuth, the second position may not lie on any candidate direction, so a rule needs to be set. For example, when the second position lies inside a sub-detection area, that sub-detection area can be divided evenly into a first sub-detection area and a second sub-detection area; the first candidate direction adjacent to the first sub-detection area and the second candidate direction adjacent to the second sub-detection area are obtained; and the positional relationship between the second position and each half is determined. The target direction is then chosen from the first and second candidate directions according to that positional relationship, and the flow returns to the step of determining, from the second marks, the second target mark corresponding to the target direction.
In some embodiments, the step of determining the target direction from the first candidate direction and the second candidate direction based on the positional relationship of the second location and the first sub-detection area or the positional relationship of the second location and the second sub-detection area includes:
(1.1) if the second location is in the first sub-detection zone, determining the first candidate direction as the target direction;
(1.2) if the second location is in the second sub-detection zone, determining the second candidate direction as the target direction.
If the second position lies in the first sub-detection area, the second position is closer to the first candidate direction, so the first candidate direction is determined as the target direction; if the second position lies in the second sub-detection area, the second position is closer to the second candidate direction, so the second candidate direction is determined as the target direction.
For example, as shown in fig. 1g, fig. 1g is a third schematic diagram of a detection area provided in an embodiment of the present application. When the second position is in sub-detection area 112, sub-detection area 112 is divided equally into a first sub-detection area 1121 and a second sub-detection area 1122. If the second position is in the second sub-detection area 1122, the candidate direction 212 adjacent to the second sub-detection area 1122 is determined as the target direction, and the flow returns to the step of determining, from the second marks, the second target mark corresponding to the target direction.
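The fallback just described amounts to snapping the bearing to the nearer of the two adjacent candidate directions. A self-contained sketch under that reading (constants repeated so it runs on its own; the clock-dial labels are the assumption from the previous sketch):

```python
NUM_DIRECTIONS = 12
STEP_DEG = 360.0 / NUM_DIRECTIONS

def clock_label(index: int) -> str:
    return f"{12 if index == 0 else index} o'clock direction"

def snap_to_candidate(bearing_deg: float) -> str:
    """When the second position is inside a sub-detection area rather than on a
    boundary, split that area evenly into a first and a second half and take
    the candidate direction adjacent to the half containing the bearing."""
    bearing = bearing_deg % 360.0
    area = int(bearing // STEP_DEG)     # sub-detection area containing the bearing
    offset = bearing - area * STEP_DEG  # position of the bearing inside that area
    if offset < STEP_DEG / 2:           # first half -> adjacent lower candidate direction
        return clock_label(area)
    return clock_label((area + 1) % NUM_DIRECTIONS)  # second half -> upper direction

print(snap_to_candidate(100.0))  # first half of the 90-120 degree area -> "3 o'clock direction"
```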
In step 104, prompt information containing the target azimuth is displayed in a partial display area of the user interface, the prompt information including a textual description of the target azimuth.
After the target azimuth is determined, prompt information is generated from it, displayed in a partial display area of the user interface, and broadcast by voice. The prompt information includes a textual description of the target azimuth that indicates to the user that a second virtual object exists at the target azimuth.
Specifically, referring to fig. 1h, fig. 1h is a schematic diagram of a second user interface according to an embodiment of the present application. Fig. 1h takes a direction as the target azimuth and labels the candidate directions by way of a clock dial. The textual prompt "six o'clock direction" is displayed in a partial display area located at the upper center of the user interface, so the user can see it more intuitively. The partial display area is not limited to the position above the center of the user interface and may be located elsewhere in the interface, which is not limited here.
In some embodiments, the step of displaying a prompt including the target azimuth in a portion of a display area of the user interface includes:
(1) displaying prompt information containing the target azimuth in a partial display area of the user interface, and broadcasting the prompt information by voice.
Broadcasting the prompt information by voice further prevents the user from failing to see it and improves the prompting effect.
In some embodiments, after the step of obtaining the first location of the first virtual object in the three-dimensional virtual environment and the second location of the second virtual object in the three-dimensional virtual environment, further comprising:
(1) Determining a target sound source type corresponding to sound emitted by the second virtual object according to a preset mapping relation;
the step of displaying a prompt message including the target position in a portion of a display area of the user interface, the prompt message including a text description of the target position, includes:
(2) And displaying prompt information containing the target azimuth and the target sound source type in a part of display area of the user interface, wherein the prompt information comprises text description of the target azimuth and the target sound source type.
The second virtual object may be any of a virtual vehicle, a virtual character, or a virtual weapon. To help the user determine what kind of object the second virtual object in the target azimuth is, a mapping between virtual objects and sound source types can be established in advance: for example, footsteps are emitted by a virtual character, engine sound by a virtual vehicle, and gunfire by a virtual weapon. The target sound source type corresponding to the sound emitted by the second virtual object can then be determined from this preset mapping. When the prompt information is generated, it can therefore include the target sound source type in addition to the target azimuth, and be displayed on the user interface.
Specifically, as shown in fig. 1i, fig. 1i is a schematic diagram of a third user interface provided in an embodiment of the present application. Compared with fig. 1h, the prompt information in fig. 1i additionally includes the target sound source type (footsteps) corresponding to the second virtual object. By identifying the sound source type and presenting it as a textual description, the user is further helped to determine what object the second virtual object in the target azimuth is, so the player can choose the best combat approach based on the target azimuth and the sound source type; for example, when the sound source type is the engine sound of a car, the player can choose a rocket launcher or a rifle to attack the car.
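A minimal sketch of such a preset mapping; the pairings follow the examples in the text, while the event keys and the "unknown" fallback are assumptions.

```python
# Preset mapping from the sound emitted by the second virtual object to its
# target sound source type: footsteps from a virtual character, engine sound
# from a virtual vehicle, gunfire from a virtual weapon.
SOUND_SOURCE_TYPES = {
    "footsteps": "virtual character",
    "engine sound": "virtual vehicle",
    "gunfire": "virtual weapon",
}

def target_sound_source_type(sound_event: str) -> str:
    """Look up the target sound source type for a detected sound event."""
    return SOUND_SOURCE_TYPES.get(sound_event, "unknown")

print(target_sound_source_type("engine sound"))  # -> "virtual vehicle"
```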
In some embodiments, after the step of determining the target sound source type corresponding to the sound emitted by the second virtual object according to the preset mapping relationship, the method further includes:
(1.1) determining a target distance value between the first location and the second location;
the step of displaying a prompt message including the target azimuth in a partial display area of the user interface, the prompt message including the target azimuth and a text description of the target audio source type, includes:
(1.2) displaying, in a portion of the display area of the user interface, a hint information comprising the target bearing, the target sound source type, and the target distance value, the hint information comprising a textual description of the target bearing, the target sound source type, and the target distance value.
The target distance value between the first position and the second position can also be obtained, so that when the prompt information is generated it can include the target distance value in addition to the target azimuth and the target sound source type, and be displayed on the user interface.
Specifically, as shown in fig. 1j, fig. 1j is a schematic diagram of a fourth user interface provided in an embodiment of the present application. Compared with fig. 1i, the prompt information in fig. 1j additionally includes a target distance value (50 meters), so the textual description of the target distance value helps the user judge how far the second virtual object in the target azimuth is from the first virtual object. The player can choose the best combat approach based on the target azimuth, the sound source type, and the target distance value; for example, when the sound source type is the engine sound of a car, the player can choose a rocket launcher or a rifle to attack the car: when the target distance value to the car is large, the player can use the rocket launcher for a long-range attack, and when it is small, the player can use the rifle for a close-range attack.
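Putting the three pieces of information together, a sketch of assembling the textual prompt of fig. 1j; the exact wording and ordering are assumptions.

```python
def build_prompt(target_azimuth: str, sound_source_type: str, distance_m: float) -> str:
    """Assemble the textual description shown in the partial display area,
    combining target azimuth, target sound source type, and target distance."""
    return f"{target_azimuth}, {sound_source_type}, {distance_m:.0f} meters"

print(build_prompt("six o'clock direction", "footsteps", 50.0))
# -> "six o'clock direction, footsteps, 50 meters"
```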
In some embodiments, after the step of displaying the prompt information including the target azimuth, the target sound source type and the distance value in the partial display area of the user interface, the method further includes:
(1) Determining whether the second virtual object is a virtual weapon according to the target sound source type;
(2) If the second virtual object is a virtual weapon, determining a target virtual weapon corresponding to the target sound source type;
(3) Acquiring a first attack distance value of the target virtual weapon and acquiring a second attack distance value corresponding to the virtual weapon in the first virtual object;
(4) And controlling the first virtual object to move according to the first attack distance value, the second attack distance value and the target distance value.
Whether the second virtual object is a virtual weapon is determined from the target sound source type; if so, the target virtual weapon emitting the sound (for example, an AK-47 or a 98K) is determined according to the mapping relationship. In the game settings, different weapons have different effective attack distance values; for example, the AK-47 has an attack distance value of 500 meters and the 98K one of 1 kilometer. A first attack distance value of the target virtual weapon and a second attack distance value of the virtual weapon held by the first virtual object can therefore be obtained, and the first virtual object is controlled to move according to the first attack distance value, the second attack distance value, and the target distance value.
In some embodiments, the step of controlling the movement of the first virtual object according to the first attack distance value, the second attack distance value, and the target distance value includes:
(1.1) when the second attack distance value is greater than the first attack distance value, determining whether the target distance value is greater than the second attack distance value;
and (1.2) if the target distance value is greater than the second attack distance value, controlling the first virtual object to move in a direction approaching the second virtual object until the distance between the second virtual object and the first virtual object equals the second attack distance value.
Referring to fig. 1k, fig. 1k is a schematic diagram of controlling movement of a first virtual object according to an embodiment of the present application. In fig. 1k, the second virtual object is located at point X, the first attack distance value of the target virtual weapon of the second virtual object is L1, the first virtual object is located at point Y, and the second attack distance value of the virtual weapon of the first virtual object is L2. The first virtual object can therefore be controlled to move toward the second virtual object until it reaches point O. As can be seen, point O is a position at which the first virtual object can just launch an effective attack on the second virtual object, while the second virtual object cannot launch an effective attack on the first virtual object.
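The movement rule of steps (1.1)-(1.2) and the geometry of fig. 1k might be sketched as follows, under the assumption that positions are (x, y, z) tuples in the world coordinate system:

```python
import math

def control_movement(first_pos, second_pos, first_attack, second_attack):
    """Move the first virtual object toward the second until their
    separation equals the second attack distance value (point O in
    fig. 1k); applies only when the first object out-ranges the second."""
    distance = math.dist(first_pos, second_pos)
    if second_attack > first_attack and distance > second_attack:
        travel = distance - second_attack
        # Step toward the second object along the connecting line.
        return tuple(a + (b - a) / distance * travel
                     for a, b in zip(first_pos, second_pos))
    return first_pos  # no movement required

# Y = (0, 0, 0), X = (0, 0, 1200); L1 = 500 (AK-47), L2 = 1000 (98K).
print(control_movement((0, 0, 0), (0, 0, 1200), 500.0, 1000.0))
# -> (0.0, 0.0, 200.0): exactly 1000 meters (L2) from the second object.
```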
In some embodiments, after displaying the prompt information including the target azimuth in a part of the display area of the user interface and broadcasting the prompt information by voice, the method further includes:
(1) Generating a target control, and judging whether the target control receives an operation instruction within a preset time;
(2) And if the target control receives the operation instruction within the preset time, controlling the first virtual object to move along the direction opposite to the target azimuth.
The user can also decide whether to control the first virtual object to engage the second virtual object. To support this, a target control can be generated: if the user performs an operation instruction on the target control within a preset time, the first virtual object is moved in a direction away from the second virtual object. The operation instruction may be, but is not limited to, the user clicking the target control or sliding on the target control.
In some embodiments, the method further comprises:
(1) And if the target control does not receive the operation instruction within the preset time, prohibiting the first virtual object from moving in the direction opposite to the target azimuth.
If the target control does not receive the operation instruction within the preset time, this indicates that the user does not want to move away from the second virtual object, and the first virtual object is prohibited from moving in the direction opposite to the target azimuth.
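A sketch of this time-window logic follows; the class name, window length, and callback names are assumptions for illustration:

```python
import time

class RetreatControl:
    """Hypothetical target control: if the user operates it within a
    preset time window, the first virtual object may move opposite to
    the target azimuth; otherwise that movement stays prohibited."""

    def __init__(self, preset_seconds=5.0):
        self.deadline = time.monotonic() + preset_seconds
        self.operated = False

    def on_operation(self):
        # Click or slide on the control counts as an operation instruction.
        if time.monotonic() <= self.deadline:
            self.operated = True

    def may_retreat(self):
        return self.operated

control = RetreatControl(preset_seconds=5.0)
control.on_operation()        # user taps within the window
print(control.may_retreat())  # True -> move away from the target azimuth
```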
As can be seen from the foregoing, in the embodiment of the present application, a user interface is displayed, where the user interface includes an environment picture presented when a first virtual object observes a three-dimensional virtual environment; in response to a sound emitted by a second virtual object in the three-dimensional virtual environment, a first position of the first virtual object in the three-dimensional virtual environment and a second position of the second virtual object in the three-dimensional virtual environment are acquired; a target azimuth of the second virtual object relative to the first virtual object is determined based on the first position and the second position; and prompt information containing the target azimuth is displayed in a partial display area of the user interface, the prompt information including a text description of the target azimuth. Displaying the prompt information in a part of the display area in this way prevents the user from missing it and improves the prompting effect of the prompt information.
The methods described in connection with the above embodiments are described in further detail below by way of example.
Referring to fig. 2a, fig. 2a is a second flowchart of a method for prompting a virtual object according to an embodiment of the present application. The method flow may include:
in step 301, a computer device displays a user interface including an environment screen presented when a first virtual object observes a three-dimensional virtual environment.
The user interface displays a three-dimensional virtual environment in the game scene. The three-dimensional virtual environment is a virtual environment provided when an application program runs on the terminal, and may be a simulation of the real world, a semi-simulated and semi-fictional environment, or a purely fictional environment. The first virtual object is the virtual character that the user controls in the game scene through the terminal, and the virtual object observes the three-dimensional virtual environment through a camera model. Taking an FPS game as an example, the camera model is located at the head or neck of the first virtual object at a first-person viewing angle, and behind the first virtual object at a third-person viewing angle. The user interface is the environment picture presented by observing the three-dimensional virtual environment through the camera model at a certain viewing angle.
Specifically, referring to fig. 1c, fig. 1c is a schematic diagram of a first user interface according to an embodiment of the present application. The user interface is presented on a screen of the client and includes a first virtual object 100 controlled by the user, an attack control 200 for controlling the first virtual object 100 to perform an attack operation in the three-dimensional virtual environment, a direction control 300 for prompting the user with the current direction information of the first virtual object 100, a movement control 400 for controlling the first virtual object 100 to move in the three-dimensional virtual environment, a sighting control 500 that the first virtual object 100 can use when attacking, a map control 600 for prompting the user with the position of the first virtual object 100 in the three-dimensional virtual environment, and the like. An indication control 301 is further disposed in the direction control 300 and is used for indicating the direction of the first virtual object 100 in the direction control 300.
In step 302, the computer device obtains a first location of a first virtual object in the three-dimensional virtual environment and a second location of a second virtual object in the three-dimensional virtual environment in response to sound made by the second virtual object in the three-dimensional virtual environment.
Referring to fig. 1d, fig. 1d is a schematic diagram of a world coordinate system in a three-dimensional virtual environment according to an embodiment of the present application. The three-dimensional virtual environment has a world coordinate system constructed by an X-axis, a Y-axis and a Z-axis, so that a first virtual object located in the three-dimensional virtual environment has its corresponding coordinates (X1, Y1, Z1) and a second virtual object manipulated by other users also has its corresponding coordinates (X2, Y2, Z2).
Specifically, the first position may be a first coordinate corresponding to the first virtual object in the world coordinate system, and the second position may be a second coordinate corresponding to the second virtual object in the world coordinate system.
When each computer device runs the game, it sends the position information (such as coordinates) of its virtual object to the server at intervals. The first coordinate corresponding to the first virtual object in the world coordinate system can therefore be obtained directly on the user's own computer device, while the second coordinate corresponding to the second virtual object in the world coordinate system needs to be forwarded by the server to the computer device controlling the first virtual object.
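As a toy illustration of this relay (the class and method names are assumptions, and a real game server would push updates over the network rather than answer local queries):

```python
class PositionServer:
    """Toy stand-in for the game server that relays each client's
    periodically reported coordinates to the other clients."""

    def __init__(self):
        self.positions = {}

    def report(self, object_id, coords):
        # Called by each client at intervals with its own coordinates.
        self.positions[object_id] = coords

    def query(self, object_id):
        return self.positions.get(object_id)

server = PositionServer()
server.report("first_virtual_object", (10.0, 0.0, 5.0))    # local client
server.report("second_virtual_object", (40.0, 0.0, 45.0))  # remote client
print(server.query("second_virtual_object"))  # relayed to the first client
```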
In step 303, the computer device determines an area within a preset range centered on the first location as a detection area.
The target azimuth specifically includes a direction of the second virtual object relative to the first virtual object, or an area in which the second virtual object is located relative to the first virtual object. Whichever form the target azimuth takes, it may be determined from the first position and the second position obtained in step 302.
Specifically, determining the target azimuth of the second virtual object relative to the first virtual object requires taking the first virtual object as a center. To this end, the developer sets a detection area for each virtual character, the detection area being the area within a preset range centered on that virtual character. When another virtual object (such as the second virtual object) is in the detection area, it can be detected by the virtual object at the center (such as the first virtual object) and prompted on the user interface.
In step 304, the computer device divides the detection area according to a preset angle to obtain a plurality of sub-detection areas.
Referring to fig. 1e, fig. 1e is a first schematic diagram of a detection area according to an embodiment of the present application. The detection area may be divided into a plurality of sub-detection areas, including sub-detection area 111, sub-detection area 112, sub-detection area 113, sub-detection area 114, sub-detection area 115, sub-detection area 116, sub-detection area 117, and sub-detection area 118. The sub-detection areas may be divided equally or randomly, which is not limited herein. The target azimuth of the second virtual object relative to the first virtual object is then determined according to the second position and the divided sub-detection areas.
In step 305, the computer device identifies each sub-detection area to generate a first mark, so as to obtain a plurality of sub-detection areas with different first marks.
When the area is taken as the target azimuth, each divided sub-detection area can be marked, so that each sub-detection area corresponds to a first mark which is different from other sub-detection areas.
For example, the sub-detection region 111, the sub-detection region 112, the sub-detection region 113, the sub-detection region 114, the sub-detection region 115, the sub-detection region 116, the sub-detection region 117, and the sub-detection region 118 are respectively identified such that the first mark corresponding to the sub-detection region 111 is a, the first mark corresponding to the sub-detection region 112 is B, the first mark corresponding to the sub-detection region 113 is C, the first mark corresponding to the sub-detection region 114 is D, the first mark corresponding to the sub-detection region 115 is E, the first mark corresponding to the sub-detection region 116 is F, the first mark corresponding to the sub-detection region 117 is G, and the first mark corresponding to the sub-detection region 118 is H.
In step 306, when the computer device detects that the second position of the second virtual object is in the detection area, the sub-detection area in which the second position is located is determined as the target sub-detection area.
Whether the second position is located in the detection area of the first virtual object is determined; if so, the sub-detection area in which the second position is located is determined and taken as the target sub-detection area.
For example, if the second position is in the sub-detection area 113, the sub-detection area 113 is determined as the target sub-detection area.
In step 307, the computer device determines a first target mark corresponding to the target sub-detection area from the first marks, and determines the sub-detection area corresponding to the first target mark as the target azimuth.
And determining a first mark corresponding to the target sub-detection area as a first target mark, and determining the sub-detection area corresponding to the first target mark as a target azimuth.
For example, the sub-detection area 113 is determined as a target sub-detection area, and a first mark C corresponding to the target sub-detection area is acquired, the first mark C is determined as a first target mark, and the sub-detection area 113 corresponding to the first target mark is determined as a target azimuth.
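Steps 303 to 307 might be sketched as follows, assuming a detection area that is a circle of an assumed radius, divided equally into eight 45-degree sub-detection areas marked A to H:

```python
import math

PRESET_RADIUS = 100.0   # detection range, an assumed value (meters)
PRESET_ANGLE = 45.0     # 360 / 45 = 8 sub-detection areas
FIRST_MARKS = "ABCDEFGH"

def target_mark(first_pos, second_pos):
    """Return the first target mark of the sub-detection area that the
    second position falls in, or None when outside the detection area."""
    dx = second_pos[0] - first_pos[0]
    dz = second_pos[2] - first_pos[2]          # horizontal plane only
    if math.hypot(dx, dz) > PRESET_RADIUS:
        return None
    # Angle measured clockwise from the +Z (forward) axis, in [0, 360).
    angle = math.degrees(math.atan2(dx, dz)) % 360.0
    return FIRST_MARKS[int(angle // PRESET_ANGLE)]

print(target_mark((0, 0, 0), (30, 0, 40)))  # 'A': the 0-45 degree sector
```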
In step 308, the computer device marks the directions between adjacent sub-detection areas of the plurality of sub-detection areas to generate second marks, so as to obtain a plurality of candidate directions with different second marks.
When the direction is taken as the target azimuth, the candidate directions between adjacent sub-detection areas of the divided sub-detection areas can be marked, so that each candidate direction between adjacent sub-detection areas corresponds to a second mark different from those of the other candidate directions.
In step 309, when the computer device detects that the second position of the second virtual object is in the detection area, it determines whether the second position is located in one of the plurality of candidate directions.
For example, in fig. 1f, a candidate direction 211 between the sub-detection region 111 and the sub-detection region 112, a candidate direction 212 between the sub-detection region 112 and the sub-detection region 113, a candidate direction 213 between the sub-detection region 113 and the sub-detection region 114, a candidate direction 214 between the sub-detection region 114 and the sub-detection region 115, a candidate direction 215 between the sub-detection region 115 and the sub-detection region 116, a candidate direction 216 between the sub-detection region 116 and the sub-detection region 117, a candidate direction 217 between the sub-detection region 117 and the sub-detection region 118, and a candidate direction 218 between the sub-detection region 118 and the sub-detection region 111 are marked. And each candidate direction is identified, so as to obtain a second mark corresponding to the candidate direction 211 as a, a second mark corresponding to the candidate direction 212 as b, a second mark corresponding to the candidate direction 213 as c, a second mark corresponding to the candidate direction 214 as d, a second mark corresponding to the candidate direction 215 as e, a second mark corresponding to the candidate direction 216 as f, a second mark corresponding to the candidate direction 217 as g, and a second mark corresponding to the candidate direction 218 as h.
In step 310, if the second location is located in one of the plurality of candidate directions, the computer device determines the candidate direction in which the second location is located as the target direction.
And determining whether the second position is located in one of the candidate directions, if so, determining the candidate direction in which the second position is located, and determining the candidate direction as a target direction.
For example, when the second location is located in the candidate direction 214, the candidate direction 214 is determined as the target direction.
In step 311, the computer device determines a second target mark corresponding to the target direction from the second marks, and determines the target direction of the second target mark as the target azimuth.
And determining a second mark corresponding to the target direction as a second target mark, and determining the target direction of the second target mark as a target azimuth.
For example, when the second position is located in the candidate direction 214, the candidate direction 214 is determined as the target direction, the second target mark corresponding to the candidate direction 214 is determined as d, and the target direction of the second target mark is determined as the target azimuth.
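Steps 308 to 311 might look like the following sketch, where a position counts as lying in a candidate direction when its angle falls within a small assumed tolerance of a sector boundary:

```python
import math

PRESET_ANGLE = 45.0
SECOND_MARKS = "abcdefgh"
TOLERANCE = 1.0  # assumed angular tolerance, in degrees

def candidate_direction_mark(first_pos, second_pos):
    """Return the second mark of the candidate direction the second
    position lies in, or None if it lies inside a sub-detection area."""
    dx = second_pos[0] - first_pos[0]
    dz = second_pos[2] - first_pos[2]          # horizontal plane only
    angle = math.degrees(math.atan2(dx, dz)) % 360.0
    nearest = round(angle / PRESET_ANGLE) % 8  # nearest sector boundary
    diff = abs(angle - nearest * PRESET_ANGLE)
    if min(diff, 360.0 - diff) <= TOLERANCE:   # handle the 0/360 wrap
        return SECOND_MARKS[nearest]
    return None

print(candidate_direction_mark((0, 0, 0), (0, 0, 50)))  # 'a' (dead ahead)
```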
In step 312, the computer device displays a hint information including the target bearing within a portion of a display area of the user interface, the hint information including a textual description of the target bearing.
After the target azimuth is determined, prompt information is generated according to the target azimuth, displayed in a partial display area of the user interface, and broadcast by voice. The prompt information may be used to indicate to the user that the second virtual object exists at the target azimuth.
Specifically, referring to fig. 1h, fig. 1h takes a direction as the target azimuth, and the candidate directions are identified in the manner of a clock dial. The prompt information with the text "six o'clock direction" is displayed in a partial display area of the user interface and broadcast by voice, so that the user can see the prompt information more intuitively, is prevented from missing it, and the prompting effect of the prompt information is improved. The partial display area is not limited to a position above the center of the user interface and may be located elsewhere in the user interface, which is not limited herein.
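The clock-dial wording could be produced from a relative angle as in the sketch below; the rounding convention is an assumption:

```python
HOURS = ("twelve", "one", "two", "three", "four", "five",
         "six", "seven", "eight", "nine", "ten", "eleven")

def clock_bearing(relative_angle_degrees):
    """Map a relative angle (0 = straight ahead, increasing clockwise)
    to the clock-dial wording illustrated in fig. 1h."""
    hour = round((relative_angle_degrees % 360.0) / 30.0) % 12
    return f"{HOURS[hour]} o'clock direction"

print(clock_bearing(180.0))  # six o'clock direction
```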
Referring to fig. 2b, fig. 2b is a third flowchart of a method for prompting a virtual object according to an embodiment of the present application. The method flow may include:
in step 401, a computer device displays a user interface including an environment screen presented when a first virtual object observes a three-dimensional virtual environment.
In step 402, the computer device obtains a first location of a first virtual object in the three-dimensional virtual environment and a second location of a second virtual object in the three-dimensional virtual environment in response to sound made by the second virtual object in the three-dimensional virtual environment.
Step 401 and step 402 are the same as step 301 and step 302, and are not described herein.
In step 403, the computer device determines, according to the preset mapping relationship, a target sound source type corresponding to the sound emitted by the second virtual object.
The second virtual object may be any one of a virtual vehicle, a virtual character, and a virtual weapon. Therefore, to help the user determine which kind of second virtual object is in the target azimuth, a mapping relationship between virtual objects and sound source types may be established in advance: for example, footstep sounds are emitted by virtual characters, engine sounds by virtual vehicles, and gunshot sounds by virtual weapons. The target sound source type corresponding to the sound emitted by the second virtual object can then be determined according to this preset mapping relationship.
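The preset mapping relation of step 403 can be as simple as a lookup table; the key and value names below are illustrative assumptions following the footstep/engine/gunshot example above:

```python
# Hypothetical preset mapping relation between sounds and source types.
SOUND_SOURCE_TYPE = {
    "footsteps": "virtual character",
    "engine": "virtual vehicle",
    "gunshot": "virtual weapon",
}

def target_sound_source_type(sound):
    """Step 403: look up the target sound source type for a sound."""
    return SOUND_SOURCE_TYPE.get(sound, "unknown source")

print(target_sound_source_type("engine"))  # virtual vehicle
```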
In step 404, the computer device determines a target distance value between the first location and the second location.
The target distance value between the first position and the second position can be obtained, so that when the prompt information is generated, the target distance value can be included in addition to the target azimuth and the target sound source type and displayed on the user interface.
In step 405, the computer device determines an area within a preset range centered on the first location as a detection area.
In step 406, the computer device divides the detection area according to a preset angle to obtain a plurality of sub-detection areas.
In step 407, the computer device identifies directions between adjacent sub-detection areas of the plurality of sub-detection areas to generate a second label, so as to obtain a plurality of candidate directions in which the second labels are different.
In step 408, when the computer device detects that the second location of the second virtual object is in the detection zone, it is determined whether the second location is in one of a plurality of candidate directions.
Steps 405 and 406 are the same as steps 303 and 304, and steps 407 and 408 are the same as steps 308 and 309, respectively; they are not described here again.
In step 409, if the second location is not located in one of the plurality of candidate directions, the computer device obtains a sub-detection area in which the second location is located.
When the direction is taken as the target azimuth, there is also the case where the second position is not located in any candidate direction; a rule therefore needs to be set for how to handle the case where the second position lies inside a sub-detection area.
In step 410, the computer device divides the sub-detection area in which the second location is located into a first sub-detection area and a second sub-detection area on average.
The sub-detection areas can be divided equally, so that a divided first sub-detection area and a divided second sub-detection area are obtained.
For example, when the second position is in the sub-detection area 112, the sub-detection area 112 is divided equally, thereby obtaining a first sub-detection area 1121, and a second sub-detection area 1122.
In step 411, the computer device obtains a first candidate direction adjacent to the first sub-detection area and obtains a second candidate direction adjacent to the second sub-detection area.
Wherein a first candidate direction adjacent to the first sub-detection area and a second candidate direction adjacent to the second sub-detection area are obtained.
In step 412, if the second location is located in the first sub-detection zone, the computer device determines the first candidate direction as the target direction.
For example, if the second position is in the first sub-detection area 1121, the candidate direction 211 adjacent to the first sub-detection area 1121 is determined as the target direction; likewise, if the second position is in the second sub-detection area 1122, the adjacent candidate direction 212 is determined as the target direction.
In step 413, if the second location is located in the second sub-detection zone, the computer device determines the second candidate direction as the target direction.
Step 413 is similar to step 412 and will not be described here again.
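Steps 409 to 413 amount to halving the sub-detection area and snapping to the boundary adjacent to the half that contains the second position; a sketch under the same eight-sector assumption as before:

```python
PRESET_ANGLE = 45.0
SECOND_MARKS = "abcdefgh"

def snap_to_candidate_direction(angle_degrees):
    """Steps 409-413 sketch: when the second position lies inside a
    sub-detection area rather than on a boundary, split the area into
    two equal halves and take the candidate direction adjacent to the
    half containing the position as the target direction."""
    angle = angle_degrees % 360.0
    sector = int(angle // PRESET_ANGLE)
    within = angle - sector * PRESET_ANGLE
    if within < PRESET_ANGLE / 2:          # first half -> lower boundary
        boundary = sector
    else:                                  # second half -> upper boundary
        boundary = (sector + 1) % 8
    return SECOND_MARKS[boundary]

print(snap_to_candidate_direction(100.0))  # 'c': nearer the 90-degree boundary
```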
In step 414, the computer device displays a prompt including the target bearing, the target sound source type, and the target distance value in a portion of the display area of the user interface.
The target distance value between the first position and the second position can be obtained, so that when the prompt information is generated, the target distance value can be included in addition to the target azimuth and the target sound source type and displayed on the user interface.
Specifically, the prompt information in fig. 1j further includes a target distance value (50 meters), so that the target distance value in the prompt information helps the user determine the distance between the second virtual object and the first virtual object in the target azimuth.
In step 415, the computer device determines whether the second virtual object is a virtual weapon based on the target sound source type.
Wherein, whether the second virtual object is a virtual weapon is determined according to the target sound source type.
In step 416, if the second virtual object is a virtual weapon, the computer device determines a target virtual weapon corresponding to the target sound source type.
If the second virtual object is a virtual weapon, it is determined, according to the mapping relationship, which weapon (for example, an AK-47 or a 98K) emitted the target sound source type.
In step 417, the computer device obtains a first attack distance value for the target virtual weapon, and obtains a second attack distance value for the virtual weapon in the first virtual object.
In the game setting, the effective attack distance values corresponding to different weapons are different, for example, the attack distance value of AK-47 is 500 meters, and the attack distance value of 98K is 1 km. Therefore, a first attack distance value of the target virtual weapon and a second attack distance value corresponding to the virtual weapon in the first virtual object can be obtained.
In step 418, when the second attack distance value is greater than the first attack distance value, the computer device determines whether the target distance value is greater than the second attack distance value.
In fig. 1k, the second virtual object is located at the X point, the first attack distance value of the target virtual weapon in the second virtual object is L1, the first virtual object is located at the Y point, and the second attack distance value of the virtual weapon in the first virtual object is L2.
In step 419, if the target distance value is greater than the second attack distance value, the computer device controls the first virtual object to move in a direction approaching the second virtual object until the distance between the second virtual object and the first virtual object is the second attack distance value.
The first virtual object can be controlled to move toward the second virtual object until it reaches point O. As can be seen, point O is the position at which the first virtual object can just launch an effective attack on the second virtual object, while the second virtual object cannot launch an effective attack on the first virtual object.
As can be seen from the foregoing, in the embodiment of the present application, a user interface is displayed, where the user interface includes an environment picture presented when a first virtual object observes a three-dimensional virtual environment; in response to a sound emitted by a second virtual object in the three-dimensional virtual environment, a first position of the first virtual object in the three-dimensional virtual environment and a second position of the second virtual object in the three-dimensional virtual environment are acquired; a target azimuth of the second virtual object relative to the first virtual object is determined based on the first position and the second position; and prompt information containing the target azimuth is displayed in a partial display area of the user interface, the prompt information including a text description of the target azimuth. Displaying the prompt information in a part of the display area in this way prevents the user from missing it and improves the prompting effect of the prompt information.
In order to facilitate better implementation of the virtual object prompting method provided by the embodiments of the present application, an embodiment of the present application further provides an apparatus based on the virtual object prompting method. The meanings of the terms are the same as those in the above virtual object prompting method, and for specific implementation details, reference may be made to the description in the method embodiments.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a virtual object prompting device according to an embodiment of the present application, where the virtual object prompting device may include a display module 501, a first obtaining module 502, a first determining module 503, a prompting module 504, and so on.
A display module 501, configured to display a user interface, where the user interface includes an environment screen that is presented when a first virtual object observes a three-dimensional virtual environment;
a first obtaining module 502, configured to obtain, in response to a sound made by a second virtual object in the three-dimensional virtual environment, a first position of the first virtual object in the three-dimensional virtual environment, and a second position of the second virtual object in the three-dimensional virtual environment;
a first determining module 503, configured to determine a target azimuth of the second virtual object relative to the first virtual object based on the first position and the second position;
And the prompt module 504 is configured to display, in a partial display area of the user interface, prompt information including the target azimuth, where the prompt information includes a text description of the target azimuth.
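For orientation only, these four modules might be wired together as in the following sketch; all names, including the show_prompt call, are assumptions rather than the disclosed implementation:

```python
class VirtualObjectPromptingDevice:
    """Skeleton of the module layout in fig. 3; bodies are placeholders."""

    def __init__(self, ui, position_source):
        self.ui = ui                            # backs display module 501
        self.position_source = position_source

    def on_sound(self, second_object_id):
        # First obtaining module 502: fetch both positions.
        first = self.position_source.query("first_virtual_object")
        second = self.position_source.query(second_object_id)
        # First determining module 503: compute the target azimuth.
        azimuth = self.determine_azimuth(first, second)
        # Prompt module 504: show the hint in part of the display area.
        self.ui.show_prompt(f"Enemy at {azimuth}")

    def determine_azimuth(self, first, second):
        ...  # detection-area division and mark lookup, as sketched earlier
```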
In some embodiments, the first determining module 503 includes:
the first determining submodule is used for determining an area in a preset range as a detection area by taking the first position as a center;
the dividing sub-module is used for dividing the detection area according to a preset angle to obtain a plurality of sub-detection areas;
and a second determination sub-module for determining a target azimuth of the second virtual object relative to the first virtual object based on the plurality of sub-detection areas and the second position.
In some embodiments, the second determining sub-module comprises:
the first identification unit is used for identifying each sub-detection area to generate a first mark so as to obtain a plurality of sub-detection areas with different first marks;
a first determining unit, configured to determine, when it is detected that a second position of the second virtual object is in the detection area, a sub-detection area in which the second position is located as a target sub-detection area;
and the second determining unit is used for determining a first target mark corresponding to the target sub-detection area from the first marks and determining the sub-detection area corresponding to the first target mark as a target azimuth.
In some embodiments, the second determining sub-module comprises:
the second identification unit is used for identifying the directions between adjacent sub-detection areas in the plurality of sub-detection areas to generate a second mark so as to obtain a plurality of candidate directions with different second marks;
a judging unit configured to judge whether a second position of the second virtual object is located in one of the candidate directions when the second position is detected to be located in the detection area;
a third determining unit configured to determine, as a target direction, a candidate direction in which the second position is located if the second position is located in one of the plurality of candidate directions;
and a fourth determining unit, configured to determine a second target mark corresponding to the target direction from the second marks, and determine the target direction of the second target mark as a target azimuth.
In some embodiments, the second determination submodule further includes:
the first acquisition unit is used for acquiring a sub-detection area where the second position is located if the second position is not located in one candidate direction in the plurality of candidate directions;
the dividing unit is used for equally dividing the sub-detection area where the second position is located into a first sub-detection area and a second sub-detection area;
A second obtaining unit, configured to obtain a first candidate direction adjacent to the first sub-detection area, and obtain a second candidate direction adjacent to the second sub-detection area;
and a fifth determining unit configured to determine the target direction from the first candidate direction and the second candidate direction based on a positional relationship between the second position and the first sub-detection area or a positional relationship between the second position and the second sub-detection area, and return to perform the step of determining a second target mark corresponding to the target direction from the second marks.
In some embodiments, the fifth determining unit is configured to:
if the second position is located in the first sub-detection area, determining the first candidate direction as the target direction;
and if the second position is located in the second sub-detection area, determining the second candidate direction as the target direction.
In some embodiments, the apparatus further comprises:
the second determining module is used for determining a target sound source type corresponding to the sound emitted by the second virtual object according to a preset mapping relation;
the prompt module is further configured to display, in a partial display area of the user interface, prompt information including the target azimuth and the target sound source type, where the prompt information includes a text description of the target azimuth and the target sound source type.
In some embodiments, the apparatus further comprises:
a third determining module for determining a target distance value between the first location and the second location;
and the prompt module is also used for displaying prompt information comprising the target azimuth, the target sound source type and the target distance value in a part of display area of the user interface, wherein the prompt information comprises text description of the target azimuth, the target sound source type and the target distance value.
In some embodiments, the apparatus further comprises:
a fourth determining module, configured to determine whether the second virtual object is a virtual weapon according to the target sound source type;
a fifth determining module, configured to determine, if the second virtual object is a virtual weapon, a target virtual weapon corresponding to the target sound source type;
the second acquisition module is used for acquiring a first attack distance value of the target virtual weapon and acquiring a second attack distance value corresponding to the virtual weapon in the first virtual object;
the first control module is used for controlling the first virtual object to move according to the first attack distance value, the second attack distance value and the target distance value.
In some embodiments, the first control module comprises:
The judging submodule is used for judging whether the target distance value is larger than the second attack distance value or not when the second attack distance value is larger than the first attack distance value;
and the control sub-module is used for controlling the first virtual object to move towards the direction close to the second virtual object if the target distance value is larger than the second attack distance value until the distance between the second virtual object and the first virtual object is the second attack distance value.
In some embodiments, the apparatus further comprises:
the generating module is used for generating a target control and judging whether the target control receives an operation instruction in a preset time;
and the second control module is used for controlling the first virtual object to move along the direction opposite to the target azimuth if the target control receives the operation instruction within the preset time.
In some embodiments, the prompting module 504 includes:
and the broadcasting sub-module is used for displaying prompt information containing the target azimuth in a part of display areas of the user interface and broadcasting the prompt information in a voice way.
As can be seen from the foregoing, in the embodiment of the present application, the display module 501 displays a user interface, where the user interface includes an environment picture presented when the first virtual object observes the three-dimensional virtual environment; the first obtaining module 502 acquires, in response to a sound emitted by the second virtual object in the three-dimensional virtual environment, a first position of the first virtual object in the three-dimensional virtual environment and a second position of the second virtual object in the three-dimensional virtual environment; the first determining module 503 determines a target azimuth of the second virtual object relative to the first virtual object based on the first position and the second position; and the prompt module 504 displays prompt information containing the target azimuth in a partial display area of the user interface, the prompt information including a text description of the target azimuth. Displaying the prompt information in a part of the display area in this way prevents the user from missing it and improves the prompting effect of the prompt information.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
Correspondingly, the embodiment of the present application further provides a computer device, which may be a terminal or a server. The terminal may be a terminal device such as a smartphone, a tablet computer, a notebook computer, a touch screen, a game console, a personal computer (PC, Personal Computer), or a personal digital assistant (PDA, Personal Digital Assistant). As shown in fig. 4, fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present application. The computer device 600 includes a processor 601 having one or more processing cores, a memory 602 having one or more computer-readable storage media, and a computer program stored on the memory 602 and executable on the processor. The processor 601 is electrically connected to the memory 602. It will be appreciated by those skilled in the art that the computer device structure shown in the figure does not limit the computer device, and the computer device may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
The processor 601 is a control center of the computer device 600, connects various parts of the entire computer device 600 using various interfaces and lines, and performs various functions of the computer device 600 and processes data by running or loading software programs and/or modules stored in the memory 602, and calling data stored in the memory 602, thereby performing overall monitoring of the computer device 600.
In the embodiment of the present application, the processor 601 in the computer device 600 loads instructions corresponding to the processes of one or more application programs into the memory 602, and the processor 601 executes the application programs stored in the memory 602, thereby implementing various functions as follows:
displaying a user interface, wherein the user interface comprises an environment picture presented when a first virtual object observes a three-dimensional virtual environment; in response to a sound emitted by a second virtual object in the three-dimensional virtual environment, acquiring a first position of the first virtual object in the three-dimensional virtual environment and a second position of the second virtual object in the three-dimensional virtual environment; determining a target azimuth of the second virtual object relative to the first virtual object based on the first position and the second position; and displaying prompt information containing the target azimuth in a partial display area of the user interface, wherein the prompt information comprises a text description of the target azimuth.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
Optionally, as shown in fig. 4, the computer device 600 further includes: a touch display 603, a radio frequency circuit 604, an audio circuit 605, an input unit 606, and a power supply 607. The processor 601 is electrically connected to the touch display 603, the radio frequency circuit 604, the audio circuit 605, the input unit 606, and the power supply 607, respectively. Those skilled in the art will appreciate that the computer device structure shown in fig. 4 does not limit the computer device, and the computer device may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
The touch display 603 may be used to display a graphical user interface and receive operation instructions generated by the user acting on the graphical user interface. The touch display 603 may include a display panel and a touch panel. The display panel may be used to display information entered by or provided to the user, as well as various graphical user interfaces of the computer device, which may be composed of graphics, text, icons, video, and any combination thereof. Alternatively, the display panel may be configured in the form of a liquid crystal display (LCD, Liquid Crystal Display), an organic light-emitting diode (OLED), or the like. The touch panel may be used to collect touch operations on or near it (such as operations performed by the user on or near the touch panel using a finger, a stylus, or any other suitable object or accessory) and generate corresponding operation instructions that trigger the corresponding programs. Alternatively, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends the coordinates to the processor 601; it can also receive and execute commands sent by the processor 601. The touch panel may overlay the display panel; when the touch panel detects a touch operation on or near it, it passes the operation to the processor 601 to determine the type of the touch event, and the processor 601 then provides a corresponding visual output on the display panel based on the type of the touch event. In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display 603 to implement input and output functions. In some embodiments, however, the touch panel and the display panel may be implemented as two separate components to perform the input and output functions. That is, the touch display 603 may also implement an input function as part of the input unit 606.
In the embodiment of the application, the processor 601 executes the game application program to generate a graphical user interface on the touch display screen 603, wherein a virtual scene on the graphical user interface contains at least one functional control or wheel control. The touch display 603 is configured to present a graphical user interface and receive an operation instruction generated by a user acting on the graphical user interface.
The radio frequency circuit 604 may be configured to transmit and receive radio frequency signals so as to establish wireless communication with a network device or another computer device, and to transmit and receive signals to and from the network device or the other computer device.
The audio circuit 605 may be used to provide an audio interface between the user and the computer device through a speaker and a microphone. The audio circuit 605 may transmit the electrical signal converted from received audio data to the speaker, which converts it into a sound signal for output; conversely, the microphone converts a collected sound signal into an electrical signal, which is received by the audio circuit 605 and converted into audio data; the audio data is processed by the audio data output processor 601 and then sent, for example, to another computer device via the radio frequency circuit 604, or output to the memory 602 for further processing. The audio circuit 605 may also include an earbud jack to provide communication between peripheral headphones and the computer device.
The input unit 606 may be used to receive entered numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), as well as to generate keyboard, mouse, joystick, optical, or trackball signal inputs associated with user settings and function control.
The power supply 607 is used to power the various components of the computer device 600. Alternatively, the power supply 607 may be logically connected to the processor 601 through a power management system, so as to implement functions such as managing charging, discharging, and power consumption through the power management system. The power supply 607 may also include one or more direct-current or alternating-current power supplies, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, or any other such component.
Although not shown in fig. 4, the computer device 600 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., which are not described herein.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
As can be seen from the above, the computer device provided in this embodiment displays a user interface, where the user interface includes an environment picture presented when a first virtual object observes a three-dimensional virtual environment; acquires, in response to a sound emitted by a second virtual object in the three-dimensional virtual environment, a first position of the first virtual object in the three-dimensional virtual environment and a second position of the second virtual object in the three-dimensional virtual environment; determines a target azimuth of the second virtual object relative to the first virtual object based on the first position and the second position; and displays prompt information containing the target azimuth in a partial display area of the user interface, the prompt information including a text description of the target azimuth. Displaying the prompt information in a part of the display area in this way prevents the user from missing it and improves the prompting effect of the prompt information.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware, which may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present application provide a computer-readable storage medium in which a plurality of computer programs are stored, where the computer programs can be loaded by a processor to perform the steps in any virtual object prompting method provided by the embodiments of the present application. For example, the computer program may perform the following steps:
displaying a user interface, wherein the user interface comprises an environment picture presented when a first virtual object observes a three-dimensional virtual environment; in response to a sound emitted by a second virtual object in the three-dimensional virtual environment, acquiring a first position of the first virtual object in the three-dimensional virtual environment and a second position of the second virtual object in the three-dimensional virtual environment; determining a target azimuth of the second virtual object relative to the first virtual object based on the first position and the second position; and displaying prompt information containing the target azimuth in a partial display area of the user interface, wherein the prompt information comprises a text description of the target azimuth.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
Wherein the storage medium may include: read Only Memory (ROM), random access Memory (RAM, random Access Memory), magnetic or optical disk, and the like.
Since the computer program stored in the storage medium can execute the steps in any virtual object prompting method provided in the embodiments of the present application, the beneficial effects achievable by any virtual object prompting method provided in the embodiments of the present application can be achieved; details are given in the previous embodiments and are not described herein again.
The virtual object prompting method, apparatus, storage medium, and computer device provided in the embodiments of the present application are described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and its core idea. Meanwhile, those skilled in the art may make changes to the specific implementations and application scope in light of the ideas of the present application. In view of the above, the content of this specification should not be construed as limiting the present application.

Claims (15)

1. A method for prompting a virtual object, comprising:
displaying a user interface, wherein the user interface comprises an environment picture presented when a first virtual object observes a three-dimensional virtual environment;
in response to a sound made by a second virtual object in the three-dimensional virtual environment, acquiring a first position of the first virtual object in the three-dimensional virtual environment and a second position of the second virtual object in the three-dimensional virtual environment;
determining an area within a preset range as a detection area by taking the first position as a center;
dividing the detection area according to a preset angle to obtain a plurality of sub detection areas;
determining a target azimuth of the second virtual object relative to the first virtual object based on the plurality of sub-detection areas and the second position;
and displaying prompt information containing the target azimuth in a partial display area of the user interface, wherein the prompt information comprises a text description of the target azimuth.
2. The method of claim 1, wherein determining the target azimuth of the second virtual object relative to the first virtual object based on the plurality of sub-detection areas and the second position comprises:
Marking each sub-detection area to generate a first mark so as to obtain a plurality of sub-detection areas with different first marks;
when the second position of the second virtual object is detected to be in the detection area, determining a sub-detection area in which the second position is located as a target sub-detection area;
and determining a first target mark corresponding to the target sub-detection area from the first marks, and determining the sub-detection area corresponding to the first target mark as a target azimuth.
3. The method of claim 1, wherein determining the target azimuth of the second virtual object relative to the first virtual object based on the plurality of sub-detection areas and the second position comprises:
marking the directions between adjacent sub-detection areas in the plurality of sub-detection areas to generate a second mark so as to obtain a plurality of candidate directions with different second marks;
when the second position of the second virtual object is detected to be in the detection area, judging whether the second position is positioned in one candidate direction in the plurality of candidate directions or not;
If the second position is located in one candidate direction in the plurality of candidate directions, determining the candidate direction in which the second position is located as a target direction;
and determining a second target mark corresponding to the target direction from the second marks, and determining the target direction of the second target mark as a target azimuth.
4. The method for prompting a virtual object according to claim 3, further comprising:
if the second position is not located in one candidate direction in the plurality of candidate directions, acquiring a sub-detection area in which the second position is located;
dividing the sub-detection area where the second position is located into a first sub-detection area and a second sub-detection area on average;
acquiring a first candidate direction adjacent to the first sub-detection area and a second candidate direction adjacent to the second sub-detection area;
and determining the target direction from the first candidate direction and the second candidate direction based on the position relation between the second position and the first sub-detection area or the position relation between the second position and the second sub-detection area, and returning to execute the step of determining a second target mark corresponding to the target direction from the second marks.
5. The method according to claim 4, wherein the step of determining the target direction from the first candidate direction and the second candidate direction based on the positional relationship between the second position and the first sub-detection area or the positional relationship between the second position and the second sub-detection area includes:
if the second position is located in the first sub-detection area, determining the first candidate direction as the target direction;
and if the second position is located in the second sub-detection area, determining the second candidate direction as the target direction.
6. The method of claim 1, further comprising, after the step of acquiring a first position of the first virtual object in the three-dimensional virtual environment and a second position of the second virtual object in the three-dimensional virtual environment:
determining a target sound source type corresponding to sound emitted by the second virtual object according to a preset mapping relation;
the step of displaying the prompt information containing the target azimuth in a partial display area of the user interface, wherein the prompt information comprises a text description of the target azimuth, comprises the following steps:
and displaying prompt information containing the target azimuth and the target sound source type in a partial display area of the user interface, wherein the prompt information comprises a text description of the target azimuth and the target sound source type.
7. The method for prompting a virtual object according to claim 6, further comprising, after the step of determining a target sound source type corresponding to the sound emitted by the second virtual object according to the preset mapping relationship:
determining a target distance value between the first location and the second location;
the step of displaying the prompt information containing the target azimuth in a partial display area of the user interface, wherein the prompt information comprises a text description of the target azimuth and the target sound source type, comprises the following steps:
and displaying prompt information comprising the target azimuth, the target sound source type, and the target distance value in a partial display area of the user interface, wherein the prompt information comprises a text description of the target azimuth, the target sound source type, and the target distance value.
8. The method according to claim 7, further comprising, after the step of displaying the hint information including the target azimuth, the target sound source type, and the target distance value in a partial display area of the user interface:
determining whether the second virtual object is a virtual weapon according to the target sound source type;
if the second virtual object is a virtual weapon, determining a target virtual weapon corresponding to the target sound source type;
acquiring a first attack distance value of the target virtual weapon and acquiring a second attack distance value corresponding to the virtual weapon in the first virtual object;
and controlling the first virtual object to move according to the first attack distance value, the second attack distance value and the target distance value.
9. The method according to claim 8, wherein the step of controlling the first virtual object to move according to the first attack distance value, the second attack distance value and the target distance value comprises:
when the second attack distance value is greater than the first attack distance value, determining whether the target distance value is greater than the second attack distance value;
and if the target distance value is greater than the second attack distance value, controlling the first virtual object to move toward the second virtual object until the distance between the second virtual object and the first virtual object equals the second attack distance value.
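The movement control of claim 9 reduces to a range comparison: if the first virtual object's own weapon (second attack distance value) outranges the target weapon (first attack distance value) and the target is beyond that range, advance until the target sits exactly at the second attack distance value. A hedged sketch with invented names:

```python
# Sketch of claim 9 (names invented): advance only when the own weapon
# (second attack distance value) outranges the target weapon (first
# attack distance value) and the target is still out of reach.
def plan_advance(first_attack, second_attack, target_distance):
    """Return how far the first virtual object should move toward the
    second virtual object; 0.0 means hold position."""
    if second_attack > first_attack and target_distance > second_attack:
        # Close in until the target sits exactly at our attack distance.
        return target_distance - second_attack
    return 0.0

print(plan_advance(first_attack=100.0, second_attack=300.0,
                   target_distance=450.0))  # -> 150.0
```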
10. The method according to any one of claims 1 to 7, further comprising, after the step of displaying the prompt information containing the target azimuth in a partial display area of the user interface:
generating a target control, and determining whether the target control receives an operation instruction within a preset time;
and if the target control receives the operation instruction within the preset time, controlling the first virtual object to move in the direction opposite to the target azimuth.
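Claim 10's timed target control can be sketched as a polling loop: a control is generated, and an operation instruction received before the preset time expires triggers a retreat opposite to the target azimuth. The `wait_for_tap` and `move_opposite` callbacks below are hypothetical stand-ins for the engine's real input and movement handling.

```python
import time

# Sketch of claim 10 (hypothetical callbacks): generate a control and
# poll it for an operation instruction within a preset time; if one
# arrives, move the first virtual object opposite to the target azimuth.
PRESET_TIME = 3.0  # seconds the target control stays active

def handle_target_control(wait_for_tap, move_opposite):
    """wait_for_tap -> bool and move_opposite -> None are stand-ins
    for the engine's real input and movement handling."""
    deadline = time.monotonic() + PRESET_TIME
    while time.monotonic() < deadline:
        if wait_for_tap():       # operation instruction received
            move_opposite()      # retreat against the target azimuth
            return True
        time.sleep(0.05)         # keep polling until the time elapses
    return False                 # no instruction within the preset time
```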
11. The method according to claim 10, wherein the step of displaying the prompt information containing the target azimuth in a partial display area of the user interface comprises:
displaying the prompt information containing the target azimuth in a partial display area of the user interface, and broadcasting the prompt information by voice.
12. The method of claim 1, wherein the second virtual object comprises any one of a virtual vehicle, a virtual character, and a virtual weapon.
13. A virtual object prompting device, comprising:
a display module, configured to display a user interface, wherein the user interface comprises an environment picture presented when a first virtual object observes a three-dimensional virtual environment;
a first obtaining module, configured to obtain, in response to a sound emitted by a second virtual object in the three-dimensional virtual environment, a first position of the first virtual object in the three-dimensional virtual environment and a second position of the second virtual object in the three-dimensional virtual environment;
a first determining module, configured to determine a target azimuth of the second virtual object relative to the first virtual object based on the first position and the second position, wherein the first determining module comprises:
a first determining sub-module, configured to determine an area within a preset range centered on the first position as a detection area;
a dividing sub-module, configured to divide the detection area according to a preset angle to obtain a plurality of sub-detection areas;
a second determining sub-module, configured to determine the target azimuth of the second virtual object relative to the first virtual object based on the plurality of sub-detection areas and the second position;
and a prompt module, configured to display prompt information containing the target azimuth in a partial display area of the user interface, wherein the prompt information comprises a text description of the target azimuth.
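A minimal sketch of how the claim-13 modules might compose in code, with each module represented as an injected callable; the class and parameter names are illustrative, not taken from the patent.

```python
# Illustrative composition of the claim-13 device; every name here is
# hypothetical. Each module is injected as a callable so the device
# mirrors the module list in the claim.
class VirtualObjectPromptingDevice:
    def __init__(self, display, obtain_positions, determine_azimuth, prompt):
        self.display = display                      # display module
        self.obtain_positions = obtain_positions    # first obtaining module
        self.determine_azimuth = determine_azimuth  # first determining module
        self.prompt = prompt                        # prompt module

    def on_sound(self, second_object):
        # Obtain both positions in response to the emitted sound,
        # resolve the target azimuth, and surface the textual prompt.
        first_pos, second_pos = self.obtain_positions(second_object)
        azimuth = self.determine_azimuth(first_pos, second_pos)
        self.prompt(f"sound detected to the {azimuth}")
```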
14. A computer-readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps in the virtual object prompting method according to any one of claims 1 to 12.
15. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the virtual object prompting method according to any one of claims 1 to 12 when executing the program.
CN202110351275.XA 2021-03-31 2021-03-31 Virtual object prompting method and device, storage medium and computer equipment Active CN113082707B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110351275.XA CN113082707B (en) 2021-03-31 2021-03-31 Virtual object prompting method and device, storage medium and computer equipment

Publications (2)

Publication Number Publication Date
CN113082707A CN113082707A (en) 2021-07-09
CN113082707B true CN113082707B (en) 2024-03-12

Family

ID=76672317

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110351275.XA Active CN113082707B (en) 2021-03-31 2021-03-31 Virtual object prompting method and device, storage medium and computer equipment

Country Status (1)

Country Link
CN (1) CN113082707B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113672084A (en) * 2021-08-03 2021-11-19 歌尔光学科技有限公司 AR display picture adjusting method and system
CN117173372A (en) * 2022-05-26 2023-12-05 腾讯科技(深圳)有限公司 Picture display method, system, device, equipment and storage medium
CN115400429A (en) * 2022-07-20 2022-11-29 网易(杭州)网络有限公司 Display method and device of position information and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011156061A (en) * 2010-01-29 2011-08-18 Konami Digital Entertainment Co Ltd Game program, game device, and game control method
CN110170170A (en) * 2019-05-30 2019-08-27 维沃移动通信有限公司 A kind of information display method and terminal device
CN111228806A (en) * 2020-01-08 2020-06-05 腾讯科技(深圳)有限公司 Control method and device of virtual operation object, storage medium and electronic device
CN111265874A (en) * 2020-01-20 2020-06-12 网易(杭州)网络有限公司 Method, device, equipment and storage medium for modeling target object in game
CN112044069A (en) * 2020-09-10 2020-12-08 腾讯科技(深圳)有限公司 Object prompting method, device, equipment and storage medium in virtual scene

Also Published As

Publication number Publication date
CN113082707A (en) 2021-07-09

Similar Documents

Publication Publication Date Title
CN113082707B (en) Virtual object prompting method and device, storage medium and computer equipment
CN113426124B (en) Display control method and device in game, storage medium and computer equipment
CN113101652A (en) Information display method and device, computer equipment and storage medium
CN113398590B (en) Sound processing method, device, computer equipment and storage medium
CN113101657B (en) Game interface element control method, game interface element control device, computer equipment and storage medium
CN113398566A (en) Game display control method and device, storage medium and computer equipment
CN115193064A (en) Virtual object control method and device, storage medium and computer equipment
CN115225926B (en) Game live broadcast picture processing method, device, computer equipment and storage medium
CN115970284A (en) Attack method and device of virtual weapon, storage medium and computer equipment
CN112245914B (en) Viewing angle adjusting method and device, storage medium and computer equipment
CN115999153A (en) Virtual character control method and device, storage medium and terminal equipment
CN115920385A (en) Game signal feedback method and device, electronic equipment and readable storage medium
CN116139483A (en) Game function control method, game function control device, storage medium and computer equipment
CN113398564B (en) Virtual character control method, device, storage medium and computer equipment
CN113101661A (en) Accessory assembling method and device, storage medium and computer equipment
CN113577781A (en) NPC (non-player character control) method, device, equipment and medium
CN112138392A (en) Virtual object control method, device, terminal and storage medium
CN117160034A (en) Virtual accessory replacement method and device, storage medium and computer equipment
CN111437602B (en) Flight trajectory display method, device, equipment and storage medium in virtual environment
CN116139484A (en) Game function control method, game function control device, storage medium and computer equipment
WO2024139055A1 (en) Virtual weapon attack method and apparatus, storage medium, and computer device
CN116585701A (en) Game control method, game control device, computer equipment and storage medium
CN116328301A (en) Information prompting method, device, computer equipment and storage medium
CN116966544A (en) Region prompting method, device, storage medium and computer equipment
CN116474367A (en) Virtual lens control method and device, storage medium and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant