CN110585704B - Object prompting method, device, equipment and storage medium in virtual scene - Google Patents


Publication number
CN110585704B
Authority
CN
China
Prior art keywords
target area
distance
virtual
radius
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910890460.9A
Other languages
Chinese (zh)
Other versions
CN110585704A
Inventor
刘智洪
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Priority to CN201910890460.9A
Publication of CN110585704A
Application granted
Publication of CN110585704B
Legal status: Active

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50: Controlling the output signals based on the game progress
    • A63F 13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/55: Controlling game characters or game objects based on the game progress
    • A63F 13/56: Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/80: Special adaptations for executing a specific game genre or game mode
    • A63F 13/837: Shooting of targets

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application relates to a method, an apparatus, a device, and a storage medium for object prompting in a virtual scene, in the field of network technologies. The method comprises the following steps: displaying a display interface of an application program; acquiring position information of a virtual object serving as a shooting source and of a virtual object serving as a shooting target; and disabling the object prompt function when the line connecting the shooting source and the shooting target passes through a target area. In this way, the prompt function is enabled or disabled depending on the situation, which preserves the object prompt function while improving the interface display effect of a virtual scene that has both an object prompt function and a smoke area.

Description

Object prompting method, device, equipment and storage medium in virtual scene
Technical Field
The present application relates to the field of network technologies, and in particular, to a method, an apparatus, a device, and a storage medium for object prompting in a virtual scene.
Background
Many application programs that construct virtual scenes (such as virtual reality applications, three-dimensional map programs, military simulation programs, first-person shooting games, and multiplayer online tactical competitive games) provide smoke bombs for the purpose of blocking the line of sight.
In the related art, when the terminal displays the virtual scene from a first-person perspective, a sight pattern (crosshair) with an object prompt function may be displayed in the virtual scene. That is, when another virtual object (such as a soldier on an opposing team) lies in the direction pointed at by the sight pattern, and no object with a collision volume directly blocks the line between the virtual object currently controlled by the terminal and that other virtual object, the sight pattern changes to provide a prompt.
A virtual scene may also contain areas, such as smoke, that block the line of sight but have no collision volume. When such a smoke area lies between the virtual object currently controlled by the terminal and another virtual object, the sight pattern still changes according to the object prompt function, so the interface display of the virtual scene becomes inaccurate and the display effect suffers.
Disclosure of Invention
The embodiments of the present application provide an object prompting method, apparatus, device, and storage medium in a virtual scene, which can improve the interface display effect of a virtual scene that has both an object prompt function and a smoke area. The technical solution is as follows:
in one aspect, a method for prompting an object in a virtual scene is provided, where the method includes:
displaying a display interface of an application program, where the display interface includes a scene picture of a virtual scene; the virtual scene includes a target area, the target area being a spatial region that blocks the line of sight and has no collision volume; and a sight pattern is displayed overlaid on the display interface;
acquiring a first position and a second position, where the first position is the position of a first virtual object controlled by the terminal, and the second position is the position of a second virtual object in the virtual scene other than the first virtual object;
and when the line connecting the first position and the second position passes through the target area, disabling an object prompt function, where the object prompt function is used to prompt that a virtual object other than the first virtual object exists in the direction pointed at by the sight pattern.
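A minimal sketch of this decision, assuming the target area is a sphere and treating the connecting line as a segment between the two positions. All function and parameter names are illustrative, not taken from the patent:

```python
import math

def should_show_hint(first_pos, second_pos, area_center, area_radius):
    """Return True if the sight pattern may enter the prompt state.

    The object prompt is suppressed when the segment from the shooter
    (first_pos) to the target (second_pos) passes through the spherical
    target area. Positions are (x, y, z) tuples; names are assumptions.
    """
    ax, ay, az = first_pos
    bx, by, bz = second_pos
    ab = (bx - ax, by - ay, bz - az)
    ac = (area_center[0] - ax, area_center[1] - ay, area_center[2] - az)
    ab_len2 = sum(c * c for c in ab)
    # Project the sphere centre onto the segment, clamping to [0, 1].
    t = 0.0 if ab_len2 == 0 else max(0.0, min(1.0, sum(p * q for p, q in zip(ab, ac)) / ab_len2))
    closest = (ax + t * ab[0], ay + t * ab[1], az + t * ab[2])
    # Prompt only when the segment stays outside the sphere.
    return math.dist(closest, area_center) > area_radius
```

A caller would run this once per frame for each candidate target before changing the sight pattern.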
In one aspect, a method for prompting an object in a virtual scene is provided, where the method includes:
displaying a display interface of an application program, where the display interface includes a scene picture of a virtual scene; the virtual scene includes a target area, the target area being a spatial region that blocks the line of sight and has no collision volume; and a sight pattern is displayed overlaid on the display interface;
when the line connecting a first position and a second position passes through the target area and no collidable object exists between the first position and the second position, displaying the sight pattern in a non-prompt state if the sight pattern is directly facing a second virtual object, where the sight pattern in the non-prompt state indicates that no virtual object other than the first virtual object exists in the pointed direction;
when the line connecting the first position and the second position does not pass through the target area and no collidable object exists between the first position and the second position, displaying the sight pattern in a prompt state if the sight pattern is directly facing the second virtual object, where the sight pattern in the prompt state indicates that a virtual object other than the first virtual object exists in the pointed direction;
where the first position is the position of the first virtual object controlled by the terminal, and the second position is the position of the second virtual object in the virtual scene other than the first virtual object.
In one aspect, an object hinting apparatus in a virtual scene is provided, the apparatus comprising:
a display module, configured to display a display interface of an application program, where the display interface includes a scene picture of a virtual scene; the virtual scene includes a target area, the target area being a spatial region that blocks the line of sight and has no collision volume; and a sight pattern is displayed overlaid on the display interface;
a first obtaining module, configured to acquire a first position and a second position, where the first position is the position of a first virtual object controlled by the terminal, and the second position is the position of a second virtual object in the virtual scene other than the first virtual object;
and a shielding function module, configured to disable an object prompt function when the line connecting the first position and the second position passes through the target area, where the object prompt function is used to prompt that a virtual object other than the first virtual object exists in the direction pointed at by the sight pattern.
Optionally, the apparatus further comprises:
a second acquisition module, configured to acquire the trigger position of the target area before the shielding function module disables the object prompt function;
a third acquisition module, configured to acquire the radius of the target area;
a first determining module, configured to determine the positional relationship between the connecting line and the target area according to the trigger position of the target area and the radius of the target area, where the positional relationship is either that the connecting line passes through the target area or that the connecting line does not pass through the target area.
Optionally, the first determining module includes:
a first obtaining submodule, configured to acquire a first distance from the trigger position to the connecting line when the first position, the second position, and the trigger position lie in the same horizontal plane;
a first determining submodule, configured to determine that the connecting line passes through the target area if the first distance is not greater than the radius of the target area;
the first determining submodule being further configured to determine that the connecting line does not pass through the target area if the first distance is greater than the radius of the target area.
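The same-horizontal-plane check performed by these submodules can be sketched as follows. This is a hedged illustration: the (x, z) coordinate convention and all names are assumptions, not from the patent:

```python
import math

def line_crosses_area_2d(p1, p2, trigger, radius):
    """Same-horizontal-plane case: the connecting line crosses the
    target area iff the first distance (from the trigger position to
    the line through p1 and p2) is not greater than the area radius.
    Points are (x, z) pairs on the shared horizontal plane.
    """
    (x1, z1), (x2, z2) = p1, p2
    tx, tz = trigger
    # Perpendicular distance from the trigger point to the line p1-p2.
    num = abs((z2 - z1) * tx - (x2 - x1) * tz + x2 * z1 - z2 * x1)
    den = math.hypot(x2 - x1, z2 - z1)
    first_distance = num / den
    return first_distance <= radius
```

With a trigger point two units off a horizontal line, a radius of 3 reports a crossing and a radius of 1 does not.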
Optionally, the first determining module further includes:
a construction submodule, configured to construct a target sphere centered on the trigger position of the target area, the radius of the target sphere being the radius of the target area;
a second determining submodule, configured to determine a section circle of the target sphere, where the section circle is the circle cut off from the target sphere by the vertical plane in which the first position and the second position lie;
a second obtaining submodule, configured to acquire the radius of the section circle and a second distance, where the second distance is the distance from the center of the section circle to the connecting line, or the distance from the center of the section circle to the point where the connecting line intersects the vertical line passing upward through the center of the section circle;
and a third determining submodule, configured to determine that the connecting line passes through the target area if the second distance is not greater than the radius of the section circle, and that the connecting line does not pass through the target area if the second distance is greater than the radius of the section circle.
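The general case described by these submodules can be sketched as below. This is one illustrative reading of the patent's geometry (y is taken as the vertical axis, p1 and p2 are assumed horizontally separated, and all names are invented):

```python
import math

def line_crosses_area_3d(p1, p2, trigger, radius):
    """Build the target sphere at the trigger position, cut it with the
    vertical plane through p1 and p2 to get the section circle, and
    compare the second distance (section-circle centre to the
    connecting line) with the section-circle radius.
    Points are (x, y, z) with y vertical.
    """
    dx, dz = p2[0] - p1[0], p2[2] - p1[2]
    h = math.hypot(dx, dz)  # horizontal span of the vertical plane
    # Horizontal distance from the sphere centre to the vertical plane.
    plane_dist = abs(dz * (trigger[0] - p1[0]) - dx * (trigger[2] - p1[2])) / h
    if plane_dist > radius:
        return False  # the vertical plane does not even cut the sphere
    # Radius of the circle the plane cuts off from the sphere.
    section_radius = math.sqrt(radius ** 2 - plane_dist ** 2)
    # In-plane 2D coordinates: u along the horizontal track, v = height.
    t = ((trigger[0] - p1[0]) * dx + (trigger[2] - p1[2]) * dz) / (h * h)
    centre_u, centre_v = t * h, trigger[1]
    a_u, a_v = 0.0, p1[1]
    b_u, b_v = h, p2[1]
    # Second distance: section-circle centre to the in-plane line a-b.
    num = abs((b_v - a_v) * centre_u - (b_u - a_u) * centre_v + b_u * a_v - b_v * a_u)
    second_distance = num / math.hypot(b_u - a_u, b_v - a_v)
    return second_distance <= section_radius
```

The early return mirrors the intuition that a line cannot cross the sphere if its vertical plane misses the sphere entirely.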
Optionally, the third acquisition module includes:
a third obtaining submodule, configured to acquire the duration for which the target area has been triggered;
and a fourth obtaining submodule, configured to obtain the radius of the target area from that triggered duration.
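One plausible mapping from triggered duration to area radius, matching the smoke lifecycle described later (the smoke builds up, holds at its maximum, then dissipates). Every constant here is invented for illustration; the patent does not specify values:

```python
def area_radius(elapsed, max_radius=5.0, grow_time=2.0, hold_time=8.0, fade_time=3.0):
    """Illustrative radius-versus-time curve for the smoke target area."""
    if elapsed < grow_time:                      # smoke still spreading
        return max_radius * elapsed / grow_time
    if elapsed < grow_time + hold_time:          # fully formed
        return max_radius
    if elapsed < grow_time + hold_time + fade_time:
        # dissipating linearly toward zero
        return max_radius * (1 - (elapsed - grow_time - hold_time) / fade_time)
    return 0.0                                   # smoke has disappeared
```

The fourth obtaining submodule would evaluate such a curve at the current triggered duration before the intersection test runs.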
Optionally, the first determining module further includes:
a fifth obtaining submodule, configured to acquire a third distance between the first position and the second position, a fourth distance between the first position and the trigger position, and a fifth distance between the second position and the trigger position;
the first determining module being configured to perform the step of determining the positional relationship between the connecting line and the target area according to the trigger position and radius of the target area only when the third distance is not smaller than either the fourth distance or the fifth distance.
Optionally, the shielding function module is configured to: when the line connecting the first position and the second position passes through the target area and no collidable object exists between the first position and the second position, display the sight pattern in a non-prompt state if the sight pattern is directly facing the second virtual object, where the sight pattern in the non-prompt state indicates that no virtual object other than the first virtual object exists in the pointed direction.
Optionally, the apparatus further comprises:
a display module, configured to: when the line connecting the first position and the second position does not pass through the target area and no collidable object exists between the first position and the second position, display the sight pattern in a prompt state if the sight pattern is directly facing the second virtual object, where the sight pattern in the prompt state indicates that a virtual object other than the first virtual object exists in the pointed direction.
Optionally, the target area is a smoke area triggered when a smoke prop in the virtual scene is used.
In one aspect, a computer device is provided, which includes a processor and a memory, where at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the memory, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the object hinting method in the above virtual scene.
In one aspect, a computer-readable storage medium is provided, in which at least one instruction, at least one program, a code set, or a set of instructions is stored, the at least one instruction, the at least one program, the code set, or the set of instructions being loaded and executed by a processor to implement the object prompting method in the virtual scene described above.
The technical scheme provided by the application can comprise the following beneficial effects:
A display interface of an application program is displayed, position information of the virtual object serving as the shooting source and of the virtual object serving as the shooting target is acquired, and the object prompt function is disabled when the line connecting the shooting source and the shooting target passes through the target area. In this way, the prompt function is enabled or disabled depending on the situation, which preserves the object prompt function while improving the interface display effect of a virtual scene that has both an object prompt function and a smoke area.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic structural diagram of a terminal according to an exemplary embodiment of the present application;
FIG. 2 is a scene screen schematic diagram of a virtual scene provided by an exemplary embodiment of the present application;
FIG. 3 is a flowchart of a method for object hinting in a virtual scene provided in an exemplary embodiment of the present application;
fig. 4 is an interface diagram of an object prompting method in a virtual scene according to an embodiment of the present application;
FIG. 5 is an interface diagram of an object hinting method in another virtual scene according to an embodiment of the present application;
FIG. 6 is a flowchart of a method for object hinting in a virtual scene provided in an exemplary embodiment of the present application;
fig. 7 is a schematic distance relationship diagram of an object prompting method in a virtual scene according to an embodiment of the present application;
fig. 8 is a schematic view of a display scene of an object prompt method in another virtual scene according to an embodiment of the present application;
fig. 9 is a schematic view of a display scene of an object prompt method in another virtual scene according to an embodiment of the present application;
fig. 10 is a schematic diagram of a display scene of an object prompt method in another virtual scene according to an embodiment of the present application;
fig. 11 is a top view of a display scene of an object hinting method in a virtual scene according to an embodiment of the present application;
fig. 12 is a top view of a display scene of an object hinting method in another virtual scene according to an embodiment of the present application;
fig. 13 is a schematic view of a display scene of an object hinting method in another virtual scene according to an embodiment of the present application;
fig. 14 is a schematic diagram of a display scene of an object prompt method in another virtual scene according to an embodiment of the present application;
fig. 15 is a schematic view of a display scene of an object hinting method in another virtual scene according to an embodiment of the present application;
fig. 16 is a schematic diagram of a display scene of an object prompt method in another virtual scene according to an embodiment of the present application;
FIG. 17 is a flowchart of a method for object hinting in a virtual scene provided in an exemplary embodiment of the present application;
FIG. 18 is a flowchart of a method for object hinting in a virtual scene provided in an exemplary embodiment of the present application;
FIG. 19 is a block diagram of an object hinting apparatus in a virtual scene provided in accordance with an exemplary embodiment of the present application;
FIG. 20 is a block diagram illustrating the structure of a computer device according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It is to be understood that reference herein to "a number" means one or more and "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
The embodiment of the application provides an object prompting method in a virtual scene, which can optimize the shielding effect of smoke on a shot target and improve the accuracy of the object prompting method in a virtual environment with smoke. For ease of understanding, several terms referred to in this application are explained below.
1) Virtual scene
A virtual scene is the scene displayed (or provided) when an application program runs on a terminal. The virtual scene may be a simulation of the real world, a semi-simulated and semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment. It may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, and a three-dimensional virtual scene; the following embodiments take a three-dimensional virtual scene as an example, but are not limited thereto. Optionally, the virtual scene is also used for battles between at least two virtual characters, or for virtual firearm fights between at least two virtual characters. Optionally, the virtual scene is used for fights between at least two virtual characters using virtual firearms within a target area, and that area may shrink over time as the battle proceeds.
A virtual scene is typically generated by an application program in a computer device, such as a terminal, and rendered via hardware in the terminal (for example, a screen). The terminal may be a mobile terminal such as a smartphone, a tablet computer, or an e-book reader; alternatively, the terminal may be a personal computer device such as a notebook computer or a desktop computer.
2) Virtual object
A virtual object is a movable object in a virtual scene. The movable object may be at least one of a virtual character, a virtual animal, and an animation character. Optionally, when the virtual scene is a three-dimensional virtual environment, the virtual object is a three-dimensional model created using skeletal animation. Each virtual object has its own shape and volume in the three-dimensional virtual scene and occupies part of its space.
3) Smoke bomb
A smoke bomb is a throwable tactical prop used in a virtual scene. After being thrown and activated, it releases smoke. The smoke builds up gradually over time, and after its coverage reaches a maximum it gradually dissipates and finally disappears. The blocking effect is strongest at the center of the smoke and weakest at its edge. While the smoke exists, objects inside and outside it cannot observe one another, and the smoke also blocks a shooting source and a ray-traced trajectory.
Referring to fig. 1, a schematic structural diagram of a terminal according to an exemplary embodiment of the present application is shown. As shown in fig. 1, the terminal includes a main board 110, an external input/output device 120, a memory 130, an external interface 140, a capacitive touch system 150, and a power supply 160.
The main board 110 has integrated therein processing elements such as a processor and a controller.
The external input/output device 120 may include a display component (e.g., a display screen), a sound playing component (e.g., a speaker), a sound collecting component (e.g., a microphone), various keys, and the like.
The memory 130 has program codes and data stored therein.
The external interface 140 may include a headset interface, a charging interface, a data interface, and the like.
The capacitive touch system 150 may be integrated into a display component or a key of the external input/output device 120, and the capacitive touch system 150 is used to detect a touch operation performed by a user on the display component or the key.
The power supply 160 is used to power the various other components in the terminal.
In the embodiment of the present application, the processor in the main board 110 may generate a virtual scene by executing or calling the program code and data stored in the memory, and expose the generated virtual scene through the external input/output device 120. In the process of displaying the virtual scene, the capacitive touch system 150 may detect a touch operation performed when the user interacts with the virtual scene.
The virtual scene may be a three-dimensional virtual scene, or it may be a two-dimensional virtual scene. Taking a three-dimensional virtual scene as an example, please refer to fig. 2, which shows a schematic diagram of a scene picture of the virtual scene provided in an exemplary embodiment of the present application. As shown in fig. 2, the scene screen 200 of the virtual scene includes a virtual object 210, an environment picture 220 of the three-dimensional virtual scene, at least one set of virtual control buttons 230, and a virtual object 240. The virtual object 210 may be the object currently controlled by the user of the terminal, and the virtual control buttons 230 are optional control elements through which the user may control the virtual object 210; the virtual object 240 may be a non-user-controlled object, that is, one controlled by the application program itself, or it may be a virtual object controlled by the user of another terminal. The user can interact with the virtual object 240 by controlling the virtual object 210, for example by controlling the virtual object 210 to attack the virtual object 240.
In fig. 2, the virtual object 210 and the virtual object 240 are three-dimensional models in the three-dimensional virtual scene, and the environment picture displayed in the scene screen 200 shows what is observed from the perspective of the virtual object 210. Exemplarily, as shown in fig. 2, the environment picture 220 seen from the perspective of the virtual object 210 includes the ground 224, the sky 225, the horizon 223, a hill 221, and a factory building 222.
The virtual object 210 can move in real time under the control of the user. For example, the virtual control button 230 shown in fig. 2 is a virtual button for controlling the movement of the virtual object 210: when the user touches the virtual control button 230, the virtual object 210 moves through the virtual scene in the direction of the touch point relative to the center of the virtual control button 230.
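This touch-to-movement mapping can be sketched as a hypothetical helper (the function and its names are assumptions for illustration):

```python
import math

def move_direction(touch_point, button_center):
    """Unit direction of movement: from the control button's centre
    toward the user's touch point, as described for button 230."""
    dx = touch_point[0] - button_center[0]
    dy = touch_point[1] - button_center[1]
    length = math.hypot(dx, dy)
    if length == 0:
        return (0.0, 0.0)  # touch exactly at the centre: no movement
    return (dx / length, dy / length)
```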
Referring to fig. 3, a flowchart of an object hinting method in a virtual scene according to an exemplary embodiment of the present application is shown. The object prompting method may be executed by a terminal, where the terminal may be the terminal shown in fig. 1. As shown in fig. 3, the object prompting method in the virtual scene includes the following steps:
step 310, displaying a display interface of an application program, wherein the display interface comprises a scene picture of a virtual scene; the virtual scene comprises a target area, wherein the target area is a space area which shields the sight line and has no collision volume; the display interface is superposed with and displayed with a sight bead pattern.
Optionally, the display interface of the application program may be a display interface displayed in a first-person perspective.
Taking a display interface displayed from a first-person perspective as an example, please refer to fig. 4, and fig. 4 shows an interface diagram of an object prompting method in a virtual scene according to an embodiment of the present application. As shown in fig. 4, the display interface at least includes a virtual object 410, a virtual object 420, a target area 430 and a sight pattern 440, wherein the virtual object 410 is a shooting source and the virtual object 420 is a shooting target.
Please refer to fig. 5, and fig. 5 shows an interface diagram of an object prompting method in a virtual scene according to an embodiment of the present application. As shown in fig. 5, the display interface at least includes a virtual object 510, a virtual object 520, and a target area 530, wherein the virtual object 510 is a shooting source and the virtual object 520 is a shooting target.
The target area plays a role in shielding a sight line, and the sight line of the virtual object cannot pass through the target area, that is, the virtual object located on one side of the target area or in the target area cannot directly observe a picture in the target area or on the other side of the target area.
Furthermore, the target region is a spatial region with no collision volume, that is, a virtual object can pass through the target region unobstructed.
In one possible approach, the target area may be a smoke area.
In step 320, a first position and a second position are obtained, wherein the first position is a position of a first virtual object controlled by the terminal, and the second position is a position of a second virtual object except the first virtual object in the virtual scene.
As shown in fig. 4, the first position is a position of the virtual object 410, which is an object controlled by the current terminal, and the virtual object 410 is a shooting source, the second position is a position of the virtual object 420, and the virtual object 420 is a shooting target of the virtual object 410. The number of virtual objects 420 is at least 1.
In the embodiments of the present application, the present application is described by taking one shooting source and one shooting target as an example.
Step 330, when the line connecting the first position and the second position passes through the target area, disabling an object prompt function, where the object prompt function is used to prompt that a virtual object other than the first virtual object exists in the direction pointed at by the sight pattern.
Optionally, when the connection line between the first location and the second location does not pass through the target area, the terminal may enable an object prompt function.
In the embodiment of the present application, as shown in fig. 4, the line connecting the first position and the second position passing through the target area means that the line segment starting at the shooting source 410 and ending at the shooting target 420 passes through the target area 430.
Alternatively, as shown in fig. 4, in the interface displayed by the application program, the irregular target area that the user can observe is the area 450, which is an artistic rendering effect; the generation, disappearance, and judgment of the target area depend on a range computed and defined by the computer program, namely the effective target area 430. The target areas referred to in this application are all effective target areas, that is, the target area 430.
In this embodiment, optionally, the object prompt function may be implemented as a change in the color, shape, size, and the like of the front sight pattern.
In summary, in the object prompting method in a virtual scene provided by the embodiment of the present application, the positions of the virtual object serving as the shooting source and the virtual object serving as the shooting target are obtained through the display interface of the application, and the object prompting function is shielded when the connecting line between the shooting source and the shooting target passes through the target area. In this way, the prompting function is enabled or shielded as the situation requires, the object prompting function is preserved where appropriate, and the interface display effect of a virtual scene containing both the object prompting function and a smoke area is improved.
Referring to fig. 6, a flowchart of an object hinting method in a virtual scene according to an exemplary embodiment of the present application is shown. The object prompting method may be executed by a terminal, where the terminal may be the terminal shown in fig. 1. As shown in fig. 6, the object prompting method in the virtual scene includes the following steps:
Step 6010, displaying a display interface of the application program, where the display interface includes a scene picture of a virtual scene; the virtual scene includes a target area, and the target area is a spatial area that blocks the line of sight and has no collision volume; a front sight pattern is superimposed and displayed on the display interface.
The display scenario to which the application is directed may be the display scenario as shown in fig. 4 or fig. 5.
Optionally, the target area is a smoke area formed when a smoke prop in the virtual scene is used.
For example, in the display scene shown in fig. 4, the smoke region is the target region 430.
Optionally, in this embodiment of the application, the range of the smoke area is approximately a sphere centered on the trigger position of the smoke prop, that is, the detonation point of the smoke prop.
The diffusion direction, diffusion area and diffusion shape of the smoke area are controlled according to preset control logic. Optionally, depending on the shapes of different smoke areas, the range of the smoke area may instead be approximated by another polyhedron, such as a cuboid or a cube.
In the embodiment of the present application, a description will be given by taking as an example that the smoke region is approximated to a sphere centered on the trigger position of the smoke prop.
Step 6020, acquiring a first position and a second position, wherein the first position is a position of a first virtual object controlled by the terminal, and the second position is a position of a second virtual object except the first virtual object in the virtual scene.
In this embodiment of the application, the terminal may acquire a space coordinate of a currently controlled first virtual object in the virtual scene and a space coordinate of a second virtual object in the virtual scene, acquire the space coordinate of the first virtual object in the virtual scene as a first position, and acquire the space coordinate of the second virtual object in the virtual scene as a second position.
Step 6030, the trigger position of the target area is obtained.
Wherein, the target area is an effective smoke area formed by the smoke prop.
Optionally, the trigger position may be a position where an explosion point is located after the smoke prop is used.
Optionally, the smoke prop may be a smoke bomb in a virtual scene.
Step 6040, obtain the radius of the target area.
In this embodiment, the radius of the target area is the sphere radius of the approximately spherical effective smoke area formed by the smoke prop, and step 6040 includes:
substep 6041, obtaining the triggered duration of the target region;
and a sub-step 6042 of obtaining a radius of the target region according to the triggered duration of the target region.
Wherein the range of the target area changes with time. For example, after the smoke prop is used, a smoke area (i.e., the above target area) can be gradually formed from the detonation point of the smoke prop to the periphery, and the smoke area gradually expands until the smoke area is maximum, then the smoke area starts to be gradually dissipated, and finally disappears.
In the process from generation to disappearance of the target area, the radius of the target area may increase and then decrease with time.
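By way of illustration only, the time-varying radius described above can be sketched as follows. The patent states only that the radius first increases and then decreases over time; the linear grow/hold/fade profile and all parameter values below are hypothetical, not part of the claimed method:

```python
def smoke_radius(elapsed, max_radius=5.0, grow_time=2.0, hold_time=8.0, fade_time=3.0):
    """Radius of the target area as a function of its triggered duration.

    Hypothetical profile: linear growth from the detonation point, a hold
    at full size, then linear dissipation back to zero.
    """
    if elapsed < 0:
        return 0.0
    if elapsed < grow_time:                          # smoke still expanding
        return max_radius * elapsed / grow_time
    if elapsed < grow_time + hold_time:              # smoke fully formed
        return max_radius
    if elapsed < grow_time + hold_time + fade_time:  # smoke dissipating
        remaining = grow_time + hold_time + fade_time - elapsed
        return max_radius * remaining / fade_time
    return 0.0                                       # smoke has disappeared
```

The terminal would call such a function in sub-step 6042 with the triggered duration obtained in sub-step 6041.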
Step 6050, obtain a third distance between the first position and the second position, a fourth distance between the first position and the trigger position, and a fifth distance between the second position and the trigger position.
Please refer to fig. 7, which illustrates a distance relationship diagram of an object prompting method in a virtual scene according to an embodiment of the present application. As shown in fig. 7, the virtual object 710 is a terminal control object, and the virtual object 720 is a control object other than the terminal control object. The distance between the virtual object 710 and the virtual object 720 is S3, the distance from the virtual object 710 to the trigger position 730 of the smoke prop is S4, the distance from the virtual object 720 to the trigger position 730 of the smoke prop is S5, and the radius of the target area is R.
Step 6060, determine whether the fourth distance and the fifth distance are both greater than the radius of the target area; if yes, go to step 6070, otherwise, go to step 6100.
If the fourth distance and the fifth distance are both greater than the radius of the target area, which indicates that the first virtual object and the second virtual object are both outside the target area, step 6070 may be performed to further perform the step of detecting whether the ray between the first position and the second position passes through the target area.
Otherwise, if the fourth distance and the fifth distance are not both greater than the radius of the target area, at least one of the first virtual object and the second virtual object is in the target area, and step 6100 may be entered to shield the object prompt function.
For example, if the fourth distance is smaller than the radius of the target area and the fifth distance is larger than the radius of the target area, please refer to fig. 8, which shows a schematic view of a display scene of the object prompting method in the virtual scene according to the embodiment of the present application. At a certain time, the positions of the two parties of the battle are as shown in fig. 8: the virtual object 710 is located in the target area and the virtual object 720 is located outside the target area, that is, S4 is smaller than R and S5 is larger than R. Taking the virtual object 710 as a shooting source and the virtual object 720 as a shooting target, the terminal may proceed to step 6100.
For another example, if the fifth distance is smaller than the radius of the target area and the fourth distance is larger than the radius of the target area, please refer to fig. 9, which shows a schematic view of a display scene of the object prompting method in the virtual scene according to the embodiment of the present application. At a certain time, the positions of the two parties of the battle are as shown in fig. 9: the virtual object 720 is located in the target area and the virtual object 710 is located outside the target area, that is, S5 is smaller than R and S4 is larger than R. Taking the virtual object 710 as a shooting source and the virtual object 720 as a shooting target, the terminal may proceed to step 6100.
For another example, if the fourth distance and the fifth distance are both smaller than the radius of the target area, please refer to fig. 10, which shows a schematic view of a display scene of the object prompting method in the virtual scene according to the embodiment of the present application. At a certain time, the positions of the two parties of the battle are as shown in fig. 10: the virtual object 710 and the virtual object 720 are both located in the target area, that is, S4 is smaller than R and S5 is also smaller than R. Taking the virtual object 710 as a shooting source and the virtual object 720 as a shooting target, the terminal may proceed to step 6100.
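For illustration only, the judgment of step 6060 can be sketched as follows (the function name is hypothetical; S4, S5 and R are the fourth distance, the fifth distance and the radius of the target area as defined above):

```python
import math

def either_inside_smoke(first_pos, second_pos, trigger_pos, radius):
    """Step 6060 sketch: compare the fourth distance S4 (first virtual object
    to the trigger position) and the fifth distance S5 (second virtual object
    to the trigger position) with the radius R; if they are not both greater
    than R, at least one object is inside the target area."""
    s4 = math.dist(first_pos, trigger_pos)   # fourth distance
    s5 = math.dist(second_pos, trigger_pos)  # fifth distance
    return not (s4 > radius and s5 > radius)
```

When this function returns True the terminal proceeds to step 6100 and shields the object prompt function; otherwise it proceeds to step 6070.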
Step 6070, determine whether the third distance is not smaller than at least one of the fourth distance and the fifth distance; if so, go to step 6080; otherwise, go to step 6110.
In this embodiment, if the third distance is not smaller than at least one of the fourth distance and the fifth distance, it indicates that the first virtual object and the second virtual object may not be located on the same side of the target area; in this case, step 6080 needs to be performed to further detect whether the ray between the first position and the second position passes through the target area.
For example, at a certain time, the two parties of the battle are located as shown in fig. 7, and the virtual object 710 and the virtual object 720 are on the same side of the target area. Taking the virtual object 710 as the shooting source and the virtual object 720 as the shooting target, if S3 is not smaller than at least one of S4 and S5, the process may proceed to step 6080; if S3 is smaller than both S4 and S5, the process proceeds to step 6110.
On the contrary, if the third distance is smaller than both the fourth distance and the fifth distance, it is indicated that the first virtual object and the second virtual object are located on the same side of the target area; in this case, it may be determined that the target area does not lie between the first virtual object and the second virtual object, and step 6110 may be performed.
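For illustration only, the same-side heuristic of step 6070 can be sketched as follows (the function name is hypothetical; the rule matches the flow of fig. 18, where AB smaller than both OA and OB means both objects are on the same side of the smoke):

```python
import math

def same_side_of_smoke(first_pos, second_pos, trigger_pos):
    """Step 6070 sketch: if the third distance S3 (between the two virtual
    objects) is smaller than both the fourth distance S4 and the fifth
    distance S5 (each object to the trigger position), the objects are
    treated as being on the same side of the target area, so the
    line-of-sight test of step 6080 can be skipped."""
    s3 = math.dist(first_pos, second_pos)    # third distance
    s4 = math.dist(first_pos, trigger_pos)   # fourth distance
    s5 = math.dist(second_pos, trigger_pos)  # fifth distance
    return s3 < s4 and s3 < s5
```

When this function returns True the terminal proceeds to step 6110 and turns on the object prompt function; otherwise it proceeds to step 6080.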
Step 6080, determining a position relation between the connecting line and the target area according to the trigger position of the target area and the radius of the target area; the positional relationship includes: the link passes through the target area, or the link does not pass through the target area.
Optionally, in one possible implementation, when the first position, the second position and the trigger position are on the same horizontal plane (or approximately in the same plane), the terminal may obtain a first distance from the trigger position to the connecting line. If the first distance is not greater than the radius of the target area, the terminal may determine that the connecting line passes through the target area; if the first distance is greater than the radius of the target area, it is determined that the connecting line does not pass through the target area.
Please refer to fig. 11, which illustrates a top view of a display scene of an object prompting method in a virtual scene according to an embodiment of the present application. As shown in fig. 11, the smoke prop is used in the virtual scene to release smoke. At a certain time, the two parties of the battle are located as shown in fig. 11: the virtual object 710 is the terminal-controlled object and serves as the shooting source, and the virtual object 720 is a control object other than the terminal-controlled object and serves as the shooting target. The distance from the trigger position 730 of the smoke prop to the connecting line between the virtual object 710 and the virtual object 720 is S1. Here the first distance is not greater than the radius of the target area, that is, S1 is not greater than R, so the connecting line passes through the target area.
Please refer to fig. 12, which illustrates a top view of a display scene of an object prompting method in a virtual scene according to an embodiment of the present application. As shown in fig. 12, the smoke prop is used in the virtual scene to release smoke. At a certain time, the two parties of the battle are located as shown in fig. 12: the virtual object 710 is the terminal-controlled object and serves as the shooting source, and the virtual object 720 is a control object other than the terminal-controlled object and serves as the shooting target. The distance from the trigger position 730 of the smoke prop to the connecting line between the virtual object 710 and the virtual object 720 is S1. Here the first distance is greater than the radius of the target area, that is, S1 is greater than R, so the connecting line does not pass through the target area.
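For illustration only, the same-plane judgment above can be sketched as follows. The cross-product form of the point-to-line distance is used here instead of the slope-intercept computation shown later in this document; the two are equivalent, but this form needs no special case for a vertical line. Names are hypothetical:

```python
import math

def line_through_circle_2d(a, b, center, radius):
    """First implementation of step 6080 (all positions on one horizontal
    plane): the connecting line passes through the target area iff the
    perpendicular distance S1 from the trigger position to the line through
    A and B is not greater than the radius R."""
    (ax, ay), (bx, by), (cx, cy) = a, b, center
    abx, aby = bx - ax, by - ay          # direction of the line A-B
    acx, acy = cx - ax, cy - ay          # A to the trigger position
    ab_len = math.hypot(abx, aby)
    if ab_len == 0:                      # degenerate: A and B coincide
        return math.hypot(acx, acy) <= radius
    s1 = abs(abx * acy - aby * acx) / ab_len   # perpendicular distance S1
    return s1 <= radius
```

In the configuration of fig. 11 this returns True (S1 not greater than R); in the configuration of fig. 12 it returns False.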
Optionally, in another possible implementation, the terminal constructs a target sphere by using the trigger position of the target area as the sphere center, the radius of the target sphere being the radius of the target area; determines a section circle in the target sphere, where the section circle is the circle cut in the target sphere by the vertical plane in which the first position and the second position are located; and obtains the radius of the section circle and a second distance, where the second distance is the distance from the center of the section circle to the connecting line, or the distance from the center of the section circle to the intersection point of the connecting line and a vertical line passing upward through the center of the section circle. If the second distance is not greater than the radius of the section circle, it is determined that the connecting line passes through the target area; if the second distance is greater than the radius of the section circle, it is determined that the connecting line does not pass through the target area.
In a possible case, please refer to fig. 13, which illustrates a schematic view of a display scene of an object prompting method in a virtual scene according to an embodiment of the present application. As shown in fig. 13, the virtual object 710 is a shooting source, the virtual object 720 is a shooting target, and the vertical plane where the virtual object 710 and the virtual object 720 are located does not intersect with the target sphere 750, so the connecting line between the virtual object 710 and the virtual object 720 does not pass through the target area.
In another possible case, please refer to fig. 14, which illustrates a schematic view of a display scene of an object prompting method in a virtual scene according to an embodiment of the present application. As shown in fig. 14, with the virtual object 710 as the shooting source and the virtual object 720 as the shooting target, the vertical plane in which the virtual object 710 and the virtual object 720 are located intersects the target sphere 750, generating a cross-sectional circle 760 with center 770. The distance from the center of the cross-sectional circle to the connecting line between the virtual object 710 and the virtual object 720 is S2; since S2 is not greater than the radius of the cross-sectional circle, the connecting line passes through the target area.
In another embodiment provided herein, the first location and the second location may not be on the same horizontal plane, e.g., as shown in fig. 5, virtual object 510 is on a higher plane than virtual object 520. The scheme of the application can be simultaneously applied to the judgment of the two situations that the first position and the second position are on the same horizontal plane or not on the same horizontal plane.
For example, please refer to fig. 15, which shows a schematic view of a display scene of an object prompting method in a virtual scene according to an embodiment of the present application. As shown in fig. 15, taking the virtual object 710 as the shooting source and the virtual object 720 as the shooting target, the vertical plane in which the virtual object 710 and the virtual object 720 are located intersects the target sphere 750 to generate a cross-sectional circle 760 with center 770. If the distance S2 from the center of the cross-sectional circle to the connecting line between the virtual object 710 and the virtual object 720 is greater than the radius of the cross-sectional circle, the terminal may determine that the connecting line does not pass through the target area; or, if the distance S6 from the center 770 of the cross-sectional circle to the point at which a vertical line passing upward through the center 770 intersects the connecting line is greater than the radius of the cross-sectional circle, the terminal may determine that the connecting line does not pass through the target area.
In another possible case, please refer to fig. 16, which illustrates a schematic view of a display scene of an object prompting method in a virtual scene according to an embodiment of the present application. As shown in fig. 16, taking the virtual object 710 as the shooting source and the virtual object 720 as the shooting target, the vertical plane in which the virtual object 710 and the virtual object 720 are located intersects the target sphere 750 to generate a cross-sectional circle 760 with center 770. If the distance S2 from the center of the cross-sectional circle to the connecting line between the virtual object 710 and the virtual object 720 is not greater than the radius of the cross-sectional circle, the terminal may determine that the connecting line passes through the target area; or, if the distance S6 from the center 770 of the cross-sectional circle to the point at which a vertical line passing upward through the center 770 intersects the connecting line is not greater than the radius of the cross-sectional circle, the terminal may determine that the connecting line passes through the target area.
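For illustration only, a minimal sketch of the section-circle judgment follows, assuming coordinates (x, y, z) with y as the vertical axis (matching the use of .x and .z for horizontal coordinates in the code later in this document). All names are hypothetical:

```python
import math

def line_through_smoke_3d(a, b, o, R):
    """Second implementation of step 6080 (positions not necessarily on one
    horizontal plane): intersect the target sphere (center O, radius R) with
    the vertical plane containing A and B, then compare the distance S2 from
    the section-circle center D to the line A-B with the section-circle
    radius r."""
    ax, ay, az = a; bx, by, bz = b; ox, oy, oz = o
    # Horizontal direction of A->B; with the vertical axis it spans the plane.
    hx, hz = bx - ax, bz - az
    hlen = math.hypot(hx, hz)
    if hlen == 0:                        # A and B vertically aligned
        d_plane = math.hypot(ox - ax, oz - az)
        dx, dy, dz = ax, oy, az
    else:
        # Distance from O to the vertical plane, computed in the horizontal plane.
        d_plane = abs(hx * (oz - az) - hz * (ox - ax)) / hlen
        # Section-circle center D: horizontal projection of O onto the plane.
        t = ((ox - ax) * hx + (oz - az) * hz) / (hlen * hlen)
        dx, dy, dz = ax + t * hx, oy, az + t * hz
    if d_plane > R:
        return False                     # the plane misses the sphere entirely (fig. 13)
    r_sec = math.sqrt(R * R - d_plane * d_plane)   # section-circle radius r
    # Distance S2 from D to the 3-D line through A and B (cross-product form).
    abv = (bx - ax, by - ay, bz - az)
    adv = (dx - ax, dy - ay, dz - az)
    cross = (abv[1] * adv[2] - abv[2] * adv[1],
             abv[2] * adv[0] - abv[0] * adv[2],
             abv[0] * adv[1] - abv[1] * adv[0])
    ab_len = math.sqrt(sum(c * c for c in abv))
    s2 = math.sqrt(sum(c * c for c in cross)) / ab_len
    return s2 <= r_sec                   # fig. 14/16 if True, fig. 15 if False
```

A True result corresponds to the cases of fig. 14 and fig. 16 (connecting line passes through the target area), a False result to fig. 13 and fig. 15.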
Step 6090, determine whether the connection line between the first location and the second location passes through the target area, if yes, go to step 6100, otherwise, go to step 6110.
In the embodiment of the present application, when the connecting line between the first position and the second position passes through the target area, the terminal may proceed to step 6100 to shield the object prompt function, where the object prompt function is used to prompt that another virtual object other than the first virtual object exists in the direction pointed by the front sight pattern. Accordingly, when the connecting line between the first position and the second position does not pass through the target area, the terminal may proceed to step 6110 to turn on the object prompt function.
Step 6100, mask object prompt function.
Optionally, when the connecting line between the first position and the second position passes through the target area and no collidable object exists between the first position and the second position, if the front sight pattern is aimed directly at the second virtual object, the front sight pattern is displayed in a non-prompting state; the front sight pattern in the non-prompting state is used for prompting that no virtual object other than the first virtual object exists in the pointing direction.
A collidable object refers to a virtual object having a collision volume in the virtual scene; for example, a collidable object may be a wall, a virtual vehicle, a hill, or a building.
For example, even if a second virtual object other than the first virtual object is present in or behind the target area in the direction indicated by the front sight, and there is no blocking by a collidable object between the first virtual object and the second virtual object, the front sight pattern remains in the non-prompting state, that is, the color, shape, size, and the like of the front sight pattern are not changed, as shown in fig. 4, 8, 9, 10, and 14.
Step 6110, start object prompt function.
Optionally, when the connecting line between the first position and the second position does not pass through the target area and no collidable object exists between the first position and the second position, if the front sight pattern is aimed directly at the second virtual object, the front sight pattern is displayed in a prompting state; the front sight pattern in the prompting state is used for prompting that another virtual object other than the first virtual object exists in the pointing direction.
For example, in the scenes shown in fig. 7 and fig. 13, although a second virtual object other than the first virtual object is near or behind the target area in the direction indicated by the front sight, the connecting line between the two objects does not pass through the target area; if there is also no blocking by a collidable object between the first virtual object and the second virtual object, the front sight pattern may be in the prompting state, which may be represented, for example, as a change in the color, shape, size, and the like of the front sight pattern.
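For illustration only, the display rules of steps 6100 and 6110 can be combined into a single sketch. The function name and string states are hypothetical, and the behavior when a collidable object blocks the line is an assumption (the embodiments describe only the unblocked cases):

```python
def front_sight_state(aimed_at_second_object, line_passes_target_area, blocked_by_collidable):
    """The front sight pattern is shown in the prompting state only when it
    is aimed at a second virtual object, the connecting line does not pass
    through the target area, and no collidable object blocks the line; in
    every other case it stays in the non-prompting state."""
    if aimed_at_second_object and not line_passes_target_area and not blocked_by_collidable:
        return "prompting"
    return "non-prompting"
```

The prompting state would then be rendered as a change in the color, shape or size of the front sight pattern, and the non-prompting state as the unchanged pattern.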
In summary, in the object prompting method in a virtual scene provided by the embodiment of the present application, the positions of the virtual object serving as the shooting source and the virtual object serving as the shooting target are obtained through the display interface of the application, and the object prompting function is shielded when the connecting line between the shooting source and the shooting target passes through the target area. In this way, the prompting function is enabled or shielded as the situation requires, the object prompting function is preserved where appropriate, and the interface display effect of a virtual scene containing both the object prompting function and a smoke area is improved.
Referring to fig. 17, a flowchart of an object hinting method in a virtual scene according to an exemplary embodiment of the present application is shown. The object prompting method may be executed by a terminal, where the terminal may be the terminal shown in fig. 1. As shown in fig. 17, the object prompting method in the virtual scene includes the following steps:
step 1710, displaying a display interface of the application program, wherein the display interface comprises a scene picture of a virtual scene; the virtual scene comprises a target area, wherein the target area is a space area which shields the sight line and has no collision volume; the display interface is superposed with and displayed with a sight bead pattern.
Step 1720, when the connecting line between the first position and the second position passes through the target area and no collidable object exists between the first position and the second position, if the front sight pattern is aimed directly at the second virtual object, displaying the front sight pattern in a non-prompting state; the front sight pattern in the non-prompting state is used for prompting that no virtual object other than the first virtual object exists in the pointing direction.
Step 1730, when the connecting line between the first position and the second position does not pass through the target area and no collidable object exists between the first position and the second position, if the front sight pattern is aimed directly at the second virtual object, displaying the front sight pattern in a prompting state; the front sight pattern in the prompting state is used for prompting that another virtual object other than the first virtual object exists in the pointing direction;
the first position is the position of a first virtual object controlled by the terminal, and the second position is the position of a second virtual object in the virtual scene except the first virtual object.
The implementation process of step 1710 to step 1730 may refer to the related description in the embodiment shown in fig. 3 or fig. 6, and is not described herein again.
In summary, in the object prompting method in a virtual scene provided by the embodiment of the present application, the positions of the virtual object serving as the shooting source and the virtual object serving as the shooting target are obtained through the display interface of the application, and the object prompting function is shielded when the connecting line between the shooting source and the shooting target passes through the target area. In this way, the prompting function is enabled or shielded as the situation requires, the object prompting function is preserved where appropriate, and the interface display effect of a virtual scene containing both the object prompting function and a smoke area is improved.
Taking a virtual scene of a battle game as an example, please refer to fig. 18, which shows a flowchart of an object prompting method in a virtual scene according to an exemplary embodiment of the present application. In this game scene, the object prompting method includes the following steps:
the smoke bomb is thrown and smoke is generated (S1810). In the embodiment of the application, an enemy target or a player throws a smoke bomb and generates smoke, and the smoke is a special effect picture and cannot be subjected to collision detection.
Alternatively, in the embodiment of the present application, the detection problem is solved by geometric calculation; for example, the distance between two points and the intersection of a straight line with a circle are used to judge whether the smoke blocks the ray.
The terminal judges whether the player is in the smoke (S1820); if so, the process proceeds to S1860; otherwise, the process proceeds to S1830.
The terminal calculates the distance OA between the player and the smoke center, where O is the smoke center and A is the position of the player; a sphere with O as the center and R as the radius represents the smoke range:
when OA is less than or equal to R, the player is in the smoke range generated by the smoke bomb, and the terminal shields the object prompting function (S1860);
when OA is greater than R, the player is not in the smoke range generated by the smoke bomb, and the terminal determines whether an enemy target is in the smoke (S1830); if so, the process proceeds to S1860, otherwise, the process proceeds to S1840.
The terminal calculates the distance OB between the enemy target and the smoke center, where O is the smoke center and B is the position of the enemy target; a sphere with O as the center and R as the radius represents the smoke range:
when OB is smaller than or equal to R, an enemy target is in a smoke range generated by a smoke bomb, and the terminal shields an object prompt function (S1860);
when OB is larger than R, the enemy target is not in the smoke range generated by the smoke bomb, and the terminal judges whether the player and the enemy target are on the same side of the smoke (S1840); if so, the process proceeds to S1870, otherwise, the process proceeds to S1850.
The terminal calculates the distance AB between the player and the enemy target, the distance OA between the player and the smoke center, and the distance OB between the enemy target and the smoke center, and compares AB with OA and OB:
when AB is smaller than OA and OB at the same time, the player and the enemy object are on the same side of smoke, and the terminal turns on the object prompt function (S1870).
When AB is not smaller than at least one of OA and OB, the player and the enemy target may be on different sides of the smoke, and the terminal judges whether the connecting line between the player and the enemy target passes through the smoke (S1850); if so, the process proceeds to S1860; otherwise, the process proceeds to S1870.
Alternatively, in one possible scenario, the player is in the same plane as the enemy object.
The terminal calculates the distance d between the smoke center O and the straight line passing through A and B (the line connecting the player and the enemy target), and compares d with R:
when d is less than or equal to R, a connecting line between the player and the enemy target passes through smoke, and the terminal shields an object prompting function (S1860);
when d is greater than R, the connection between the player and the enemy target does not pass through smoke, and the terminal turns on an object prompt function (S1870).
Optionally, the determining step may be completed by the following program codes:
float x1 = source.x;
float y1 = source.z;
float x2 = target.x;
float y2 = target.z;
if (x2.Equals(x1)) x1 += 0.1f; // perturb x1 to avoid division by zero when the line is vertical
float k = (y2 - y1) / (x2 - x1); // slope of the line through A and B
float b = (y1 * x2 - x1 * y2) / (x2 - x1); // intercept of the line through A and B
float x0 = smokePos.x;
float y0 = smokePos.z;
float r = m_CurrentRegionSize;
float d = Mathf.Abs(k * x0 - y0 + b) / Mathf.Sqrt(k * k + 1); // distance from the smoke center to the line
Alternatively, the player and the enemy target may not be in the same plane. Taking the case where the player is in a plane higher than the enemy target as an example, the terminal forms a straight line passing through the two points A and B by connecting the player and the enemy target; the vertical plane in which this straight line lies intersects the sphere with the smoke center O as the sphere center and R as the radius in a section C, and the section C is a circle with D as the center and r as the radius.
Optionally, the distance from the circle center D to the straight line on which the player and the enemy target are located is F, and F is compared with r:
when F is less than or equal to r, a connecting line between the player and the enemy target passes through smoke, and the terminal shields an object prompting function (S1860);
when F is greater than r, the connection between the player and the enemy target does not pass through smoke, and the terminal turns on an object prompt function (S1870).
Optionally, a straight line is made vertically upward through the circle center D to intersect the connecting line between the player and the enemy target at a point G, and the terminal compares DG with r:
when DG is less than or equal to r, the connection line between the player and the enemy target passes through smoke, and the terminal shields the object prompting function (S1860);
when DG is greater than r, the connection between the player and the enemy target does not pass through smoke, and the terminal turns on the object prompt function (S1870).
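For illustration only, the same-plane branch of the flow above (S1810 through S1870) can be sketched end to end as follows. The function name is hypothetical; positions are 2-D horizontal coordinates, and a True result means the object prompt function is shielded (S1860):

```python
import math

def mask_object_prompt(player, enemy, smoke_center, R):
    """Same-plane decision flow of fig. 18: returns True when the object
    prompt function should be shielded (S1860) and False when it should be
    turned on (S1870)."""
    OA = math.dist(player, smoke_center)   # player to smoke center
    OB = math.dist(enemy, smoke_center)    # enemy target to smoke center
    AB = math.dist(player, enemy)          # player to enemy target
    if OA <= R or OB <= R:                 # S1820/S1830: someone is inside the smoke
        return True
    if AB < OA and AB < OB:                # S1840: both on the same side of the smoke
        return False
    # S1850: distance d from the smoke center to the line through A and B.
    (x1, y1), (x2, y2) = player, enemy
    d = abs((x2 - x1) * (smoke_center[1] - y1)
            - (y2 - y1) * (smoke_center[0] - x1)) / AB
    return d <= R                          # S1860 if True, S1870 otherwise
```

The non-same-plane case would replace the final distance test with the section-circle test (F or DG compared with r) described above.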
In summary, in the object prompting method in a virtual scene provided by the embodiment of the present application, the positions of the virtual object serving as the shooting source and the virtual object serving as the shooting target are obtained through the display interface of the application, and the object prompting function is shielded when the connecting line between the shooting source and the shooting target passes through the target area. In this way, the prompting function is enabled or shielded as the situation requires, the object prompting function is preserved where appropriate, and the interface display effect of a virtual scene containing both the object prompting function and a smoke area is improved.
Referring to fig. 19, a block diagram of an object hinting apparatus in a virtual scene according to an exemplary embodiment of the present application is shown. The object prompting device is applied to a terminal, wherein the terminal can be the terminal shown in fig. 1. As shown in fig. 19, the object presentation apparatus in the virtual scene includes:
a display module 1910, configured to display a display interface of an application, where the display interface includes a scene picture of a virtual scene; the virtual scene includes a target area, and the target area is a spatial area that blocks the line of sight and has no collision volume; a front sight pattern is superimposed and displayed on the display interface;
a first obtaining module 1920, configured to obtain a first location and a second location, where the first location is a location of a first virtual object controlled by the terminal, and the second location is a location of a second virtual object in the virtual scene except the first virtual object;
a shielding function module 1930, configured to shield an object prompting function when a connecting line between the first location and the second location passes through the target area, where the object prompting function is used to prompt that a virtual object other than the first virtual object exists in a direction pointed by the front sight pattern.
Optionally, the apparatus further comprises:
a second obtaining module, configured to obtain a trigger position of the target area before the shielding function module 1930 shields the object prompt function;
the third acquisition module is used for acquiring the radius of the target area;
the first determining module is used for determining the position relationship between the connecting line and the target area according to the trigger position of the target area and the radius of the target area; the position relationship includes: the connecting line passes through the target area, or the connecting line does not pass through the target area.
Optionally, the first determining module includes:
the first obtaining submodule is used for obtaining a first distance from the triggering position to the connecting line when the first position, the second position and the triggering position are positioned on the same horizontal plane;
the first determining submodule is used for determining that the connecting line passes through the target area if the first distance is not larger than the radius of the target area;
the first determining submodule is further used for determining that the connecting line does not pass through the target area if the first distance is larger than the radius of the target area.
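The same-horizontal-plane case reduces to a 2D point-to-line distance. A minimal sketch (the names and the (x, y) coordinate convention are assumptions for illustration):

```python
import math

def first_distance(p1, p2, trigger):
    """Perpendicular distance from the trigger position to the line
    through the first and second positions; all three points lie on
    one horizontal plane and are given as (x, y) pairs."""
    (x1, y1), (x2, y2), (tx, ty) = p1, p2, trigger
    dx, dy = x2 - x1, y2 - y1
    # |2D cross product| / length of the direction vector
    return abs(dx * (ty - y1) - dy * (tx - x1)) / math.hypot(dx, dy)

def line_crosses_area(p1, p2, trigger, radius):
    """The connecting line passes through the target area when the
    first distance is not larger than the area's radius."""
    return first_distance(p1, p2, trigger) <= radius
```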
Optionally, the first determining module further includes:
the construction submodule is used for constructing a target sphere by taking the trigger position of the target area as the sphere center, and the radius of the target sphere is the radius of the target area;
the second determining submodule is used for determining a cross-sectional circle in the target sphere, where the cross-sectional circle is the circle cut in the target sphere by the vertical plane in which the first position and the second position are located;
the second obtaining submodule is used for obtaining the radius of the cross-sectional circle and a second distance, where the second distance is the distance from the center of the cross-sectional circle to the connecting line, or the second distance is the distance from the center of the cross-sectional circle to the intersection point of the connecting line and a straight line that passes vertically upward through the center of the cross-sectional circle;
a third determining submodule, configured to determine that the connecting line passes through the target area if the second distance is not greater than the radius of the cross-sectional circle, and to determine that the connecting line does not pass through the target area if the second distance is greater than the radius of the cross-sectional circle.
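Putting the sphere construction, the cross-section circle, and the second-distance comparison together, the general (non-coplanar) case could look like the following sketch, assuming (x, y, z) coordinates with z pointing up; all names are illustrative, not from the patent:

```python
import math

def line_crosses_smoke_sphere(p, e, c, big_r):
    """3D check: does the line through positions p and e pass within
    the smoke sphere (center c, radius big_r)?  Cuts the sphere by the
    vertical plane containing p and e, then compares the distance from
    the cut circle's center to the line with the cut circle's radius."""
    px, py, pz = p
    ex, ey, ez = e
    cx, cy, cz = c
    # horizontal direction of the p-e line and the plane's horizontal normal
    hx, hy = ex - px, ey - py
    h_len = math.hypot(hx, hy)
    if h_len == 0:
        # p and e share a horizontal position; compare horizontal distance
        return math.hypot(cx - px, cy - py) <= big_r
    nx, ny = -hy / h_len, hx / h_len
    s = (cx - px) * nx + (cy - py) * ny   # signed distance of c to the plane
    if abs(s) > big_r:
        return False                      # the vertical plane misses the sphere
    r_cut = math.sqrt(big_r ** 2 - s ** 2)        # cross-section circle radius
    cpx, cpy, cpz = cx - s * nx, cy - s * ny, cz  # cross-section circle center
    # 3D point-to-line distance from the cut center to the line p-e
    ax, ay, az = ex - px, ey - py, ez - pz
    bx, by, bz = cpx - px, cpy - py, cpz - pz
    crx, cry, crz = ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx
    second = math.sqrt(crx ** 2 + cry ** 2 + crz ** 2) / math.sqrt(
        ax ** 2 + ay ** 2 + az ** 2)
    return second <= r_cut                # compare the second distance with r_cut
```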
Optionally, the third obtaining module includes:
the third acquisition submodule is used for acquiring the triggered time length of the target area;
and the fourth obtaining submodule is used for obtaining the radius of the target area according to the triggered time length of the target area.
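The radius-from-duration lookup can be modeled as a simple piecewise function, e.g. expansion, a fully deployed hold, then dissipation. The phase durations and maximum radius below are invented placeholders for illustration, not values from the patent:

```python
def smoke_radius(elapsed, grow_time=2.0, hold_time=8.0,
                 fade_time=2.0, max_radius=5.0):
    """Radius of the smoke area as a function of the time (seconds)
    since it was triggered: linear expansion, a hold phase at the
    maximum radius, then linear dissipation back to zero."""
    if elapsed < 0:
        return 0.0
    if elapsed < grow_time:                       # expanding
        return max_radius * elapsed / grow_time
    if elapsed < grow_time + hold_time:           # fully deployed
        return max_radius
    fade_end = grow_time + hold_time + fade_time
    if elapsed < fade_end:                        # dissipating
        return max_radius * (fade_end - elapsed) / fade_time
    return 0.0                                    # fully dissipated
```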
Optionally, the first determining module further includes:
the fifth acquisition submodule is used for acquiring a third distance between the first position and the second position, a fourth distance between the first position and the trigger position and a fifth distance between the second position and the trigger position;
and the first determining module is used for executing the step of determining the position relation between the connecting line and the target area according to the triggering position of the target area and the radius of the target area if the third distance is not less than any one of the fourth distance and the fifth distance.
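One reading of this pre-check (treating "not less than any one of" as "not less than both") is the following sketch; when it returns False, the target area lies beyond the two endpoints and the full intersection test is skipped. The names and the interpretation of the ambiguous wording are assumptions:

```python
import math

def needs_intersection_test(p1, p2, trigger):
    """Cheap pre-check: run the full line-vs-area test only when the
    distance between the two virtual objects (third distance) is not
    less than both the first-object-to-trigger distance (fourth
    distance) and the second-object-to-trigger distance (fifth
    distance)."""
    third = math.dist(p1, p2)
    fourth = math.dist(p1, trigger)
    fifth = math.dist(p2, trigger)
    return third >= fourth and third >= fifth
```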
Optionally, the shielding function module is configured to, when a connection line between the first location and the second location passes through the target area and there is no object that can collide between the first location and the second location, display the sight pattern in a non-prompt state if the sight pattern is directly opposite to the second virtual object; the sight bead pattern in the non-prompting state is used for prompting that no other virtual object except the first virtual object exists in the pointing direction.
Optionally,
the display module is used for displaying the sight bead pattern in a prompt state if the sight bead pattern is over against the second virtual object when the connecting line between the first position and the second position does not pass through the target area and no collision object exists between the first position and the second position; the sight bead pattern in the prompt state is used to prompt that other virtual objects than the first virtual object exist in the pointing direction.
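The two display rules above amount to a small decision function. A sketch under the assumption that the smoke test, the collidable-object test, and the aiming test are computed elsewhere (the patent does not fully specify the blocked-by-collider case, which is treated here as non-prompting):

```python
def crosshair_state(crosses_smoke, blocked_by_collider, aiming_at_enemy):
    """Returns the display state of the front sight pattern: 'prompt'
    only when the crosshair faces an enemy, nothing collidable blocks
    the line, and the line does not pass through the smoke area;
    otherwise 'non-prompt'."""
    if not aiming_at_enemy or blocked_by_collider:
        return "non-prompt"
    return "non-prompt" if crosses_smoke else "prompt"
```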
Optionally, the target area is a smoke area triggered by a smoke prop in the virtual scene being used.
In summary, in the object prompting apparatus in a virtual scene provided in the embodiments of the present application, when an application runs on a computer device, the position information of the virtual object serving as the shooting source and of the virtual object serving as the shooting target is obtained through the display interface of the application, and the object prompting function is shielded when the connecting line between the shooting source and the shooting target passes through the target area. In this way, the on/off state of the prompting function differs under different conditions, the object prompting function is preserved, and the interface display effect of a virtual scene having both the object prompting function and a smoke area is improved.
Fig. 20 is a block diagram illustrating the structure of a computer device 2000, according to an exemplary embodiment. The computer device 2000 may be a user terminal, such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, or a desktop computer. The computer device 2000 may also be referred to by other names, such as user equipment, portable terminal, laptop terminal, or desktop terminal.
Generally, the computer device 2000 includes: a processor 2001 and a memory 2002.
The processor 2001 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 2001 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 2001 may also include a main processor and a coprocessor, the main processor being a processor for Processing data in an awake state, also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 2001 may be integrated with a GPU (Graphics Processing Unit) that is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 2001 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 2002 may include one or more computer-readable storage media, which may be non-transitory. The memory 2002 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 2002 is used to store at least one instruction for execution by the processor 2001 to implement the object prompting method in a virtual scene provided by the method embodiments herein.
In some embodiments, the computer device 2000 may further optionally include: a peripheral interface 2003 and at least one peripheral. The processor 2001, memory 2002 and peripheral interface 2003 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 2003 through a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 2004, a touch display 2005, a camera 2006, an audio circuit 2007, a positioning assembly 2008, and a power supply 2009.
The peripheral interface 2003 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 2001 and the memory 2002. In some embodiments, the processor 2001, memory 2002 and peripheral interface 2003 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 2001, the memory 2002, and the peripheral interface 2003 may be implemented on separate chips or circuit boards, which is not limited by this embodiment.
The radio frequency circuit 2004 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 2004 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 2004 converts an electric signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electric signal. Optionally, the radio frequency circuit 2004 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 2004 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: the World Wide Web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 2004 may further include NFC (Near Field Communication)-related circuits, which is not limited in this application.
The display screen 2005 is used to display a UI (user interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 2005 is a touch display screen, the display screen 2005 also has the ability to capture touch signals on or over the surface of the display screen 2005. The touch signal may be input to the processor 2001 as a control signal for processing. At this point, the display 2005 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display screen 2005 may be one, providing the front panel of the computer device 2000; in other embodiments, the display screens 2005 can be at least two, each disposed on a different surface of the computer device 2000 or in a folded design; in still other embodiments, the display 2005 may be a flexible display disposed on a curved surface or on a folded surface of the computer device 2000. Even more, the display screen 2005 can be arranged in a non-rectangular irregular figure, i.e. a shaped screen. The Display screen 2005 can be made of a material such as an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), and the like.
Camera assembly 2006 is used to capture images or video. Optionally, camera assembly 2006 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 2006 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 2007 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 2001 for processing or inputting the electric signals to the radio frequency circuit 2004 so as to realize voice communication. For stereo sound acquisition or noise reduction purposes, the microphones may be multiple and located at different locations on the computer device 2000. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 2001 or the radio frequency circuit 2004 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 2007 may also include a headphone jack.
The positioning component 2008 is configured to locate the current geographic location of the computer device 2000 to implement navigation or LBS (Location Based Service). The positioning component 2008 may be a positioning component based on the GPS (Global Positioning System) of the United States, the Beidou system of China, or the Galileo system of the European Union.
A power supply 2009 is used to power the various components of the computer device 2000. The power supply 2009 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When the power supply 2009 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the computer device 2000 also includes one or more sensors 2010. The one or more sensors 2010 include, but are not limited to: acceleration sensor 2011, gyro sensor 2012, pressure sensor 2013, fingerprint sensor 2014, optical sensor 2015, and proximity sensor 2016.
The acceleration sensor 2011 can detect the magnitude of acceleration in three coordinate axes of the coordinate system established with the computer apparatus 2000. For example, the acceleration sensor 2011 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 2001 may control the touch display screen 2005 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 2011. The acceleration sensor 2011 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 2012 can detect the body direction and the rotation angle of the computer device 2000, and the gyro sensor 2012 cooperates with the acceleration sensor 2011 to acquire the 3D motion of the user on the computer device 2000. The processor 2001 may implement the following functions according to the data collected by the gyro sensor 2012: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensors 2013 may be disposed on the side bezel of the computer device 2000 and/or underneath the touch screen display 2005. When the pressure sensor 2013 is disposed on the side frame of the computer device 2000, the holding signal of the user to the computer device 2000 can be detected, and the processor 2001 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 2013. When the pressure sensor 2013 is disposed at a lower layer of the touch display screen 2005, the processor 2001 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 2005. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 2014 is used for collecting fingerprints of the user, and the processor 2001 identifies the identity of the user according to the fingerprints collected by the fingerprint sensor 2014, or the fingerprint sensor 2014 identifies the identity of the user according to the collected fingerprints. Upon identifying that the user's identity is a trusted identity, the processor 2001 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 2014 may be disposed on the front, back, or side of the computer device 2000. When a physical key or vendor logo is provided on the computer device 2000, the fingerprint sensor 2014 may be integrated with the physical key or vendor logo.
The optical sensor 2015 is used to collect ambient light intensity. In one embodiment, the processor 2001 may control the display brightness of the touch display 2005 according to the ambient light intensity collected by the optical sensor 2015. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 2005 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 2005 is turned down. In another embodiment, the processor 2001 may also dynamically adjust the shooting parameters of the camera assembly 2006 according to the ambient light intensity collected by the optical sensor 2015.
The proximity sensor 2016, also known as a distance sensor, is typically disposed on the front panel of the computer device 2000. The proximity sensor 2016 is used to capture the distance between the user and the front of the computer device 2000. In one embodiment, when the proximity sensor 2016 detects that the distance between the user and the front of the computer device 2000 gradually decreases, the touch display screen 2005 is controlled by the processor 2001 to switch from the bright-screen state to the screen-off state; when the proximity sensor 2016 detects that the distance between the user and the front of the computer device 2000 gradually increases, the touch display screen 2005 is controlled by the processor 2001 to switch from the screen-off state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in FIG. 20 is not intended to be limiting of the computer device 2000 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
In an exemplary embodiment, a non-transitory computer readable storage medium including instructions, such as a memory including at least one instruction, at least one program, set of codes, or set of instructions, executable by a processor to perform all or part of the steps of the method illustrated in the corresponding embodiments of fig. 3, 6, 17, or 18 is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (12)

1. A method for prompting an object in a virtual scene is characterized in that the method is executed by a terminal and comprises the following steps:
displaying a display interface of an application program, wherein the display interface comprises a scene picture of a virtual scene; the virtual scene comprises a target area, wherein the target area is a space area which shields a sight line and has no collision volume; a front sight pattern is displayed in the display interface in an overlapping mode;
acquiring a first position and a second position, wherein the first position is the position of a first virtual object controlled by the terminal, and the second position is the position of a second virtual object except the first virtual object in the virtual scene;
acquiring a trigger position of the target area;
acquiring the radius of the target area;
constructing a target sphere by taking the triggering position of the target area as a sphere center, wherein the radius of the target sphere is the radius of the target area;
determining a cross-sectional circle in the target sphere, wherein the cross-sectional circle is a cut-off circle of a vertical plane where the first position and the second position are located in the target sphere;
acquiring the radius of the cross-sectional circle and a second distance, wherein the second distance is the distance from the center of the cross-sectional circle to a connecting line between the first position and the second position, or the second distance is the distance from the center of the cross-sectional circle to the intersection point of the connecting line and a straight line that passes vertically upward through the center of the cross-sectional circle;
if the second distance is not larger than the radius of the section circle, determining that the connecting line passes through the target area;
if the second distance is greater than the radius of the cross-sectional circle, determining that the connecting line does not pass through the target area;
and when a connecting line between the first position and the second position passes through the target area, shielding an object prompting function, wherein the object prompting function is used for prompting that other virtual objects except the first virtual object exist in the direction pointed by the sight bead pattern.
2. The method of claim 1, wherein the determining the position relationship between the connection line and the target area according to the trigger position of the target area and the radius of the target area comprises:
when the first position, the second position and the trigger position are in the same horizontal plane, acquiring a first distance from the trigger position to the connecting line;
if the first distance is not larger than the radius of the target area, determining that the connecting line passes through the target area;
if the first distance is larger than the radius of the target area, determining that the connecting line does not pass through the target area.
3. The method of claim 1, wherein the obtaining the radius of the target region comprises:
acquiring the triggered time length of the target area;
and acquiring the radius of the target area according to the triggered time length of the target area.
4. The method according to claim 1, wherein before determining the position relationship between the connection line and the target area according to the trigger position of the target area and the radius of the target area, the method further comprises:
acquiring a third distance between the first position and the second position, a fourth distance between the first position and the trigger position, and a fifth distance between the second position and the trigger position;
determining a position relationship between the connecting line and the target area according to the trigger position of the target area and the radius of the target area, including:
and if the third distance is not less than any one of the fourth distance and the fifth distance, determining the position relationship between the connecting line and the target area according to the trigger position of the target area and the radius of the target area.
5. The method of claim 1, wherein masking object hinting functions when a line between the first location and the second location crosses the target region comprises:
when a connecting line between the first position and the second position passes through the target area and no collision-capable object exists between the first position and the second position, if the sight bead pattern is over against the second virtual object, displaying the sight bead pattern in a non-prompting state; the sight bead pattern in the non-prompting state is used for prompting that no other virtual objects except the first virtual object exist in the pointing direction.
6. The method of claim 5, further comprising:
when the connecting line between the first position and the second position does not pass through the target area and no collision-capable object exists between the first position and the second position, if the sight bead pattern is directly opposite to the second virtual object, displaying the sight bead pattern in a prompt state; the sight bead pattern in the prompting state is used for prompting that other virtual objects except the first virtual object exist in the pointing direction.
7. The method of any one of claims 1 to 6, wherein the target area is a smoke area triggered by a smoke prop in the virtual scene being used.
8. A method for prompting an object in a virtual scene is characterized in that the method is executed by a terminal and comprises the following steps:
displaying a display interface of an application program, wherein the display interface comprises a scene picture of a virtual scene; the virtual scene comprises a target area, wherein the target area is a space area which shields a sight line and has no collision volume; a front sight pattern is displayed in the display interface in an overlapping mode;
when a connecting line between a first position and a second position passes through the target area and no collision-capable object exists between the first position and the second position, if the sight bead pattern is directly opposite a second virtual object, displaying the sight bead pattern in a non-prompting state; the sight bead pattern in the non-prompting state is used for prompting that no virtual object other than the first virtual object exists in the pointing direction; whether the connecting line passes through the target area is determined by acquiring a trigger position of the target area, acquiring the radius of the target area, constructing a target sphere with the trigger position of the target area as the sphere center, determining a cross-sectional circle in the target sphere, and acquiring the radius of the cross-sectional circle and a second distance, wherein the connecting line is determined to pass through the target area if the second distance is not greater than the radius of the cross-sectional circle; the radius of the target sphere is the radius of the target area; the cross-sectional circle is the circle cut in the target sphere by the vertical plane in which the first position and the second position are located; the second distance is the distance from the center of the cross-sectional circle to the connecting line, or the second distance is the distance from the center of the cross-sectional circle to the intersection point of the connecting line and a straight line that passes vertically upward through the center of the cross-sectional circle;
when the connecting line between the first position and the second position does not pass through the target area and no collision-capable object exists between the first position and the second position, if the sight bead pattern is directly opposite the second virtual object, displaying the sight bead pattern in a prompting state; the sight bead pattern in the prompting state is used for prompting that a virtual object other than the first virtual object exists in the pointing direction; the connecting line is determined not to pass through the target area if the second distance is greater than the radius of the cross-sectional circle;
wherein the first position is a position of the first virtual object controlled by the terminal, and the second position is a position of the second virtual object in the virtual scene other than the first virtual object.
9. An object prompting device in a virtual scene, wherein the device is applied in a terminal, and the device comprises:
the display module is used for displaying a display interface of an application program, and the display interface comprises a scene picture of a virtual scene; the virtual scene comprises a target area, wherein the target area is a space area which shields a sight line and has no collision volume; a front sight pattern is displayed in the display interface in an overlapping mode;
a first obtaining module, configured to obtain a first position and a second position, where the first position is a position of a first virtual object controlled by the terminal, and the second position is a position of a second virtual object in the virtual scene except the first virtual object;
the second acquisition module is used for acquiring the trigger position of the target area;
the third acquisition module is used for acquiring the radius of the target area;
the first determining module is used for constructing a target sphere by taking the triggering position of the target area as a sphere center, and the radius of the target sphere is the radius of the target area;
the first determining module is further configured to determine a cross-sectional circle in the target sphere, where the cross-sectional circle is a truncated circle of a vertical plane where the first position and the second position are located in the target sphere;
the first determining module is further configured to obtain the radius of the cross-sectional circle and a second distance, where the second distance is the distance from the center of the cross-sectional circle to a connecting line between the first position and the second position, or the second distance is the distance from the center of the cross-sectional circle to the intersection point of the connecting line and a straight line that passes vertically upward through the center of the cross-sectional circle;
the first determining module is further configured to determine that the connection line passes through the target area if the second distance is not greater than the radius of the cross-sectional circle;
the first determining module is further configured to determine that the connection line does not pass through the target area if the second distance is greater than the radius of the cross-sectional circle;
and a shielding module, configured to disable (shield) an object prompting function when a line connecting the first position and the second position passes through the target area, the object prompting function being used to prompt that a virtual object other than the first virtual object exists in the direction pointed to by the front sight pattern.
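The sphere/cross-section test recited in claim 9 reduces to a point-to-line distance comparison. The following is a minimal Python sketch, assuming 3D positions given as (x, y, z) tuples with z as the vertical ("up") axis; all function and variable names are illustrative, not taken from the patent:

```python
import math

UP = (0.0, 0.0, 1.0)  # assumed world "up" axis

def _sub(p, q):
    return (p[0] - q[0], p[1] - q[1], p[2] - q[2])

def _cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def _dot(u, v):
    return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

def _norm(u):
    return math.sqrt(_dot(u, u))

def point_to_line_distance(p, a, b):
    """Perpendicular distance from point p to the infinite line through a and b."""
    ab = _sub(b, a)
    return _norm(_cross(ab, _sub(p, a))) / _norm(ab)

def line_crosses_target_area(first_pos, second_pos, trigger_pos, radius):
    """Claim 9's test: build the target sphere at trigger_pos, cut it with the
    vertical plane through first_pos and second_pos, and compare the 'second
    distance' against the cross-sectional circle's radius."""
    a, b, c = first_pos, second_pos, trigger_pos
    # Normal of the vertical plane spanned by the connecting line and UP.
    n = _cross(_sub(b, a), UP)
    n_len = _norm(n)
    if n_len < 1e-9:
        # Degenerate case: the connecting line is vertical; fall back to a
        # direct point-to-line distance test (mathematically equivalent).
        return point_to_line_distance(c, a, b) <= radius
    n = (n[0] / n_len, n[1] / n_len, n[2] / n_len)
    d_plane = _dot(n, _sub(c, a))   # signed distance: sphere center to plane
    if abs(d_plane) > radius:
        return False                 # the vertical plane misses the sphere entirely
    # Radius and center of the cross-sectional circle cut by the plane.
    sec_radius = math.sqrt(radius ** 2 - d_plane ** 2)
    sec_center = _sub(c, (n[0] * d_plane, n[1] * d_plane, n[2] * d_plane))
    second_distance = point_to_line_distance(sec_center, a, b)
    return second_distance <= sec_radius
```

The condition "second distance not greater than the cross-sectional radius" is equivalent to the 3D distance from the sphere center to the connecting line being at most the sphere radius, which is why the claim's test correctly detects when the line passes through the target area.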
10. The apparatus of claim 9, wherein the first determining module comprises:
a first obtaining submodule, configured to obtain, when the first position, the second position, and the trigger position lie in the same horizontal plane, a first distance from the trigger position to the connecting line;
a first determining submodule, configured to determine that the connecting line passes through the target area if the first distance is not greater than the radius of the target area;
the first determining submodule is further configured to determine that the connecting line does not pass through the target area if the first distance is greater than the radius of the target area.
11. A computer device, comprising a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the object prompting method in a virtual scene according to any one of claims 1 to 8.
12. A computer-readable storage medium, storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the object prompting method in a virtual scene according to any one of claims 1 to 8.
CN201910890460.9A 2019-09-20 2019-09-20 Object prompting method, device, equipment and storage medium in virtual scene Active CN110585704B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910890460.9A CN110585704B (en) 2019-09-20 2019-09-20 Object prompting method, device, equipment and storage medium in virtual scene


Publications (2)

Publication Number Publication Date
CN110585704A CN110585704A (en) 2019-12-20
CN110585704B true CN110585704B (en) 2021-04-09

Family

ID=68861591

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910890460.9A Active CN110585704B (en) 2019-09-20 2019-09-20 Object prompting method, device, equipment and storage medium in virtual scene

Country Status (1)

Country Link
CN (1) CN110585704B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111228809A (en) * 2020-01-09 2020-06-05 腾讯科技(深圳)有限公司 Operation method, device, equipment and readable medium of virtual prop in virtual environment
CN111784844B (en) * 2020-06-09 2024-01-05 北京五一视界数字孪生科技股份有限公司 Method and device for observing virtual object, storage medium and electronic equipment
CN112076468B (en) * 2020-09-17 2022-07-22 腾讯科技(深圳)有限公司 Virtual environment picture display method, device, equipment and storage medium
CN115713614A (en) * 2022-11-25 2023-02-24 立讯精密科技(南京)有限公司 Image scene construction method and device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140066197A1 (en) * 2012-08-31 2014-03-06 DeNA Co., Ltd. System and method for facilitating interaction with a virtual space via a touch sensitive surface
CN108704308A (en) * 2018-05-17 2018-10-26 Oppo广东移动通信有限公司 Control method, device, storage medium and the electronic equipment of shooting game
CN109701280A (en) * 2019-01-24 2019-05-03 网易(杭州)网络有限公司 The control method and device that foresight is shown in a kind of shooting game

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105760076B (en) * 2016-02-03 2018-09-04 网易(杭州)网络有限公司 game control method and device
JP6795322B2 (en) * 2016-03-31 2020-12-02 株式会社バンダイナムコエンターテインメント Program and AR experience providing equipment
US10525355B2 (en) * 2016-11-01 2020-01-07 Htc Corporation Method, device, and non-transitory computer readable storage medium for interaction to event in virtual space
CN107617213B (en) * 2017-07-27 2019-02-19 网易(杭州)网络有限公司 Information processing method and device, storage medium, electronic equipment
CN110115838B (en) * 2019-05-30 2021-10-29 腾讯科技(深圳)有限公司 Method, device, equipment and storage medium for generating mark information in virtual environment



Similar Documents

Publication Publication Date Title
CN108619721B (en) Distance information display method and device in virtual scene and computer equipment
US11221726B2 (en) Marker point location display method, electronic device, and computer-readable storage medium
CN108710525B (en) Map display method, device, equipment and storage medium in virtual scene
CN108245893B (en) Method, device and medium for determining posture of virtual object in three-dimensional virtual environment
CN110585704B (en) Object prompting method, device, equipment and storage medium in virtual scene
CN109529319B (en) Display method and device of interface control and storage medium
WO2020043016A1 (en) Virtual carrier control method in virtual scene, computer device and storage medium
WO2019205881A1 (en) Method and apparatus for displaying information in virtual environment, device, and storage medium
CN108376424A (en) Method, apparatus, equipment and storage medium for carrying out view angle switch to three-dimensional virtual environment
CN108671543A (en) Labelled element display methods, computer equipment and storage medium in virtual scene
CN108694073B (en) Control method, device and equipment of virtual scene and storage medium
CN111124133A (en) Method, device, equipment and storage medium for danger prompt information in virtual scene
CN112044069B (en) Object prompting method, device, equipment and storage medium in virtual scene
CN111589132A (en) Virtual item display method, computer equipment and storage medium
CN111589125B (en) Virtual object control method and device, computer equipment and storage medium
CN109917910B (en) Method, device and equipment for displaying linear skills and storage medium
CN111659117B (en) Virtual object display method and device, computer equipment and storage medium
CN109821237B (en) Method, device and equipment for rotating visual angle and storage medium
CN110917616A (en) Orientation prompting method, device, equipment and storage medium in virtual scene
CN109634413B (en) Method, device and storage medium for observing virtual environment
CN111068324A (en) Virtual object control method, device, equipment and storage medium
CN110401898B (en) Method, apparatus, device and storage medium for outputting audio data
CN109407959B (en) Virtual object control method, device and storage medium in virtual scene
CN112245912B (en) Sound prompting method, device, equipment and storage medium in virtual scene
CN113041620B (en) Method, device, equipment and storage medium for displaying position mark

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40018605

Country of ref document: HK

GR01 Patent grant