CN111265869A - Virtual object detection method, device, terminal and storage medium - Google Patents


Info

Publication number
CN111265869A
CN111265869A (application number CN202010038421.9A; granted as CN111265869B)
Authority
CN
China
Prior art keywords
virtual object
prop
target
virtual
interactive prop
Prior art date
Legal status
Granted
Application number
CN202010038421.9A
Other languages
Chinese (zh)
Other versions
CN111265869B (en
Inventor
刘智洪
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010038421.9A priority Critical patent/CN111265869B/en
Publication of CN111265869A publication Critical patent/CN111265869A/en
Application granted granted Critical
Publication of CN111265869B publication Critical patent/CN111265869B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/53: Involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/537: Using indicators, e.g. showing the condition of a game character on screen
    • A63F13/5372: For tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
    • A63F13/5378: For displaying an additional top view, e.g. radar screens or maps
    • A63F13/80: Special adaptations for executing a specific game genre or game mode
    • A63F13/837: Shooting of targets
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80: Specially adapted for executing a specific type of game
    • A63F2300/8076: Shooting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a virtual object detection method, device, terminal and storage medium, and belongs to the technical field of terminals. The method includes: in response to a launch operation on a target interactive prop, controlling a first virtual object controlled by the terminal user to launch the target interactive prop in a virtual scene; in response to detecting that any second virtual object lies within the sensing range of the target interactive prop, determining a first position of the second virtual object; and displaying, according to the first position, a position identifier of the second virtual object in a map display area of the virtual scene, where the map display area is used to indicate the position of the first virtual object in the virtual space corresponding to the virtual scene. The user can thus determine the position of the second virtual object in time and detect it, so that the first virtual object can find the second virtual object by visual observation and smoothly carry out adversarial interaction with it, improving the user's game experience.

Description

Virtual object detection method, device, terminal and storage medium
Technical Field
The present application relates to the field of multimedia technologies, and in particular, to a method, an apparatus, a terminal, and a storage medium for detecting a virtual object.
Background
With the development of multimedia technology and the diversification of terminal functions, more and more games can be played on terminals. Shooting games are among the most popular. After the terminal starts the game program, a virtual scene is displayed in the program's interface, and a virtual object controlled by the current terminal user is shown in the scene; this virtual object can carry out adversarial interaction with other virtual objects through interactive props.
At present, the technology for building virtual scenes in shooting games is mature. The space in a virtual scene is increasingly large and can include varied terrain such as mountains, rivers, airports, stations and buildings, and virtual objects can rely on different terrain when interacting adversarially with one another.
However, as the terrain becomes more complex the number of obstacles increases, so when the user-controlled virtual object chases other virtual objects it is difficult to find and keep track of them; inside buildings in particular, the target is easily lost. Because the related art lacks an effective method for detecting other virtual objects in a virtual scene, the user-controlled virtual object cannot smoothly carry out adversarial interaction with them, resulting in a poor user experience.
Disclosure of Invention
The embodiments of the application provide a virtual object detection method, device, terminal and storage medium, with which a user can determine the position of a second virtual object in time and thereby detect it, so that the first virtual object can find the second virtual object by visual observation and smoothly carry out adversarial interaction with it, improving the user's game experience. The technical solution is as follows:
in one aspect, a method for detecting a virtual object is provided, where the method includes:
in response to a launch operation on a target interactive prop, controlling a first virtual object controlled by the terminal user to launch the target interactive prop in a virtual scene, where the target interactive prop is used to detect second virtual objects in the virtual scene other than the first virtual object;
in response to detecting that any second virtual object is included in the sensing range of the target interactive prop, determining a first position of the second virtual object; and
displaying, according to the first position, a position identifier of the second virtual object in a map display area, where the map display area is used to indicate the position of the first virtual object in the virtual space corresponding to the virtual scene.
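The three claimed steps (launch, sensing-range detection, minimap display) may be sketched as follows. This is an illustrative reconstruction, not the patent's implementation; all names such as `SensorProp`, `detect` and `sensing_distance` are assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class VirtualObject:
    object_id: str
    position: tuple  # (x, y, z) coordinates in the virtual scene

@dataclass
class SensorProp:
    position: tuple        # target position where the launched prop landed
    sensing_distance: float

    def detect(self, first_object, scene_objects):
        """Return minimap markers (id, first position) for every second
        virtual object inside the spherical sensing range."""
        markers = []
        for obj in scene_objects:
            if obj is first_object:  # the launcher itself is not detected
                continue
            dx, dy, dz = (a - b for a, b in zip(obj.position, self.position))
            if math.sqrt(dx * dx + dy * dy + dz * dz) <= self.sensing_distance:
                markers.append((obj.object_id, obj.position))
        return markers
```

The returned markers correspond to the position identifiers that the claimed display step draws in the map display area.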
In an optional implementation manner, after the indication identifier corresponding to the movement track is displayed in the virtual scene, the method further includes:
if the indication identifier meets a second display-cancellation condition, canceling display of the indication identifier in the virtual scene.
In an optional implementation manner, the sensing range of the target interactive prop is a spherical three-dimensional space centered at a target position where the target interactive prop is located.
In one aspect, an apparatus for detecting a virtual object is provided, the apparatus comprising:
a control module, configured to, in response to a launch operation on the target interactive prop, control a first virtual object controlled by the terminal user to launch the target interactive prop in a virtual scene, where the target interactive prop is used to detect second virtual objects in the virtual scene other than the first virtual object;
a detection module, configured to determine a first position of any second virtual object in response to detecting that the second virtual object is included in the sensing range of the target interactive prop; and
a display module, configured to display, according to the first position, a position identifier of the second virtual object in a map display area of the virtual scene, where the map display area is used to indicate the position of the first virtual object in the virtual space corresponding to the virtual scene.
In an optional implementation manner, the control module is further configured to display a launch option in a triggerable state in response to the target interactive prop being in an activated state, where the launch option is used to launch the target interactive prop; and, in response to a trigger operation on the launch option, control the first virtual object to launch the target interactive prop in the virtual scene.
In an optional implementation manner, the control module is further configured to acquire a second position of the first virtual object in the virtual scene and a third position indicated by a view indicator in the virtual scene, where the view indicator is used to indicate the viewing direction of the first virtual object; determine a launch trajectory of the target interactive prop based on the second position and the third position; and control the target interactive prop to move along the launch trajectory to a target position in the virtual scene.
In an optional implementation manner, the control module is further configured to determine a first launch direction of the target interactive prop according to the second position and the third position, and determine the launch trajectory of the target interactive prop according to the first launch direction, the initial speed of the target interactive prop and the direction of gravity.
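One way the first launch direction and trajectory might be computed from the second position (the launcher), the third position (the view-indicator point), the initial speed and gravity is sketched below; the gravity constant, function names and sampling scheme are assumptions, not taken from the patent.

```python
import math

GRAVITY = (0.0, -9.8, 0.0)  # assumed gravity vector (y-down, units/s^2)

def launch_direction(second_position, third_position):
    """First launch direction: unit vector from launcher toward aim point."""
    d = [t - s for s, t in zip(second_position, third_position)]
    n = math.sqrt(sum(c * c for c in d))
    return tuple(c / n for c in d)

def launch_trajectory(second_position, direction, initial_speed,
                      steps=5, dt=0.1):
    """Sample the ballistic arc p(t) = p0 + v0*t + 0.5*g*t^2."""
    points = []
    for i in range(1, steps + 1):
        t = i * dt
        points.append(tuple(
            p + direction[k] * initial_speed * t + 0.5 * GRAVITY[k] * t * t
            for k, p in enumerate(second_position)))
    return points
```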
In an optional implementation manner, the control module is further configured to predict a second launch direction of the target interactive prop according to the second position, the third position, and the moving speed and moving direction of the first virtual object; and predict the launch trajectory of the target interactive prop according to the second launch direction, the moving speed and moving direction of the first virtual object, and the initial speed of the target interactive prop and the direction of gravity.
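One possible reading of this predictive variant is that the prop inherits the moving launcher's velocity in addition to its own initial speed. The sketch below makes that assumption explicit; the names, the gravity constant and the velocity-inheritance model are all illustrative.

```python
import math

GRAVITY_Y = -9.8  # assumed gravity magnitude along -y

def predicted_trajectory(second_pos, third_pos, move_dir, move_speed,
                         initial_speed, steps=3, dt=0.1):
    """Trajectory when the launcher itself is moving (assumed model:
    the prop's velocity is aim*initial_speed + launcher velocity)."""
    # Aim direction from the launcher toward the view-indicator point.
    d = [t - s for s, t in zip(second_pos, third_pos)]
    n = math.sqrt(sum(c * c for c in d))
    aim = [c / n for c in d]
    vel = [aim[k] * initial_speed + move_dir[k] * move_speed for k in range(3)]
    points = []
    for i in range(1, steps + 1):
        t = i * dt
        points.append((second_pos[0] + vel[0] * t,
                       second_pos[1] + vel[1] * t + 0.5 * GRAVITY_Y * t * t,
                       second_pos[2] + vel[2] * t))
    return points
```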
In an optional implementation, the apparatus further includes:
the display module being further configured to display a prop selection interface including at least one interactive prop;
the display module being further configured to, in response to a selection operation on any interactive prop, take the selected interactive prop as the target interactive prop; and
a state setting module, configured to set the target interactive prop to the activated state if the target interactive prop meets an activation condition.
In an optional implementation manner, the display module is further configured to, in response to a selection operation on any interactive prop, highlight the selected interactive prop and display introduction information of the prop on the prop selection interface; and, in response to a trigger operation on the confirmation option in the prop selection interface, take the selected interactive prop as the target interactive prop.
In an alternative implementation, the activation condition is any one of the following:
the time since the target interactive prop was last destroyed exceeds a first target duration;
the time since the target interactive prop was last launched exceeds a second target duration; or
the number of target interactive props available to be launched is not zero.
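Since the patent lists these as alternatives, a game would presumably pick one condition to evaluate. A minimal check, with field names, condition labels and threshold storage all being illustrative assumptions, might look like:

```python
def meets_activation_condition(prop, condition, now):
    """Evaluate one of the three alternative activation conditions.
    `prop` is a dict of timestamps/counters; all names are assumptions."""
    if condition == "cooldown_after_destroyed":
        return now - prop["last_destroyed_at"] > prop["first_target_duration"]
    if condition == "cooldown_after_launch":
        return now - prop["last_launched_at"] > prop["second_target_duration"]
    if condition == "stock_remaining":
        return prop["launchable_count"] != 0
    raise ValueError("unknown activation condition: " + condition)
```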
In an optional implementation manner, the display module is further configured to display, in the map display area of the virtual scene, a position identifier of the target interactive prop and a sensing-range identifier of the target interactive prop, where the sensing-range identifier is a circle centered on the position identifier of the target interactive prop with the sensing distance as its radius.
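Drawing the sensing-range circle in the map display area requires projecting scene coordinates into map pixels. A minimal linear mapping is sketched below; the axis conventions (ground plane spanned by x and z) and all names are assumptions.

```python
def world_to_minimap(world_pos, scene_min, scene_max, map_size):
    """Project a scene (x, z) ground position into map-display-area pixels."""
    x, _, z = world_pos
    u = (x - scene_min[0]) / (scene_max[0] - scene_min[0]) * map_size[0]
    v = (z - scene_min[1]) / (scene_max[1] - scene_min[1]) * map_size[1]
    return (u, v)

def sensing_circle(prop_pos, sensing_distance, scene_min, scene_max, map_size):
    """Center and pixel radius of the sensing-range identifier circle,
    scaled by the same factor as the x axis."""
    center = world_to_minimap(prop_pos, scene_min, scene_max, map_size)
    radius = sensing_distance / (scene_max[0] - scene_min[0]) * map_size[0]
    return center, radius
```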
In an optional implementation manner, the display module is further configured to, if the target interactive prop meets a first display-cancellation condition, cancel display of the target interactive prop in the virtual scene and cancel display of the position identifier and the sensing-range identifier of the target interactive prop in the map display area.
In an alternative implementation, the first display-cancellation condition is any one of the following:
the target interactive prop is attacked by any virtual object in the virtual scene;
the time since the target interactive prop was last launched exceeds a third target duration; or
the first virtual object launches the target interactive prop again.
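These cancellation events could drive a small lifecycle update that removes the prop from the scene and clears both of its map identifiers at once; the field names and the 40-second default for the third target duration are illustrative assumptions.

```python
def update_prop_display(prop, now, attacked, relaunched,
                        third_target_duration=40.0):
    """Apply the first display-cancellation condition: if the prop was
    attacked, has been out longer than the third target duration, or was
    launched again, hide it and clear its map identifiers."""
    expired = now - prop["launched_at"] > third_target_duration
    if attacked or expired or relaunched:
        prop["visible_in_scene"] = False
        prop["map_position_marker"] = None   # position identifier
        prop["map_range_marker"] = None      # sensing-range identifier
    return prop
```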
In an optional implementation, the apparatus further includes:
an acquisition module, configured to acquire the movement track of the second virtual object within the sensing range;
the display module being further configured to display an indication identifier corresponding to the movement track in the virtual scene.
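Recording the second virtual object's movement track inside the sensing range, for rendering the indication identifier, might be sketched as a bounded trail buffer; the class name, the buffer size and the segment representation are assumptions.

```python
from collections import deque

class MovementTrail:
    """Record positions of a second virtual object while it stays inside
    the sensing range, then emit segments for the track indicator."""
    def __init__(self, max_points=50):
        self.points = deque(maxlen=max_points)  # oldest samples drop off

    def update(self, position, in_sensing_range):
        if in_sensing_range:
            self.points.append(position)

    def indicator(self):
        """Line segments joining consecutive samples, ready to draw."""
        pts = list(self.points)
        return list(zip(pts, pts[1:]))
```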
In an optional implementation manner, the display module is further configured to cancel display of the indication identifier in the virtual scene if the indication identifier meets a second display-cancellation condition.
In an optional implementation manner, the sensing range of the target interactive prop is a spherical three-dimensional space centered at a target position where the target interactive prop is located.
In another aspect, a terminal is provided, including a processor and a memory, the memory storing at least one piece of program code, which is loaded and executed by the processor to implement the operations performed in the virtual object detection method of the embodiments of the present application.
In another aspect, a storage medium is provided, in which at least one piece of program code is stored, the program code being used to execute the virtual object detection method of the embodiments of the present application.
The technical solution provided by the embodiments of the application has the following beneficial effects:
in the embodiments of the application, the first virtual object launches a target interactive prop used to detect second virtual objects, so the target interactive prop can detect any second virtual object within its sensing range, and the position identifier of the second virtual object is displayed in the map display area. The user can thus determine the position of the second virtual object in time and detect it, so that the first virtual object can find the second virtual object by visual observation and smoothly carry out adversarial interaction with it, improving the user's game experience.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic implementation environment diagram of a detection method for a virtual object according to an embodiment of the present application;
FIG. 2 is a flowchart of a method for detecting a virtual object according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a prop selection interface provided according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a target interactive prop that is not in an activated state, according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a target interactive prop in an activated state, according to an embodiment of the present application;
FIG. 6 is a schematic diagram of determining a launch trajectory according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a sensing range provided in accordance with an embodiment of the present application;
FIG. 8 is a schematic diagram of an indicator corresponding to a movement track according to an embodiment of the present application;
FIG. 9 is a schematic illustration of a map display area provided in accordance with an embodiment of the present application;
FIG. 10 is a schematic diagram illustrating a display of a location identifier of a second virtual object according to an embodiment of the present application;
FIG. 11 is a flowchart of another method for detecting a virtual object according to an embodiment of the present application;
FIG. 12 is a block diagram of an apparatus for detecting a virtual object according to an embodiment of the present disclosure;
fig. 13 is a schematic structural diagram of a terminal provided in an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with some aspects of the present application, as detailed in the appended claims.
Hereinafter, terms related to the present application are explained.
Virtual scene: a scene displayed (or provided) by an application program when it runs on a terminal. The virtual scene may be a simulation of the real world, a semi-simulated, semi-fictional virtual environment, or a purely fictional virtual environment. It may be any of a two-dimensional, 2.5-dimensional or three-dimensional virtual scene; the dimensionality of the virtual scene is not limited in the embodiments of the present application. For example, a virtual scene may include sky, land and ocean; the land may include environmental elements such as deserts and cities; and the user can control a virtual object to move in the virtual scene.
Virtual object: a movable object in a virtual scene. The movable object may be a virtual character, a virtual animal, an animated character and so on, such as the characters, animals, plants, oil drums, walls and stones displayed in the virtual scene. The virtual object may be an avatar that virtually represents the user in the virtual scene. The virtual scene may include multiple virtual objects, each with its own shape and volume, occupying part of the space in the virtual scene.
Optionally, the virtual object may be a user character controlled through operations on the client, an artificial intelligence (AI) character placed in the virtual-scene battle through training, or a non-player character (NPC) placed in the virtual-scene interaction. Optionally, the virtual object may be a virtual character competing adversarially in the virtual scene. Optionally, the number of virtual objects participating in the interaction in the virtual scene may be preset, or may be determined dynamically according to the number of clients joining the interaction.
Taking a shooting game as an example, the user may control a virtual object to free-fall, glide or open a parachute to descend in the sky; to run, jump, climb, bend forward and move on land; or to swim, float or dive in the ocean. The user may also control a virtual object to ride a virtual vehicle, such as a virtual car, a virtual aircraft or a virtual yacht, to move through the virtual scene; the scenes above are merely examples and are not limiting here. The user may also control the virtual object to interact adversarially with other virtual objects through a virtual weapon, which may be a thrown weapon such as a grenade, a mine bundle or a sticky grenade, or a shooting weapon such as a machine gun, a pistol or a rifle; the type of virtual weapon is not specifically limited in the present application.
Hereinafter, a system architecture according to the present application will be described.
Fig. 1 is a schematic implementation environment diagram of a virtual object detection method according to an embodiment of the present application. Referring to fig. 1, the implementation environment includes: a terminal 110 and a server 120.
The terminal 110 has installed and runs an application program supporting the virtual scene. The application may be any of a first-person shooting (FPS) game, a third-person shooting game, a multiplayer online battle arena (MOBA) game, a virtual reality application, a three-dimensional map program, a military simulation program, or a multiplayer gunfight survival game. The terminal 110 may be a terminal used by a user, who uses it to operate a virtual object located in the virtual scene to carry out activities including, but not limited to, at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking and throwing. Illustratively, the virtual object is a virtual character, such as a simulated character or an animated character.
The terminal 110 may be connected to the server 120 through a wireless network or a wired network.
The server 120 may include at least one of a server, a plurality of servers, a cloud computing platform, or a virtualization center. The server 120 is used to provide background services for applications that support virtual scenarios. Alternatively, the server 120 may undertake primary computational tasks and the terminal 110 may undertake secondary computational tasks; alternatively, the server 120 undertakes the secondary computing work and the terminal 110 undertakes the primary computing work; alternatively, the server 120 and the terminal 110 perform cooperative computing by using a distributed computing architecture.
Optionally, the virtual object controlled by the terminal 110 (hereinafter the first virtual object) and virtual objects controlled by other terminals (hereinafter second virtual objects) are in the same virtual scene, where the first virtual object can interact with the second virtual objects. In some embodiments, the first and second virtual objects may be in a hostile relationship; for example, they may belong to different teams or organizations, and hostile virtual objects may interact adversarially on land by shooting at each other.
In an exemplary scenario, the terminal 110 controls the first virtual object to chase a second virtual object. When the first virtual object loses track of the second virtual object, it can launch the interactive prop used to detect second virtual objects; when a second virtual object is within the prop's sensing range, the terminal 110 displays the second virtual object's position identifier in the map display area, and the first virtual object can determine the second virtual object's position from the identifier and so find it. Conversely, when the first virtual object is being chased by a second virtual object, it can launch the detection prop and determine the second virtual object's position in real time, adjusting its escape route according to that position so as to escape successfully.
Terminal 110 may generally refer to one of multiple terminals. Its device type includes at least one of: a smartphone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer and a desktop computer. For example, the terminal 110 may be a smartphone, or another handheld portable gaming device.
Those skilled in the art will appreciate that the number of terminals may be greater or smaller; for example, there may be only one terminal, or tens or hundreds of terminals, or more. The number and device types of the terminals are not limited in the embodiments of the present application.
Fig. 2 is a flowchart of a virtual object detection method according to an embodiment of the present application. As shown in fig. 2, the method is described here as executed by a terminal, which may be the terminal 110 shown in fig. 1. The virtual object detection method includes the following steps:
201. The terminal displays a prop selection interface including at least one interactive prop.
In the embodiments of the application, an application program supporting the virtual scene may run on the terminal. Before displaying the virtual scene, the terminal may display a prop selection interface, which includes at least one interactive prop; these are the interactive props that can be used in the virtual scene. The prop selection interface may occupy the terminal's entire display page. Optionally, the terminal may instead overlay the prop selection interface on the virtual scene while displaying the scene.
For example, fig. 3 is a schematic diagram of a prop selection interface provided in an embodiment of the present application. Referring to fig. 3, the prop selection interface 310 is an exemplary interface in a multiplayer gunfight survival game, and in the preparation stage before a match starts the user can select any interactive prop 311 in the prop selection interface 310. At the end of the preparation stage, that is, after the countdown ends, the user can no longer change the interactive prop through the prop selection interface 310. In addition, when the first virtual object controlled by the terminal user plays in a team with virtual objects controlled by other terminal users, the interactive props selected by virtual objects in the same team must differ; that is, each interactive prop can be selected by only one virtual character in the team. The interactive props may include: a tracking chip for tracking, a treatment chip for healing, a stealing chip for stealing, a robot chip for deploying an intelligent robot, and so on. Optionally, the prop selection interface further includes a profile content area 312 for displaying information about an interactive prop, a confirmation option 313 for confirming the selected interactive prop, a close option 314 for closing the prop selection interface, and a countdown display area 315. The terminal cancels display of the prop selection interface 310 after receiving a trigger operation on the close option 314, or after the countdown ends. Of course, the prop selection interface 310 may also be displayed when a corresponding option, such as a prop replacement option, is triggered during the game.
A facility capable of replacing the interactive prop, such as a prop store, can also be set up in the virtual scene; the prop selection interface 310 is then triggered and displayed through that facility, so the interactive prop can be replaced. This is not specifically limited in the embodiments of the present application.
202. In response to a selection operation on any interactive prop, the terminal takes the selected interactive prop as a target interactive prop.
In this embodiment of the application, after displaying the prop selection interface, the terminal may, in response to a selection operation on any interactive prop, highlight the selected interactive prop and display its introduction information on the prop selection interface. Unselected interactive props may be displayed without highlighting, and interactive props that cannot be selected may be displayed in gray scale. The terminal can then, in response to a trigger operation on the confirmation option in the prop selection interface, take the selected interactive prop as the target interactive prop.
For example, the terminal user selects the tracking chip in the prop selection interface shown in fig. 3; the tracking chip is highlighted, other interactive props are not highlighted, and the treatment chip is displayed in gray scale because it has been selected by another user in the team, indicating that the terminal user cannot select it. When the tracking chip is selected, the terminal displays information about the tracking chip in the profile content area 312 of the prop selection interface, that is, the left area of the prop selection interface in fig. 3. The terminal user can click the confirmation option 313 in fig. 3 to use the tracking chip as the target interactive prop in the virtual scene. Before clicking the confirmation option, the user can freely change the selected interactive prop; once the user clicks the confirmation option, the selected interactive prop becomes the target interactive prop. After the user closes the prop selection interface by clicking the close option, the interface can be opened again through an opening option for reselection, but after the countdown ends, the user can no longer open the prop selection interface this way. Optionally, the user may reselect the interactive prop through a facility set in the virtual scene.
203. If the target interactive prop meets an activation condition, the terminal sets the target interactive prop to an activated state.
In this embodiment of the application, after determining the target interactive prop, the terminal may display a launch option for launching the target interactive prop. The launch option is initially displayed in gray scale, that is, in an un-triggerable state, to indicate that the target interactive prop is in an inactivated state. When the target interactive prop meets the activation condition, the terminal can activate it and set it to the activated state. The activation condition may be that the time since the target interactive prop was last destroyed exceeds a first target duration, or that the time since the target interactive prop was last launched exceeds a second target duration, or that the launchable count of the target interactive prop is not zero. The embodiments of the application do not limit the activation condition of the target interactive prop. Setting an activated state, an inactivated state, and an activation condition for the target interactive prop prevents the first virtual object from depending excessively on the function provided by the prop during the game, and different timing and deployment positions of the prop produce different game effects, which makes the game more interesting and less predictable.
For example, take the case where the activation condition is that the time since the target interactive prop was last destroyed exceeds the first target duration. The target interactive prop can be destroyed; after it is destroyed, the terminal starts timing, and when the timing reaches the first target duration, the target interactive prop meets the activation condition and can be launched again. The first target duration, which may also be called the cooldown of the target interactive prop, may be 30 seconds, 45 seconds, 60 seconds, 90 seconds, or the like; it is not particularly limited in the embodiments of the application. Because the terminal starts timing from the destruction of the target interactive prop, the prop cannot meet the activation condition before the timing reaches the first target duration, so the first virtual object cannot launch it again immediately. This limits the use of the target interactive prop and prevents an overly powerful prop function from damaging the game balance.
For another example, take the case where the activation condition is that the time since the target interactive prop was last launched exceeds the second target duration. The terminal starts timing after the target interactive prop is launched, and when the timing reaches the second target duration, the target interactive prop meets the activation condition and can be launched again. Optionally, when the target interactive prop is launched again, the terminal may cancel the display of the previously launched prop, that is, display only the most recently launched prop; alternatively, the terminal may keep displaying previously launched props, that is, display multiple target interactive props at the same time. The second target duration, which may also be called the cooldown of the target interactive prop, may be 30 seconds, 45 seconds, 60 seconds, 90 seconds, or the like; it is not particularly limited in the embodiments of the application. Because timing starts after each launch, the first virtual object may have at least two target interactive props deployed at the same time, so it can place props at important locations to detect second virtual objects, learn the movements of other second virtual objects in advance, and seize the initiative, which improves the strategic depth of the game.
For another example, take the case where the activation condition is that the launchable count of the target interactive prop is not zero. The terminal can control the first virtual object, which is controlled by the user, to launch at least one target interactive prop. If the first virtual object can launch only one target interactive prop, the maximum launchable count is 1, and the activation condition is met whenever the launchable count is not zero. Because the first virtual object can launch only one target interactive prop, the importance of the prop is greatly increased and various tactics based on it emerge, which increases the strategic depth of the game and improves the user's game experience. If the first virtual object can launch two or more target interactive props, each launch reduces the launchable count by 1, and the activation condition is met as long as the launchable count is not zero. Because the first virtual object can launch multiple target interactive props, it can use them for attack or defense as well as for monitoring important locations, which likewise increases the strategic depth of the game and improves the user's game experience. Optionally, the terminal may restore the launchable count of the target interactive prop by 1 for the first virtual object every fourth target duration.
Because the launchable count of the target interactive prop is recoverable, the number of target interactive props owned by each virtual object is unpredictable to opponents, which makes the game less predictable, increases its strategic depth, and improves the user's game experience.
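The three alternative activation conditions above admit a compact sketch. All function names, the timing model, and the example durations are illustrative assumptions, not details taken from the application:

```python
# Sketch of the three alternative activation conditions; durations are
# example values and all names are illustrative assumptions.

def destroy_cooldown_over(last_destroyed_at: float, now: float,
                          first_target_duration: float = 60.0) -> bool:
    """Condition 1: the time since the prop was last destroyed
    exceeds the first target duration."""
    return now - last_destroyed_at >= first_target_duration

def launch_cooldown_over(last_launched_at: float, now: float,
                         second_target_duration: float = 45.0) -> bool:
    """Condition 2: the time since the prop was last launched
    exceeds the second target duration."""
    return now - last_launched_at >= second_target_duration

def has_launchable_charge(launchable_count: int) -> bool:
    """Condition 3: the launchable count of the prop is not zero."""
    return launchable_count > 0
```

An embodiment would pick exactly one of these predicates as its activation condition; the optional recovery rule above would then increment the launchable count once per fourth target duration.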
204. In response to the target interactive prop being in the activated state, the terminal displays a launch option in a triggerable state, where the launch option is used for launching the target interactive prop.
In this embodiment of the application, when the target interactive prop is in the activated state, the terminal may highlight the launch option, and the user may launch the target interactive prop by clicking the launch option. When displaying the launch option, the terminal can simultaneously play a prompt sound effect or display prompt information to indicate that the target interactive prop is in the activated state and the launch option is in a triggerable state.
For example, referring to fig. 4 and 5, fig. 4 is a schematic diagram of a target interactive prop in an inactivated state according to an embodiment of the present application, where the launch option 401 is displayed in gray scale and cannot be triggered. Fig. 5 is a schematic diagram of a target interactive prop in an activated state according to an embodiment of the present application, where the launch option 401 is highlighted and in a triggerable state. After the launch option 401 is triggered, the terminal sets the target interactive prop to the inactivated state and displays the launch option 401 in gray scale; the terminal highlights the launch option 401 again once the target interactive prop returns to the activated state, for example when the cooldown of the target interactive prop ends.
205. In response to a trigger operation on the launch option, the terminal controls the first virtual object to launch the target interactive prop in the virtual scene, where the target interactive prop is used to detect second virtual objects in the virtual scene other than the first virtual object.
In the embodiments of the present application, a virtual scene may include a plurality of virtual objects. Herein, the virtual objects other than the first virtual object controlled by the terminal user are collectively referred to as second virtual objects; a second virtual object may be a virtual object controlled by another terminal user, an AI-controlled object, or an NPC object. The target interactive prop may be the interactive prop, among the at least one interactive prop, used for detecting second virtual objects. When detecting a trigger operation on the launch option, the terminal can, in response to the trigger operation, control the first virtual object to launch the target interactive prop in the virtual scene.
In an optional implementation, the terminal may determine a launch trajectory of the target interactive prop and then control the first virtual object to launch the target interactive prop along the launch trajectory. Correspondingly, the steps can be as follows: the terminal acquires a second position of the first virtual object in the virtual scene and a third position indicated by a view indication identifier in the virtual scene, where the view indication identifier is used to indicate the view direction of the first virtual object. The terminal can determine the launch trajectory of the target interactive prop based on the second position and the third position, and control the target interactive prop to move along the launch trajectory to a target position in the virtual scene. The second position of the first virtual object in the virtual scene is the three-dimensional coordinate of the first virtual object in the virtual space corresponding to the virtual scene. The view direction indicated by the view indication identifier may be any direction in the virtual scene; it defines a ray with the first virtual object as its origin, and the position where the ray intersects an obstacle in the virtual scene is the third position. If the ray does not intersect any obstacle in the virtual scene, or the distance between the intersection position and the first virtual object exceeds a target distance threshold, the terminal may take the point on the ray at the target distance threshold from the first virtual object as the third position.
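The third-position rule above can be sketched with the ray cast abstracted into a hit distance. All names and the 100-unit threshold are assumptions for illustration:

```python
def compute_third_position(second_pos, view_dir, hit_distance=None,
                           target_distance_threshold=100.0):
    """Return the third position indicated by the view indication identifier.

    second_pos: the first virtual object's coordinates (origin of the ray).
    view_dir: unit vector of the view direction.
    hit_distance: distance along the ray to the first obstacle, or None.
    If the ray hits nothing, or the hit lies beyond the target distance
    threshold, the point at the threshold distance is used instead.
    """
    if hit_distance is None or hit_distance > target_distance_threshold:
        d = target_distance_threshold
    else:
        d = hit_distance
    return tuple(p + d * v for p, v in zip(second_pos, view_dir))
```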
In an optional implementation, the terminal may determine the launch trajectory of the target interactive prop according to a preset initial velocity of the target interactive prop and the gravity direction in the virtual space corresponding to the virtual scene. Correspondingly, the step in which the terminal determines the launch trajectory based on the second position and the third position may be: the terminal determines a first launch direction of the target interactive prop according to the second position and the third position, and then determines the launch trajectory according to the first launch direction, the initial velocity of the target interactive prop, and the gravity direction. The first launch direction is the direction from the second position toward the third position. The launch trajectory may be a straight line or a parabola, depending on the first launch direction: if the included angle between the first launch direction and the gravity direction is smaller than a target angle threshold, the launch trajectory is a straight line; if the included angle is larger than the target angle threshold, the launch trajectory is a parabola. The target angle threshold may be 30°, 45°, 60°, or the like, which is not limited in this application. Because the launch trajectory is determined based on the second position of the first virtual object and the third position indicated by the view indication identifier, the user can launch the target interactive prop to a desired position along the trajectory and thereby detect second virtual objects.
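The line-versus-parabola rule can be illustrated as follows; the 45° threshold, the gravity vector, and all names are example assumptions:

```python
import math

def angle_to_gravity_deg(launch_dir, gravity_dir=(0.0, -1.0, 0.0)):
    """Included angle, in degrees, between the first launch direction
    and the gravity direction."""
    dot = sum(a * b for a, b in zip(launch_dir, gravity_dir))
    na = math.sqrt(sum(a * a for a in launch_dir))
    nb = math.sqrt(sum(b * b for b in gravity_dir))
    return math.degrees(math.acos(dot / (na * nb)))

def trajectory_shape(launch_dir, target_angle_threshold=45.0):
    """Straight line if the angle to gravity is below the threshold,
    otherwise a parabola, as described above."""
    if angle_to_gravity_deg(launch_dir) < target_angle_threshold:
        return "line"
    return "parabola"
```

A near-downward launch thus travels as a straight line, while a near-horizontal launch follows a parabola under gravity.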
For example, the terminal may control the first virtual object to launch the target interactive prop by throwing, in which case the launch trajectory is the throwing trajectory of the first virtual object, a parabola. The terminal can also control the first virtual object to launch the target interactive prop through a launcher, which may be a firearm or another prop. Referring to fig. 6, fig. 6 is a schematic diagram of determining a launch trajectory according to an embodiment of the present application. In fig. 6, the position of the palm of the first virtual object is the second position 601, the position in the virtual space indicated by the view indication identifier is the third position 602, the direction indicated by the dotted line is the first launch direction 603, and the gravity direction is perpendicular to the first launch direction 603 and points toward the ground of the virtual space. The terminal calculates the launch trajectory 604, that is, the parabola in fig. 6, from the preset initial velocity, the gravity direction, and the first launch direction 603.
In a possible implementation, the first virtual object may launch the target interactive prop while moving. In this case, the second position and the third position are the instantaneous positions obtained when the terminal detects that the launch option is triggered, and the terminal may predict the launch trajectory of the target interactive prop by combining the moving direction and moving speed of the first virtual object. Correspondingly, the step in which the terminal determines the launch trajectory based on the second position and the third position may be: the terminal predicts a second launch direction of the target interactive prop according to the second position, the third position, the moving speed of the first virtual object, and the moving direction of the first virtual object; the terminal then predicts the launch trajectory according to the second launch direction, the moving speed and moving direction of the first virtual object, the initial velocity of the target interactive prop, and the gravity direction. Optionally, the terminal may display the predicted launch trajectory while the user presses the launch option, and control the target interactive prop to move along the launch trajectory to the target position in the virtual scene when the user releases the launch option. Because the moving speed and moving direction of the first virtual object are taken into account, the launch velocity of the target interactive prop is the superposition of the preset initial velocity and the movement velocity, so the predicted launch trajectory better matches a launch trajectory in the real world, improving the game's sense of realism.
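The superposition of the preset initial velocity with the first virtual object's movement velocity, which the prediction above relies on, amounts to a simple vector sum. This is a sketch; the names are assumptions:

```python
def predicted_launch_velocity(aim_dir, initial_speed, move_dir, move_speed):
    """Superpose the preset initial velocity (along the aim direction)
    with the first virtual object's movement velocity, component-wise."""
    return tuple(initial_speed * a + move_speed * m
                 for a, m in zip(aim_dir, move_dir))
```

Integrating this velocity under the gravity direction then yields the predicted parabolic launch trajectory.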
206. In response to any second virtual object being within the sensing range of the target interactive prop, the terminal determines a first position of the second virtual object.
In this embodiment, the target interactive prop has a certain sensing range, and the sensing range is a spherical three-dimensional space centered on a target position where the target interactive prop is located. When any second virtual object appears in the spherical three-dimensional space, namely when the terminal detects any second virtual object in the sensing range of the target interaction prop, the terminal can determine the first position of the second virtual object. The first position is a three-dimensional coordinate of the second virtual object in a virtual space corresponding to the virtual scene.
In an optional implementation, the terminal starts detecting virtual objects within the sensing range as soon as the target interactive prop is launched, and if any second virtual object is detected, determines the first position of that second virtual object. Alternatively, the terminal can determine in real time the distance between the target interactive prop and nearby virtual objects, and if the distance between any second virtual object and the target interactive prop is smaller than the sensing distance of the target interactive prop, the terminal determines the first position of that second virtual object.
In a possible implementation, the terminal starts detecting virtual objects within the sensing range only after the target interactive prop reaches the target position in the virtual scene, and if any second virtual object is detected, determines the first position of that second virtual object. Alternatively, after the target interactive prop reaches the target position, the terminal can determine in real time the distance between the target interactive prop and nearby virtual objects, and if the distance between any second virtual object and the target interactive prop is smaller than the sensing distance of the target interactive prop, the terminal determines the first position of that second virtual object.
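Both variants above reduce to a point-in-sphere test against the target position of the prop. A minimal sketch, with illustrative names:

```python
import math

def within_sensing_range(target_pos, object_pos, sensing_distance):
    """True if a virtual object lies inside the spherical sensing range
    centered on the target position of the prop."""
    return math.dist(target_pos, object_pos) < sensing_distance

def detect_first_positions(target_pos, second_objects, sensing_distance):
    """First positions of all second virtual objects within the sensing range."""
    return [pos for pos in second_objects
            if within_sensing_range(target_pos, pos, sensing_distance)]
```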
For example, referring to fig. 7, fig. 7 is a schematic diagram of a sensing range provided according to an embodiment of the present application. In fig. 7, the target interactive prop 701 starts detecting virtual objects within its sensing range after reaching the target position in the virtual scene. The sensing range of the target interactive prop 701 is shown by a dotted line in fig. 7; its radius is the sensing distance of the target interactive prop, and in practice the sensing range may be larger or smaller than shown.
In an optional implementation, after determining the first position of the second virtual object, the terminal may further acquire the movement trajectory of the second virtual object within the sensing range and display an indication identifier corresponding to the movement trajectory in the virtual scene. The indication identifier may be a footprint, an arrow, a sign, or the like, which is not particularly limited in the embodiments of the present application. In addition, if the indication identifier meets a second cancel-display condition, the terminal may cancel the display of the indication identifier in the virtual scene. The second cancel-display condition may be that the display of the target interactive prop is canceled, that the second virtual object indicated by the indication identifier is eliminated, or that the time for which the indication identifier has existed exceeds a fifth target duration.
For example, referring to fig. 8, fig. 8 is a schematic diagram of an indication identifier corresponding to a movement trajectory provided in this embodiment. When a second virtual object enters the sensing range of the target interactive prop launched by the first virtual object, the terminal acquires the first position of the second virtual object. If the second virtual object moves within the sensing range, the terminal acquires its position once every fixed interval, for example 1 second, determines the movement trajectory of the second virtual object from the acquired positions, and displays a footprint at each position as the indication identifier 801.
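The fixed-interval sampling of the movement trajectory can be sketched as follows; the 1-second interval matches the example above, while the class and field names are assumptions:

```python
class TrailRecorder:
    """Records footprint positions of a detected second virtual object,
    sampling at most once per interval."""

    def __init__(self, interval: float = 1.0):
        self.interval = interval
        self.last_sample_time = None
        self.footprints = []  # positions where footprint identifiers are shown

    def update(self, now: float, position) -> None:
        """Call each frame with the current time and the object's position."""
        if self.last_sample_time is None or now - self.last_sample_time >= self.interval:
            self.footprints.append(position)
            self.last_sample_time = now
```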
In a possible implementation, while displaying the virtual scene, the terminal may also display a map of the virtual space corresponding to the virtual scene in a map display area of the virtual scene, where the map display area is used to indicate the position of the first virtual object in that virtual space. The map display area may display the full map of the virtual space or only a partial map; optionally, the user can adjust the displayed map range through a zoom button of the map display area. The terminal can display, in the map display area, the position identifier of the target interactive prop and the sensing range identifier of the target interactive prop. The sensing range identifier is a circle centered on the position identifier of the target interactive prop with the sensing distance as its radius.
For example, referring to fig. 9, fig. 9 is a schematic view of a map display area provided according to an embodiment of the present application. The map display area 910 floats above the virtual scene in the upper right corner of fig. 9; its position does not change as the virtual scene changes, only its displayed content does. The position identifier 911 of the target interactive prop 701 is a highlighted dot, and the sensing range identifier 912 is a circle centered on that dot.
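Drawing the position identifier and sensing range identifier in the map display area requires projecting world coordinates onto the map. One common sketch follows; the ground-plane convention, the uniform scaling, and all names are assumptions, not details from the application:

```python
def world_to_map(world_pos, world_min, world_size, map_size):
    """Project a 3-D world position onto 2-D map-display-area coordinates,
    assuming the x/z plane is the ground and the map shows the whole space.

    world_pos: (x, y, z) position in the virtual space.
    world_min: (x, z) corner of the virtual space the map starts at.
    world_size: (width, depth) of the virtual space.
    map_size: (width, height) of the map display area in pixels.
    """
    x, _, z = world_pos
    u = (x - world_min[0]) / world_size[0]
    v = (z - world_min[1]) / world_size[1]
    return (u * map_size[0], v * map_size[1])

def sensing_radius_on_map(sensing_distance, world_size, map_size):
    """Radius of the sensing range identifier circle on the map."""
    return sensing_distance * map_size[0] / world_size[0]
```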
207. According to the first position, the terminal displays the position identifier of the second virtual object in the map display area, where the map display area is used to indicate the position of the first virtual object in the virtual space corresponding to the virtual scene.
In the embodiment of the application, after determining the first position of the second virtual object, the terminal may represent the position of the second virtual object by a position identifier in the map display area.
For example, referring to fig. 10, fig. 10 is a schematic diagram illustrating a position identifier of a second virtual object according to an embodiment of the present application. In fig. 10, the sensing range of the target interactive prop includes 3 second virtual objects, and the terminal highlights position identifiers 1001 of the 3 second virtual objects in the map display area. Optionally, the terminal may further update the position of the position identifier 1001 of each second virtual object in the map display area 910 in real time according to the real-time position of the second virtual object.
It should be noted that the target interactive prop does not exist indefinitely. If the target interactive prop meets a first cancel-display condition, the terminal may cancel its display in the virtual scene, as well as the display of its position identifier and sensing range identifier in the map display area. The first cancel-display condition may be: the target interactive prop is attacked by any virtual object in the virtual scene; or the time since the target interactive prop was last launched exceeds a third target duration; or the first virtual object launches the target interactive prop again.
For example, take the case where the first cancel-display condition is that the target interactive prop is attacked by any virtual object in the virtual scene. The target interactive prop can be destroyed by a virtual object's attack, for example an attack carried out with props such as firearms or grenades. Optionally, the target interactive prop is destroyed when any virtual object in the virtual scene attacks it; or when any second virtual object attacks it; or when a virtual object belonging to a different team from the first virtual object attacks it, in which case the first virtual object and the virtual objects on its team cannot attack the target interactive prop. In addition, the target interactive prop can be removed by any virtual object, and the removal operation can also be regarded as an attack on the target interactive prop.
For another example, take the case where the first cancel-display condition is that the time since the target interactive prop was last launched exceeds the third target duration. The target interactive prop has a limited lifetime, beyond which it is removed from the virtual scene and no longer displayed. The lifetime is timed from when the target interactive prop is launched until the third target duration is reached. If the target interactive prop is not destroyed within the third target duration, its display is canceled when the third target duration is reached; if the target interactive prop is destroyed within the third target duration, the timing stops.
For another example, take the case where the first cancel-display condition is that the first virtual object launches the target interactive prop again. In this case, the first virtual object can deploy only one target interactive prop at a time; when the first virtual object launches a new target interactive prop, the previously launched one is removed from the virtual scene and no longer displayed.
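The three alternative first cancel-display conditions can be combined into one check. This is a sketch; the field names and the 120-second lifetime are assumptions:

```python
def should_cancel_display(prop: dict, now: float,
                          third_target_duration: float = 120.0) -> bool:
    """Return True if the launched target interactive prop should be removed
    from the virtual scene and from the map display area."""
    if prop.get("destroyed"):            # attacked or removed by a virtual object
        return True
    if now - prop["launched_at"] >= third_target_duration:
        return True                      # lifetime exceeded
    if prop.get("superseded"):           # the first virtual object launched again
        return True
    return False
```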
It should be noted that the foregoing steps 201 to 207 are an optional implementation of the virtual object detection method provided in the embodiments of the present application, and other optional implementations can also implement the method. For example, referring to fig. 11, fig. 11 is a flowchart of another virtual object detection method according to an embodiment of the present application. The terminal first displays the prop selection interface and determines the target interactive prop; it then highlights the launch option when the target interactive prop is in the activated state, and displays the launch option in gray scale when it is not; it then judges whether the launch option is triggered and, if so, launches the target interactive prop; it then judges whether a second virtual object exists within the sensing range of the target interactive prop and, if so, highlights the position identifier in the map display area; it then judges whether the first virtual object moves to the position indicated by the position identifier and, if so, displays the indication identifier corresponding to the movement trajectory of the second virtual object; finally, it judges whether the target interactive prop meets the first cancel-display condition and, if so, cancels the display of the target interactive prop.
In the embodiments of the present application, the first virtual object launches a target interactive prop used for detecting second virtual objects, so that the target interactive prop can detect second virtual objects within its sensing range and the position identifiers of the second virtual objects are displayed in the map display area. The user can thus determine the positions of second virtual objects in time, and the first virtual object can find a second virtual object by visual observation and smoothly carry out confrontational interaction with it, which improves the user's game experience.
Fig. 12 is a block diagram of a virtual object detection apparatus according to an embodiment of the present application. The apparatus is configured to perform the steps of the virtual object detection method described above. Referring to fig. 12, the apparatus includes: a control module 1201, a detection module 1202, and a display module 1203.
A control module 1201, configured to control, in response to a launch operation on a target interactive prop, a first virtual object controlled by a terminal user to launch the target interactive prop in a virtual scene, where the target interactive prop is used to detect a second virtual object in the virtual scene other than the first virtual object;
a detection module 1202, configured to determine a first position of a second virtual object in response to detecting that any second virtual object is included in a sensing range of the target interaction prop;
a display module 1203, configured to display, according to the first position, a position identifier of the second virtual object in a map display area, where the map display area is used to indicate a position of the first virtual object in a virtual space corresponding to the virtual scene.
In an optional implementation manner, the control module 1201 is further configured to, in response to that the target interactive prop is in an activated state, display a launching option in a triggerable state, where the launching option is used for launching the target interactive prop; and controlling the first virtual object to transmit the target interactive prop in the virtual scene in response to the triggering operation of the transmitting option.
In an optional implementation manner, the control module 1201 is further configured to acquire a second position of the first virtual object in the virtual scene and a third position indicated by the view indication identifier in the virtual scene, where the view indication identifier is used to indicate a view direction of the first virtual object; determining a launching track of the target interactive prop based on the second position and the third position; and controlling the target interactive prop to move to a target position in the virtual scene along the launching track.
In an optional implementation manner, the control module 1201 is further configured to determine a first launching direction of the target interactive prop according to the second position and the third position; and determine the launching trajectory of the target interactive prop according to the first launching direction, the initial speed of the target interactive prop, and the direction of gravity.
In an optional implementation manner, the control module 1201 is further configured to predict a second launching direction of the target interactive prop according to the second position, the third position, the moving speed of the first virtual object, and the moving direction of the first virtual object; and predict the launching trajectory of the target interactive prop according to the second launching direction, the moving speed of the first virtual object, the moving direction of the first virtual object, the initial speed of the target interactive prop, and the direction of gravity.
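The trajectory determination described above can be sketched as a simple ballistic simulation: the launching direction is derived from the two positions, the launcher's own velocity may be added to the prop's initial velocity, and gravity acts on the vertical axis. This Python sketch is illustrative only; the function name, Euler integration, coordinate convention (y up), and all numeric values are assumptions, and a zero mover velocity reduces it to the stationary-launch case.

```python
import math

def launch_trajectory(pos, aim, speed, mover_velocity=(0.0, 0.0, 0.0),
                      g=9.8, dt=0.05, steps=100):
    """Sample points of a ballistic trajectory for a launched prop.

    pos  -- launcher position (the "second position")
    aim  -- point indicated by the view-direction identifier (the "third position")
    """
    d = [a - p for a, p in zip(aim, pos)]
    norm = math.sqrt(sum(c * c for c in d))
    direction = [c / norm for c in d]
    # Initial velocity = launch direction * initial speed + launcher velocity.
    v = [direction[i] * speed + mover_velocity[i] for i in range(3)]
    points, p = [], list(pos)
    for _ in range(steps):
        p = [p[i] + v[i] * dt for i in range(3)]
        v[1] -= g * dt          # gravity only affects the vertical component
        points.append(tuple(p))
        if p[1] <= 0:           # stop once the prop reaches ground level
            break
    return points
```

The sampled points can then be used both to animate the prop's flight and to determine the target position where it lands.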
In an optional implementation, the apparatus further comprises:
the display module 1203 is further configured to display a prop selection interface including at least one interactive prop;
the display module 1203 is further configured to respond to a selection operation of any interactive prop, and use the selected interactive prop as a target interactive prop;
and a state setting module, configured to set the target interactive prop to an activated state if the target interactive prop meets an activation condition.
In an optional implementation manner, the display module 1203 is further configured to, in response to a selection operation on any interactive prop, highlight the selected interactive prop and display introduction information of the interactive prop on the prop selection interface; and in response to a triggering operation on the confirm option in the prop selection interface, take the selected interactive prop as the target interactive prop.
In an alternative implementation, the activation condition is any one of the following:
the time elapsed since the target interactive prop was last destroyed exceeds a first target duration;
the time elapsed since the target interactive prop was last launched exceeds a second target duration;
the number of target interactive props available for launching is not zero.
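The activation conditions listed above amount to cooldown and inventory checks. A minimal sketch, assuming a dictionary-based prop record and placeholder cooldown values (the application names first and second target durations but does not fix them):

```python
import time

def is_activatable(prop, now=None,
                   destroy_cooldown=30.0, launch_cooldown=10.0):
    """Return True if the prop satisfies any one of the activation conditions.

    The cooldown durations and the record's field names are illustrative
    placeholders, not values from the application.
    """
    now = time.monotonic() if now is None else now
    # Condition 1: enough time has passed since the prop was last destroyed.
    if prop.get("last_destroyed") is not None and \
            now - prop["last_destroyed"] > destroy_cooldown:
        return True
    # Condition 2: enough time has passed since the prop was last launched.
    if prop.get("last_launched") is not None and \
            now - prop["last_launched"] > launch_cooldown:
        return True
    # Condition 3: at least one prop remains available to launch.
    return prop.get("remaining_count", 0) > 0
```

Because the conditions are alternatives ("any one of"), the checks short-circuit as soon as one of them holds.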
In an optional implementation manner, the display module 1203 is further configured to display a position identifier of the target interactive prop and a sensing range identifier of the target interactive prop in a map display area of the virtual scene, where the sensing range identifier is a circle centered on the position identifier of the target interactive prop with the sensing distance as its radius.
In an optional implementation manner, the display module 1203 is further configured to, if the target interactive prop meets a first display cancellation condition, cancel the display of the target interactive prop in the virtual scene, and cancel the display of the position identifier and the sensing range identifier of the target interactive prop in the map display area.
In an alternative implementation, the first display cancellation condition is any one of the following:
the target interactive prop is attacked by any virtual object in the virtual scene;
the time elapsed since the target interactive prop was last launched exceeds a third target duration;
the first virtual object launches the target interactive prop again.
In an optional implementation, the apparatus further comprises:
an acquisition module, configured to acquire a movement track of the second virtual object within the sensing range;
the display module 1203 is further configured to display an indication identifier corresponding to the movement track in the virtual scene.
In an optional implementation manner, the display module 1203 is further configured to cancel the display of the indication identifier in the virtual scene if the indication identifier meets a second display cancellation condition.
In an optional implementation manner, the sensing range of the target interactive prop is a spherical three-dimensional space centered at a target position where the target interactive prop is located.
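The spherical sensing range reduces detection to a squared-distance test against the prop's target position. A minimal sketch, with all names assumed for illustration:

```python
def objects_in_sensing_range(prop_position, objects, radius):
    """Return the IDs of objects whose positions fall inside the spherical
    sensing range centered at the prop's target position."""
    r2 = radius * radius  # compare squared distances to avoid a sqrt per object
    detected = []
    for obj_id, pos in objects.items():
        d2 = sum((a - b) ** 2 for a, b in zip(pos, prop_position))
        if d2 <= r2:
            detected.append(obj_id)
    return detected

# Example: only the enemy within 30 units of the prop is detected.
enemies = {"enemy_a": (10.0, 0.0, 10.0), "enemy_b": (100.0, 0.0, 100.0)}
hits = objects_in_sensing_range((0.0, 0.0, 0.0), enemies, 30.0)
# hits == ["enemy_a"]
```

Each detected object's first position would then be handed to the display module for marking in the map display area.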
In the embodiments of this application, the first virtual object launches a target interactive prop used for detecting second virtual objects. The prop detects any second virtual object within its sensing range, and the position identifier of that second virtual object is displayed in the map display area, so that the user can determine the position of the second virtual object in time. The first virtual object can then find the second virtual object by visual observation and smoothly carry out adversarial interaction with it, improving the user's gaming experience.
It should be noted that when the apparatus for detecting a virtual object provided in the above embodiment runs an application program, the division into the above functional modules is merely illustrative; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus for detecting a virtual object provided in the above embodiment belongs to the same concept as the embodiments of the method for detecting a virtual object; for its specific implementation, refer to the method embodiments, which are not repeated here.
Fig. 13 shows a block diagram of a terminal 1300 according to an exemplary embodiment of the present application. The terminal 1300 may be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 1300 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, terminal 1300 includes: a processor 1301 and a memory 1302.
Processor 1301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 1301 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1301 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also referred to as a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing content that the display screen needs to display. In some embodiments, processor 1301 may further include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
Memory 1302 may include one or more computer-readable storage media, which may be non-transitory. The memory 1302 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1302 is used to store at least one instruction for execution by processor 1301 to implement the method of detecting a virtual object provided by method embodiments herein.
In some embodiments, terminal 1300 may further optionally include: a peripheral interface 1303 and at least one peripheral. Processor 1301, memory 1302, and peripheral interface 1303 may be connected by a bus or signal line. Each peripheral device may be connected to the peripheral device interface 1303 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1304, display screen 1305, camera assembly 1306, audio circuitry 1307, positioning assembly 1308, and power supply 1309.
Peripheral interface 1303 may be used to connect at least one peripheral associated with I/O (Input/Output) to processor 1301 and memory 1302. In some embodiments, processor 1301, memory 1302, and peripheral interface 1303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1301, the memory 1302, and the peripheral device interface 1303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 1304 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 1304 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1304 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1304 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 1304 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1304 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1305 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1305 is a touch display screen, the display screen 1305 also has the ability to capture touch signals on or over its surface. The touch signal may be input to the processor 1301 as a control signal for processing. In this case, the display screen 1305 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1305, provided on the front panel of terminal 1300; in other embodiments, there may be at least two display screens 1305, respectively disposed on different surfaces of terminal 1300 or in a folded design; in still other embodiments, the display screen 1305 may be a flexible display disposed on a curved or folded surface of terminal 1300. The display screen 1305 may even be arranged in a non-rectangular irregular shape, i.e., a shaped screen. The display screen 1305 may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
The camera assembly 1306 is used to capture images or video. Optionally, camera assembly 1306 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1306 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 1307 may include a microphone and a speaker. The microphone is used for collecting sound waves of the user and the environment, converting the sound waves into electrical signals, and inputting them to the processor 1301 for processing, or to the radio frequency circuit 1304 for voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of terminal 1300. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 1301 or the radio frequency circuit 1304 into sound waves. The speaker may be a conventional diaphragm speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert an electrical signal into sound waves audible to humans, or into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 1307 may also include a headphone jack.
The positioning component 1308 is used for determining the current geographic position of the terminal 1300 to implement navigation or LBS (Location Based Service). The positioning component 1308 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 1309 is used to provide power to various components in terminal 1300. The power source 1309 may be alternating current, direct current, disposable or rechargeable. When the power source 1309 comprises a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 1300 also includes one or more sensors 1310. The one or more sensors 1310 include, but are not limited to: acceleration sensor 1311, gyro sensor 1312, pressure sensor 1313, fingerprint sensor 1314, optical sensor 1315, and proximity sensor 1316.
The acceleration sensor 1311 can detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 1300. For example, the acceleration sensor 1311 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 1301 may control the display screen 1305 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1311. The acceleration sensor 1311 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1312 may detect the body direction and the rotation angle of the terminal 1300, and the gyro sensor 1312 may cooperate with the acceleration sensor 1311 to acquire a 3D motion of the user with respect to the terminal 1300. Processor 1301, based on the data collected by gyroscope sensor 1312, may perform the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensor 1313 may be disposed on a side bezel of terminal 1300 and/or underlying display 1305. When the pressure sensor 1313 is disposed on the side frame of the terminal 1300, a user's holding signal to the terminal 1300 may be detected, and the processor 1301 performs left-right hand recognition or shortcut operation according to the holding signal acquired by the pressure sensor 1313. When the pressure sensor 1313 is disposed at a lower layer of the display screen 1305, the processor 1301 controls an operability control on the UI interface according to a pressure operation of the user on the display screen 1305. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1314 is used for collecting the fingerprint of the user, and the processor 1301 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 1314, or the fingerprint sensor 1314 identifies the identity of the user according to the collected fingerprint. When the identity of the user is identified as a trusted identity, the processor 1301 authorizes the user to perform relevant sensitive operations, including unlocking a screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 1314 may be disposed on the front, back, or side of the terminal 1300. When a physical button or vendor Logo is provided on the terminal 1300, the fingerprint sensor 1314 may be integrated with the physical button or vendor Logo.
The optical sensor 1315 is used to collect the ambient light intensity. In one embodiment, the processor 1301 may control the display brightness of the display screen 1305 according to the ambient light intensity collected by the optical sensor 1315. Specifically, when the ambient light intensity is high, the display brightness of the display screen 1305 is increased; when the ambient light intensity is low, the display brightness of the display screen 1305 is reduced. In another embodiment, the processor 1301 can also dynamically adjust the shooting parameters of the camera assembly 1306 according to the ambient light intensity collected by the optical sensor 1315.
The proximity sensor 1316, also known as a distance sensor, is typically disposed on the front panel of terminal 1300. The proximity sensor 1316 is used to collect the distance between the user and the front face of terminal 1300. In one embodiment, when the proximity sensor 1316 detects that the distance between the user and the front face of the terminal 1300 gradually decreases, the processor 1301 controls the display screen 1305 to switch from the bright screen state to the dark screen state; when the proximity sensor 1316 detects that the distance between the user and the front face of the terminal 1300 gradually increases, the processor 1301 controls the display screen 1305 to switch from the dark screen state to the bright screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 13 is not intended to be limiting with respect to terminal 1300 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
The embodiments of the present application also provide a computer-readable storage medium applied to a terminal. The computer-readable storage medium stores at least one piece of program code, which is executed by a processor to implement the operations performed by the terminal in the method for detecting a virtual object in the embodiments of the present application.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (15)

1. A method for detecting a virtual object, the method comprising:
in response to a launching operation on a target interactive prop, controlling a first virtual object controlled by an end user to launch the target interactive prop in a virtual scene, wherein the target interactive prop is used for detecting second virtual objects in the virtual scene other than the first virtual object;
in response to detecting that any second virtual object is included in the sensing range of the target interaction prop, determining a first position of the second virtual object;
and displaying the position identifier of the second virtual object in a map display area of the virtual scene according to the first position, wherein the map display area is used for indicating the position of the first virtual object in a virtual space corresponding to the virtual scene.
2. The method of claim 1, wherein the controlling, in response to a launching operation on a target interactive prop, a first virtual object controlled by an end user to launch the target interactive prop in a virtual scene comprises:
in response to the target interactive prop being in an activated state, displaying a launching option in a triggerable state, wherein the launching option is used for launching the target interactive prop;
and in response to a triggering operation on the launching option, controlling the first virtual object to launch the target interactive prop in the virtual scene.
3. The method of claim 1, wherein said controlling a first virtual object controlled by an end user to launch said target interactive prop in a virtual scene comprises:
acquiring a second position of the first virtual object in the virtual scene and a third position indicated by a visual angle indication identifier in the virtual scene, wherein the visual angle indication identifier is used for indicating a visual angle direction of the first virtual object;
determining a launching trajectory of the target interactive prop based on the second position and the third position;
and controlling the target interaction prop to move to a target position in the virtual scene along the launching track.
4. The method of claim 3, wherein determining a launch trajectory of the target interactive prop based on the second location and the third location comprises:
determining a first launching direction of the target interactive prop according to the second position and the third position;
and determining the launching trajectory of the target interactive prop according to the first launching direction, the initial speed of the target interactive prop, and the direction of gravity.
5. The method of claim 3, wherein determining a launch trajectory of the target interactive prop from the second location and the third location comprises:
predicting a second launching direction of the target interactive prop according to the second position, the third position, the moving speed of the first virtual object, and the moving direction of the first virtual object;
and predicting the launching trajectory of the target interactive prop according to the second launching direction, the moving speed of the first virtual object, the moving direction of the first virtual object, the initial speed of the target interactive prop, and the direction of gravity.
6. The method of claim 1, wherein prior to said responding to the launching operation of the target interactive prop, the method further comprises:
displaying a prop selection interface comprising at least one interactive prop;
responding to the selection operation of any interactive prop, and taking the selected interactive prop as the target interactive prop;
and if the target interactive prop meets the activation condition, setting the target interactive prop to be in an activation state.
7. The method according to claim 6, wherein the taking, in response to a selection operation on any interactive prop, the selected interactive prop as the target interactive prop comprises:
in response to a selection operation on any interactive prop, highlighting the selected interactive prop and displaying introduction information of the interactive prop on the prop selection interface;
and in response to a triggering operation on the confirm option in the prop selection interface, taking the selected interactive prop as the target interactive prop.
8. The method according to claim 6, wherein the activation condition is any one of:
the time elapsed since the target interactive prop was last destroyed exceeds a first target duration;
the time elapsed since the target interactive prop was last launched exceeds a second target duration;
the number of target interactive props available for launching is not zero.
9. The method of claim 1, wherein after the controlling a first virtual object controlled by an end user to launch the target interactive prop in a virtual scene, the method further comprises:
displaying a position identifier of the target interactive prop and a sensing range identifier of the target interactive prop in a map display area of the virtual scene, wherein the sensing range identifier is a circle centered on the position identifier of the target interactive prop with the sensing distance as its radius.
10. The method of claim 9, further comprising:
and if the target interactive prop meets a first display cancellation condition, canceling the display of the target interactive prop in the virtual scene, and canceling the display of the position identifier of the target interactive prop and the sensing range identifier of the target interactive prop in the map display area.
11. The method according to claim 10, wherein the first display cancellation condition is any one of:
the target interactive prop is attacked by any virtual object in the virtual scene;
the time elapsed since the target interactive prop was last launched exceeds a third target duration;
the first virtual object launches the target interactive prop again.
12. The method of claim 1, wherein after determining the first location of the second virtual object, the method further comprises:
acquiring a moving track of the second virtual object in the sensing range;
and displaying an indication mark corresponding to the movement track in the virtual scene.
13. An apparatus for detecting a virtual object, the apparatus comprising:
the control module is used for responding to the transmitting operation of the target interactive prop, controlling a first virtual object controlled by a terminal user to transmit the target interactive prop in a virtual scene, wherein the target interactive prop is used for detecting a second virtual object in the virtual scene except the first virtual object;
the detection module is used for responding to the fact that any second virtual object is included in the sensing range of the target interaction prop, and determining a first position of the second virtual object;
and a display module, configured to display, according to the first position, a position identifier of the second virtual object in a map display area, where the map display area is used to indicate a position of the first virtual object in a virtual space corresponding to the virtual scene.
14. A terminal, characterized in that it comprises a processor and a memory, the memory being used for storing at least one piece of program code, the program code being loaded and executed by the processor to perform the method for detecting a virtual object according to any one of claims 1 to 12.
15. A storage medium for storing at least one piece of program code for performing the method for detecting a virtual object according to any one of claims 1 to 12.
CN202010038421.9A 2020-01-14 2020-01-14 Virtual object detection method, device, terminal and storage medium Active CN111265869B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010038421.9A CN111265869B (en) 2020-01-14 2020-01-14 Virtual object detection method, device, terminal and storage medium


Publications (2)

Publication Number Publication Date
CN111265869A true CN111265869A (en) 2020-06-12
CN111265869B CN111265869B (en) 2022-03-08

Family

ID=70991061

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010038421.9A Active CN111265869B (en) 2020-01-14 2020-01-14 Virtual object detection method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN111265869B (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111659122A (en) * 2020-07-09 2020-09-15 腾讯科技(深圳)有限公司 Virtual resource display method and device, electronic equipment and storage medium
CN111760285A (en) * 2020-08-13 2020-10-13 腾讯科技(深圳)有限公司 Virtual scene display method, device, equipment and medium
CN111888764A (en) * 2020-07-31 2020-11-06 腾讯科技(深圳)有限公司 Object positioning method and device, storage medium and electronic equipment
CN112007360A (en) * 2020-08-28 2020-12-01 腾讯科技(深圳)有限公司 Processing method and device for monitoring functional prop and electronic equipment
CN112057863A (en) * 2020-09-11 2020-12-11 腾讯科技(深圳)有限公司 Control method, device and equipment of virtual prop and computer readable storage medium
CN112057864A (en) * 2020-09-11 2020-12-11 腾讯科技(深圳)有限公司 Control method, device and equipment of virtual prop and computer readable storage medium
CN112099713A (en) * 2020-09-18 2020-12-18 腾讯科技(深圳)有限公司 Virtual element display method and related device
CN112090069A (en) * 2020-09-17 2020-12-18 腾讯科技(深圳)有限公司 Information prompting method and device in virtual scene, electronic equipment and storage medium
CN112121414A (en) * 2020-09-29 2020-12-25 腾讯科技(深圳)有限公司 Tracking method and device in virtual scene, electronic equipment and storage medium
CN112619134A (en) * 2020-12-22 2021-04-09 上海米哈游天命科技有限公司 Method, device and equipment for determining flight distance of transmitting target and storage medium
CN112965598A (en) * 2021-03-03 2021-06-15 北京百度网讯科技有限公司 Interaction method, device, system, electronic equipment and storage medium
CN113101647A (en) * 2021-04-14 2021-07-13 北京字跳网络技术有限公司 Information display method, device, equipment and storage medium
CN113457133A (en) * 2021-06-25 2021-10-01 网易(杭州)网络有限公司 Game display method, game display device, electronic equipment and storage medium
EP3970819A4 (en) * 2020-07-24 2022-08-03 Tencent Technology (Shenzhen) Company Limited Interface display method and apparatus, and terminal and storage medium
WO2022227915A1 (en) * 2021-04-30 2022-11-03 腾讯科技(深圳)有限公司 Method and apparatus for displaying position marks, and device and storage medium
WO2023016165A1 (en) * 2021-08-12 2023-02-16 腾讯科技(深圳)有限公司 Method and device for selecting virtual character, terminal and storage medium
WO2023226565A1 (en) * 2022-05-25 2023-11-30 腾讯科技(深圳)有限公司 Virtual character tracing method and apparatus, storage medium, device and program product
WO2024027344A1 (en) * 2022-08-05 2024-02-08 腾讯科技(深圳)有限公司 Social interaction method and apparatus, device, readable storage medium, and program product

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130122977A1 (en) * 2009-04-20 2013-05-16 Capcom Co., Ltd. Game machine, program for realizing game machine, and method of displaying objects in game
CN108635858A (en) * 2018-05-18 2018-10-12 Tencent Technology (Shenzhen) Co., Ltd. Interface display method, device, electronic device and computer readable storage medium
CN109107154A (en) * 2018-08-02 2019-01-01 Tencent Technology (Shenzhen) Co., Ltd. Virtual item control method for movement, device, electronic device and storage medium
CN109200582A (en) * 2018-08-02 2019-01-15 Tencent Technology (Shenzhen) Co., Ltd. The method, apparatus and storage medium that control virtual objects are interacted with ammunition
CN109939438A (en) * 2019-02-19 2019-06-28 Tencent Digital (Tianjin) Co., Ltd. Track display method and device, storage medium and electronic device
CN110585710A (en) * 2019-09-30 2019-12-20 Tencent Technology (Shenzhen) Co., Ltd. Interactive property control method, device, terminal and storage medium
CN110585731A (en) * 2019-09-30 2019-12-20 Tencent Technology (Shenzhen) Co., Ltd. Method, device, terminal and medium for throwing virtual article in virtual environment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HRL3: "Call of Duty Mobile survival mode chip practical operation strategy (Part 1): how to use the tracking chip, http://www.bilibili.com/video/av71109884", Bilibili Video *

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111659122B (en) * 2020-07-09 2023-09-22 Tencent Technology (Shenzhen) Co., Ltd. Virtual resource display method and device, electronic equipment and storage medium
CN111659122A (en) * 2020-07-09 2020-09-15 Tencent Technology (Shenzhen) Co., Ltd. Virtual resource display method and device, electronic equipment and storage medium
WO2022007567A1 (en) * 2020-07-09 2022-01-13 Tencent Technology (Shenzhen) Co., Ltd. Virtual resource display method and related device
JP2022544888A (en) * 2020-07-24 2022-10-24 Tencent Technology (Shenzhen) Co., Ltd. Interface display method, device, terminal, storage medium and computer program
JP7387758B2 (en) 2023-11-28 Tencent Technology (Shenzhen) Co., Ltd. Interface display method, device, terminal, storage medium and computer program
EP3970819A4 (en) * 2020-07-24 2022-08-03 Tencent Technology (Shenzhen) Company Limited Interface display method and apparatus, and terminal and storage medium
CN111888764A (en) * 2020-07-31 2020-11-06 Tencent Technology (Shenzhen) Co., Ltd. Object positioning method and device, storage medium and electronic equipment
CN111760285B (en) * 2020-08-13 2023-09-26 Tencent Technology (Shenzhen) Co., Ltd. Virtual scene display method, device, equipment and medium
CN111760285A (en) * 2020-08-13 2020-10-13 Tencent Technology (Shenzhen) Co., Ltd. Virtual scene display method, device, equipment and medium
CN112007360A (en) * 2020-08-28 2020-12-01 Tencent Technology (Shenzhen) Co., Ltd. Processing method and device for monitoring functional prop and electronic equipment
CN112057864A (en) * 2020-09-11 2020-12-11 Tencent Technology (Shenzhen) Co., Ltd. Control method, device and equipment of virtual prop and computer readable storage medium
CN112057863A (en) * 2020-09-11 2020-12-11 Tencent Technology (Shenzhen) Co., Ltd. Control method, device and equipment of virtual prop and computer readable storage medium
CN112057864B (en) * 2020-09-11 2024-02-27 Tencent Technology (Shenzhen) Co., Ltd. Virtual prop control method, device, equipment and computer readable storage medium
CN112090069A (en) * 2020-09-17 2020-12-18 Tencent Technology (Shenzhen) Co., Ltd. Information prompting method and device in virtual scene, electronic equipment and storage medium
WO2022057529A1 (en) * 2020-09-17 2022-03-24 Tencent Technology (Shenzhen) Co., Ltd. Information prompting method and apparatus in virtual scene, electronic device, and storage medium
CN112099713A (en) * 2020-09-18 2020-12-18 Tencent Technology (Shenzhen) Co., Ltd. Virtual element display method and related device
CN112099713B (en) * 2020-09-18 2022-02-01 Tencent Technology (Shenzhen) Co., Ltd. Virtual element display method and related device
CN112121414A (en) * 2020-09-29 2020-12-25 Tencent Technology (Shenzhen) Co., Ltd. Tracking method and device in virtual scene, electronic equipment and storage medium
CN112121414B (en) * 2020-09-29 2022-04-08 Tencent Technology (Shenzhen) Co., Ltd. Tracking method and device in virtual scene, electronic equipment and storage medium
CN112619134A (en) * 2020-12-22 2021-04-09 Shanghai Mihoyo Tianming Technology Co., Ltd. Method, device and equipment for determining flight distance of transmitting target and storage medium
CN112619134B (en) * 2020-12-22 2023-05-02 Shanghai Mihoyo Tianming Technology Co., Ltd. Method, device, equipment and storage medium for determining flight distance of transmission target
CN112965598A (en) * 2021-03-03 2021-06-15 Beijing Baidu Netcom Science and Technology Co., Ltd. Interaction method, device, system, electronic equipment and storage medium
CN112965598B (en) * 2021-03-03 2023-08-04 Beijing Baidu Netcom Science and Technology Co., Ltd. Interaction method, device, system, electronic equipment and storage medium
CN113101647B (en) * 2021-04-14 2023-10-24 Beijing Zitiao Network Technology Co., Ltd. Information display method, device, equipment and storage medium
CN113101647A (en) * 2021-04-14 2021-07-13 Beijing Zitiao Network Technology Co., Ltd. Information display method, device, equipment and storage medium
WO2022227915A1 (en) * 2021-04-30 2022-11-03 Tencent Technology (Shenzhen) Co., Ltd. Method and apparatus for displaying position marks, and device and storage medium
CN113457133A (en) * 2021-06-25 2021-10-01 NetEase (Hangzhou) Network Co., Ltd. Game display method, game display device, electronic equipment and storage medium
WO2022267589A1 (en) * 2021-06-25 2022-12-29 NetEase (Hangzhou) Network Co., Ltd. Game display method and apparatus, and electronic device and storage medium
CN113457133B (en) * 2021-06-25 2024-05-10 NetEase (Hangzhou) Network Co., Ltd. Game display method, game display device, electronic equipment and storage medium
WO2023016165A1 (en) * 2021-08-12 2023-02-16 Tencent Technology (Shenzhen) Co., Ltd. Method and device for selecting virtual character, terminal and storage medium
WO2023226565A1 (en) * 2022-05-25 2023-11-30 Tencent Technology (Shenzhen) Co., Ltd. Virtual character tracing method and apparatus, storage medium, device and program product
WO2024027344A1 (en) * 2022-08-05 2024-02-08 Tencent Technology (Shenzhen) Co., Ltd. Social interaction method and apparatus, device, readable storage medium, and program product

Also Published As

Publication number Publication date
CN111265869B (en) 2022-03-08

Similar Documents

Publication Publication Date Title
CN111265869B (en) Virtual object detection method, device, terminal and storage medium
CN110694261B (en) Method, terminal and storage medium for controlling virtual object to attack
CN108434736B (en) Equipment display method, device, equipment and storage medium in virtual environment battle
CN110448891B (en) Method, device and storage medium for controlling virtual object to operate remote virtual prop
CN111589131B (en) Control method, device, equipment and medium of virtual role
CN110755841B (en) Method, device and equipment for switching props in virtual environment and readable storage medium
CN110917619B (en) Interactive property control method, device, terminal and storage medium
EP4011471A1 (en) Virtual object control method and apparatus, device, and readable storage medium
CN110507994B (en) Method, device, equipment and storage medium for controlling flight of virtual aircraft
CN111408133B (en) Interactive property display method, device, terminal and storage medium
CN110585710B (en) Interactive property control method, device, terminal and storage medium
CN110613938B (en) Method, terminal and storage medium for controlling virtual object to use virtual prop
CN111414080B (en) Method, device and equipment for displaying position of virtual object and storage medium
CN110694273A (en) Method, device, terminal and storage medium for controlling virtual object to use prop
CN110917623B (en) Interactive information display method, device, terminal and storage medium
CN113398571B (en) Virtual item switching method, device, terminal and storage medium
CN112870715B (en) Virtual item putting method, device, terminal and storage medium
CN111589150A (en) Control method and device of virtual prop, electronic equipment and storage medium
CN112138384A (en) Using method, device, terminal and storage medium of virtual throwing prop
CN111389005A (en) Virtual object control method, device, equipment and storage medium
CN111475029A (en) Operation method, device, equipment and storage medium of virtual prop
CN113289331A (en) Display method and device of virtual prop, electronic equipment and storage medium
CN112221142A (en) Control method and device of virtual prop, computer equipment and storage medium
CN110898433B (en) Virtual object control method and device, electronic equipment and storage medium
CN110960849B (en) Interactive property control method, device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40023610

Country of ref document: HK

GR01 Patent grant