CN111888764B - Object positioning method and device, storage medium and electronic equipment - Google Patents

Object positioning method and device, storage medium and electronic equipment

Info

Publication number
CN111888764B
CN111888764B (application CN202010761842.4A)
Authority
CN
China
Prior art keywords
virtual object
virtual
prop
target
positioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010761842.4A
Other languages
Chinese (zh)
Other versions
CN111888764A (en)
Inventor
杨金昊
林凌云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010761842.4A
Publication of CN111888764A
Application granted
Publication of CN111888764B
Legal status: Active (current)
Anticipated expiration

Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 - Controlling the output signals based on the game progress
    • A63F13/53 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 - Controlling the output signals based on the game progress
    • A63F13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 - Special adaptations for executing a specific game genre or game mode
    • A63F13/837 - Shooting of targets
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8076 - Shooting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an object positioning method and device, a storage medium, and electronic equipment. The method includes: when a first virtual object controlled by a shooting application client is configured with a positioning sensing prop, acquiring the resource amount of virtual resources accumulated after the first virtual object performs shooting actions within a target time period, where the positioning sensing prop is used to periodically scan for second virtual objects in a target scanning area associated with the first virtual object, and the second virtual object belongs to a different camp from the first virtual object; when the resource amount of the virtual resources reaches a trigger condition, obtaining the position of at least one second virtual object by using the positioning sensing prop; and displaying the position of the second virtual object in a virtual scanning panel provided by the positioning sensing prop. The invention solves the technical problem of low positioning efficiency in the object positioning methods provided by the related art.

Description

Object positioning method and device, storage medium and electronic equipment
Technical Field
The invention relates to the field of computers, in particular to an object positioning method and device, a storage medium and electronic equipment.
Background
In a virtual shooting game application, it is often necessary to determine the position of each second virtual object that competes against the first virtual object currently controlled by the player, so that the opposing second virtual objects can be targeted promptly and accurately to complete a shooting action.
However, in the virtual scene provided by current virtual shooting game applications, the position of a competitor's second virtual object can only be observed within the current player's field of view. For example, when the current player views through the first-person shooting perspective, if a competitor's second virtual object is detected in the upper left corner of the current display interface, it is determined that the second virtual object has appeared ahead in the player's current field of view. A control command can then be obtained through an input device associated with the client to perform a shooting action on that second virtual object.
That is, when a second virtual object appearing during shooting is located purely by visual observation, the inherent reaction delay of human vision means that, by the time the player's eye finds the second virtual object, the first virtual object may already have been hit by it. In other words, the object positioning method provided by the related art suffers from low positioning efficiency.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides an object positioning method and device, a storage medium and electronic equipment, which are used for at least solving the technical problem of low positioning efficiency of the object positioning method provided by the related technology.
According to an aspect of an embodiment of the present invention, there is provided an object positioning method, including: when a first virtual object controlled by a shooting application client is configured with a positioning sensing prop, acquiring the resource amount of virtual resources accumulated after the first virtual object performs shooting actions within a target time period, where the positioning sensing prop is used to periodically scan for second virtual objects in a target scanning area associated with the first virtual object, and the second virtual object and the first virtual object belong to different camps; when the resource amount of the virtual resources reaches a trigger condition, obtaining the position of at least one second virtual object by using the positioning sensing prop; and displaying the position of the second virtual object in a virtual scanning panel provided by the positioning sensing prop.
According to another aspect of the embodiments of the present invention, there is also provided an object positioning apparatus, including: a first obtaining unit, configured to obtain, when a first virtual object controlled by a shooting application client is configured with a positioning sensing prop, the resource amount of virtual resources accumulated after the first virtual object performs shooting actions within a target time period, where the positioning sensing prop is used to periodically scan for second virtual objects in a target scanning area associated with the first virtual object, and the second virtual object and the first virtual object belong to different camps; a second obtaining unit, configured to obtain, by using the positioning sensing prop, the position of at least one second virtual object when the resource amount of the virtual resources reaches a trigger condition; and a positioning display unit, configured to display the position of the second virtual object in a virtual scanning panel provided by the positioning sensing prop.
According to a further aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to execute the above object positioning method when running.
According to still another aspect of the embodiments of the present invention, there is also provided an electronic device, including a memory and a processor, where the memory stores a computer program, and the processor is configured to execute the object positioning method through the computer program.
In the embodiment of the invention, when the first virtual object controlled by the shooting application client is configured with the positioning sensing prop, the resource amount of the virtual resources accumulated after the first virtual object performs shooting actions within a target time period is obtained. When the resource amount reaches a trigger condition, the positioning sensing prop periodically scans for second virtual objects in a target scanning area associated with the first virtual object, so as to obtain the position of at least one second virtual object. The position of the second virtual object is then displayed in a virtual scanning panel provided by the positioning sensing prop in the shooting application client. Because the positioning sensing prop periodically scans for second virtual objects appearing near the first virtual object and promptly displays them in the virtual scanning panel, the positioning efficiency of virtual objects is improved. This makes it easier for the first virtual object to form an action strategy in time and to counterattack or evade at the first opportunity, raising the first virtual object's success rate in shooting game tasks and thus solving the problem of low virtual-object positioning efficiency in the related art.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic diagram of a hardware environment for an alternative object location method according to an embodiment of the present invention;
FIG. 2 is a flow chart of an alternative object location method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an alternative object location method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an alternative object location method according to an embodiment of the invention;
FIG. 5 is a flow chart of another alternative object locating method according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of yet another alternative object location method according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of yet another alternative object location method according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of yet another alternative object location method according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of yet another alternative object location method according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of yet another alternative object location method according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of yet another alternative object location method according to an embodiment of the present invention;
FIG. 12 is a flow chart of yet another alternative object locating method according to an embodiment of the present invention;
FIG. 13 is a schematic diagram of an alternative object-locating device in accordance with an embodiment of the present invention;
FIG. 14 is a schematic structural diagram of an alternative electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the embodiments of the present application, the following technical terms may be used, but are not limited to:
a mobile terminal: generally referred to as the handset side, including but not limited to all handheld portable game devices.
Shooting game: including, but not limited to, first person shooter games, third person shooter games, and the like, all games that use hot arms to conduct remote attacks.
A heartbeat sensor: the heart beat sensor finds the position of a certain object by sensing the electric field generated by the ultra-low frequency electric wave emitted by the human body. The device can be penetrated into a reinforced concrete wall and a steel plate to detect an ultralow frequency electric field emitted by a human body, and is a tactical prop with tactical significance.
According to an aspect of the embodiments of the present invention, there is provided an object positioning method. Optionally, as an optional implementation, the object positioning method may be applied, but is not limited, to an object positioning system in an environment as shown in fig. 1, where the object positioning system may include, but is not limited to, a terminal device 102, a network 104, and a server 106. A shooting application client runs in the terminal device 102. The terminal device 102 includes a human-machine interaction screen 1022, a processor 1024, and a memory 1026. The human-machine interaction screen 1022 is configured to present the virtual scene of a shooting game task executed by the shooting application client, to provide a human-machine interaction interface and receive the human-machine interaction operations performed on it, to present the virtual scanning panel provided by the positioning sensing prop, and to display in that panel the position of a second virtual object that is in a fighting state with the first virtual object controlled by the shooting application client. The processor 1024 is configured to acquire the resource amount of virtual resources accumulated after the first virtual object performs shooting actions within a target time period, and further to acquire the position of at least one second virtual object by using the positioning sensing prop when the resource amount reaches the trigger condition. The memory 1026 is configured to store object attribute information of the first virtual object and the second virtual object, where the object attribute information includes the position of the first virtual object and the position of the second virtual object.
In addition, the server 106 includes a database 1062 and a processing engine 1064, and the database 1062 is used to store attribute information and state information of each virtual object. The processing engine 1064 is configured to obtain a location of the first virtual object and a location of the second virtual object.
The specific process includes the following steps. Assume that a virtual scene picture provided by the shooting application client is displayed in a terminal device (e.g., a mobile terminal) 102, where a first virtual object 12 and a second virtual object 14 are in a fighting relationship as members of different camps, and a virtual scanning panel (shown as a semicircle) is provided by the positioning sensing prop 16.
In steps S102 to S104, when a first virtual object controlled by the shooting application client running in the terminal device 102 is configured with a positioning sensing prop, the resource amount of virtual resources accumulated after the first virtual object performs shooting actions within a target time period is obtained, and this resource amount is sent to the server 106 through the network 104. The positioning sensing prop is used to periodically scan for second virtual objects in a target scanning area associated with the first virtual object, where the second virtual object belongs to a different camp from the first virtual object. The server 106 then retrieves the trigger condition stored in the database 1062 and determines, through the processing engine 1064, whether the received resource amount reaches the trigger condition. If it does, step S106 is executed, and the position of at least one second virtual object is obtained by using the positioning sensing prop configured on the first virtual object. Then, in step S108, the position of the at least one second virtual object is sent to the terminal device 102 through the network 104, so that the terminal device 102 executes step S110 to display the position of the second virtual object in a virtual scanning panel provided by the positioning sensing prop.
In addition, the steps and interaction sequence shown in fig. 1 are merely examples; the operations performed by the server 106 in this embodiment may instead be performed by the terminal device 102. That is, when the processing capability of the terminal device meets a certain condition, the terminal device 102 may complete the object positioning method independently. This is not limited in this embodiment.
It should be noted that, in this embodiment, when the first virtual object controlled by the shooting application client is configured with the positioning sensing prop, the resource amount of the virtual resources accumulated after the first virtual object performs shooting actions within a target time period is obtained. When the resource amount reaches a trigger condition, the positioning sensing prop periodically scans for second virtual objects in a target scanning area associated with the first virtual object, so as to obtain the position of at least one second virtual object. The position of the second virtual object is then displayed in a virtual scanning panel provided by the positioning sensing prop in the shooting application client. Because the positioning sensing prop periodically scans for second virtual objects appearing near the first virtual object and promptly displays them in the virtual scanning panel, the positioning efficiency of virtual objects is improved, the first virtual object can form an action strategy in time and counterattack or evade at the first opportunity, and the first virtual object's success rate in shooting game tasks is raised, solving the problem of low virtual-object positioning efficiency in the related art.
Optionally, in this embodiment, the terminal device may be a terminal device configured with the target client, and may include, but is not limited to, at least one of the following: mobile phones (such as Android phones, iOS phones, etc.), notebook computers, tablet computers, palmtop computers, MIDs (Mobile Internet Devices), PADs, desktop computers, smart televisions, etc. The network may include, but is not limited to, wired networks and wireless networks, where the wired networks include local area networks, metropolitan area networks, and wide area networks, and the wireless networks include Bluetooth, Wi-Fi, and other networks that enable wireless communication. The server may be a single server, a server cluster composed of multiple servers, or a cloud server. The above is merely an example, and this is not limited in this embodiment.
Optionally, as an optional implementation manner, as shown in fig. 2, the object positioning method includes:
s202, under the condition that a first virtual object controlled by a shooting application client is configured with a positioning induction prop, acquiring the resource amount of virtual resources accumulated after the first virtual object executes a shooting action in a target time period, wherein the positioning induction prop is used for regularly scanning a second virtual object in a target scanning area associated with the first virtual object, and the second virtual object and the first virtual object are in different camps;
s204, under the condition that the resource amount of the virtual resources reaches the triggering condition, the position of at least one second virtual object is obtained by using the positioning induction prop;
s206, displaying the position of the second virtual object in a virtual scanning panel provided by the positioning sensing prop.
Optionally, in this embodiment, the object positioning method may be applied, but is not limited, to game applications, for example to visually locating the position of each virtual object participating in a game task so that the player can intuitively and accurately identify virtual objects' positions in the game scene. The game application may be a Multiplayer Online Battle Arena (MOBA) application or a Single-Player Game (SPG) application. The types of game applications may include, but are not limited to, at least one of: Two-dimensional (2D) game applications, Three-dimensional (3D) game applications, Virtual Reality (VR) game applications, Augmented Reality (AR) game applications, and Mixed Reality (MR) game applications. The above is merely an example, and the present embodiment is not limited to this.
Further, the shooting game application may be a Third-Person Shooting (TPS) game application, which runs from the perspective of a third-party virtual object other than the virtual object controlled by the current player, or a First-Person Shooting (FPS) game application, which runs from the perspective of the first virtual object controlled by the current player. Correspondingly, the second virtual object that opposes the first virtual object may be, but is not limited to: a virtual object (also referred to as a player character) controlled by another player through that player's game application client, a Non-Player Character (NPC), and the like. The above is merely an example, and this is not limited in this embodiment.
Optionally, in this embodiment, the positioning sensing prop may be, but is not limited to, used for periodically scanning for second virtual objects in the target scanning area associated with the first virtual object. The positioning sensing prop may be, but is not limited to, a heartbeat sensor simulated in the virtual scene, which locates a second virtual object by sensing the electric field generated by the ultra-low-frequency radio waves that the virtual object's simulated human body emits. It should be noted that these ultra-low-frequency waves can penetrate reinforced concrete walls and steel plates, allowing the ultra-low-frequency electric field emitted by a human-body-simulating virtual object to be detected; the prop is thus a tactical prop that assists the first virtual object in winning.
For example, fig. 3 shows a shooting interface from the first-person shooting perspective of a first virtual object controlled by the shooting application client, where object 300 is a second virtual object in a different camp from the first virtual object. Here, the first virtual object is configured with a positioning sensing prop 302, which provides the virtual scanning panel shown as panel 306; the position of the sensed object 300 is displayed in panel 306, for example at the position of the solid point 304.
Optionally, in this embodiment, the virtual resources accumulated after shooting actions are performed may include, but are not limited to: virtual gold coins for carrying out virtual transactions in the virtual scene, and virtual credits available for exchange. That is, after each completed shooting action, the virtual object acquires an amount of virtual resources corresponding to the degree of completion of that action. For example, the resource amount of the virtual resources accumulated after shooting actions in the target time period may be, but is not limited to, the reward points obtained after the first virtual object achieves consecutive kills. Correspondingly, the trigger condition may be, but is not limited to, whether the resource amount of the virtual resources exceeds a certain threshold. For example, when the resource amount is the reward points obtained after a kill streak, and those points exceed the threshold, the positioning sensing prop may be used to locate second virtual objects in the target scanning area.
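As a non-authoritative illustration of the trigger condition just described: reward points earned from shooting actions inside a sliding time window are compared against a threshold before the prop may be activated. The class name, threshold, and window values below are invented for the sketch; the patent does not give concrete numbers.

```python
from dataclasses import dataclass, field

@dataclass
class KillStreakTracker:
    threshold: float          # trigger condition: required resource amount
    window: float             # target time period, in seconds
    events: list = field(default_factory=list)  # (timestamp, points) pairs

    def record_shot(self, timestamp: float, points: float) -> None:
        # Each completed shooting action contributes some reward points.
        self.events.append((timestamp, points))

    def resource_amount(self, now: float) -> float:
        # Only points earned inside the target time period count.
        return sum(p for t, p in self.events if now - t <= self.window)

    def prop_triggered(self, now: float) -> bool:
        return self.resource_amount(now) >= self.threshold

tracker = KillStreakTracker(threshold=100.0, window=60.0)
tracker.record_shot(10.0, 40.0)
tracker.record_shot(25.0, 70.0)
print(tracker.prop_triggered(now=30.0))  # True: 110 points inside the window
```

Once `prop_triggered` returns true, the scan described in the next sections would begin; before that, the prop stays inactive.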
It should be noted that the target scanning area may be, but is not limited to, an area range configured in advance for the first virtual object in which scanning detection is allowed: a hemispherical space determined by a certain scanning radius with the position of the first virtual object as its center. Here, the hemispherical space is determined from the area in front of the first virtual object's line of sight, regardless of whether that line of sight is raised or lowered. For example, as shown in fig. 4, if the scanning radius is 30 meters (m), then [-30, 30] in the horizontal direction and [0, 30] in the vertical direction constitute the target scanning area associated with the first virtual object. Further, first virtual objects of different character attributes or different levels may be, but are not limited to being, configured with target scanning areas of different ranges. For example, the scanning radius of the target scanning area of a level-1 first virtual object is 5 m, while that of a level-2 first virtual object is 30 m. For another example, the scanning radius of the target scanning area of a first virtual object with the short-range shooter attribute is 3 m, while that of a first virtual object with the long-range shooter attribute is 6 m.
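The hemispherical target scanning area can be illustrated with a simple membership test. This is a sketch, not the patent's implementation: the 30 m radius follows the example above, while the coordinate convention (y as the vertical axis, heights below the first virtual object excluded) is an assumption of this sketch.

```python
import math

def in_target_scan_area(center, candidate, scan_radius=30.0):
    """Is `candidate` inside the hemisphere of `scan_radius` around `center`?"""
    dx = candidate[0] - center[0]
    dy = candidate[1] - center[1]   # vertical axis (assumed convention)
    dz = candidate[2] - center[2]
    if dy < 0:                      # below the hemisphere's base plane
        return False
    return math.sqrt(dx * dx + dy * dy + dz * dz) <= scan_radius

print(in_target_scan_area((0, 0, 0), (10, 5, 10)))   # 15 m away: True
print(in_target_scan_area((0, 0, 0), (25, 0, 25)))   # about 35.4 m away: False
```

A per-level or per-attribute radius, as in the 5 m / 30 m example above, would simply change the `scan_radius` argument.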
Optionally, in this embodiment, at least one second virtual object may be displayed in the virtual scanning panel provided by the positioning sensing prop. That is, all second virtual objects in the enemy camp opposing the first virtual object may be displayed, or only some of them, for example a single second virtual object. That single second virtual object may be, but is not limited to, the second virtual object closest to the first virtual object.
In addition, in this embodiment, while the position of the second virtual object is displayed in the virtual scanning panel, the distance between the second virtual object and the first virtual object may also be indicated. When the distance changes, the presentation of the prompt information is adjusted accordingly, so that the player controlling the first virtual object can intuitively and vividly perceive the change in the second virtual object's position relative to the first virtual object.
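The two display choices above, showing only the nearest second virtual object and varying the prompt with distance, can be sketched as follows. The distance thresholds and style names are invented for illustration; the patent only states that the prompt presentation changes with distance.

```python
import math

def nearest_enemy(player_pos, enemy_positions):
    # Pick the second virtual object closest to the first virtual object.
    return min(enemy_positions, key=lambda p: math.dist(player_pos, p))

def prompt_style(distance, near=10.0, mid=20.0):
    # Hypothetical tiers: closer enemies get a more insistent prompt.
    if distance <= near:
        return "urgent"
    if distance <= mid:
        return "warning"
    return "notice"

enemies = [(5.0, 0.0, 5.0), (20.0, 0.0, 20.0)]
closest = nearest_enemy((0.0, 0.0, 0.0), enemies)
print(closest)                                        # (5.0, 0.0, 5.0)
print(prompt_style(math.dist((0, 0, 0), closest)))    # about 7.07 m: "urgent"
```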
The description is made with specific reference to the following example. Assume the first virtual object and the second virtual object belong to different camps in one shooting game task. When a click operation is performed on a configuration key of the application client to equip the first virtual object with the positioning sensing prop (hereinafter referred to as the heartbeat sensing device), the prop is configured on the first virtual object. It is then detected whether the first virtual object's kill-streak reward points within the target time period reach the threshold; if they do, the heartbeat sensing device is used during its effective time period to scan for and detect second virtual objects in the target scanning area matching the first virtual object.
Specifically, positioning may follow the flowchart shown in fig. 5. In step S502, the static parameters of the virtual scanning panel and the scanning-circle parameters used during scanning are initialized. For example, the static floor radius of the virtual scanning panel may be a radius r (as in panel 306 of fig. 3), and the scanning-circle parameters may specify that the scanning circle expands periodically, with period T, up to a scanning radius of 30 m. Then, in step S504, real-time parameters of the scanning circle are dynamically calculated during scanning: the scanning circle is rendered in front of the first virtual object's line of sight, and the sensing scan expands from 0 until the scanning radius reaches 30 m.
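The per-cycle radius computation of step S504 can be sketched as below. Only the expansion from 0 to the 30 m scanning radius follows the description; the concrete period value and function name are illustrative.

```python
def scan_circle_radius(elapsed: float, period: float = 2.0,
                       max_radius: float = 30.0) -> float:
    """Radius of the expanding scan circle at time `elapsed` (seconds).

    Within each period T the circle grows linearly from 0 to
    `max_radius`, then the scan restarts from the center.
    """
    phase = (elapsed % period) / period   # 0.0 .. 1.0 within the cycle
    return phase * max_radius

print(scan_circle_radius(0.5))   # a quarter of the way through: 7.5
print(scan_circle_radius(2.5))   # second cycle, same phase: 7.5
```

In the patent's setup this value is the quantity recomputed each cycle and handed to the Shader for rendering the sweep.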
That is, the scan circle parameters of the heartbeat sensing device are calculated in each cycle to obtain the radius of the current scan circle, which is then passed to the Shader. In addition, while the heartbeat sensing device is within its effective time period, the enemy second virtual objects may be traversed every T seconds with the position of the first virtual object as the circle center, to detect whether an enemy second virtual object appears within the 30 m target scanning area. If at least one second virtual object is detected, the position of each of the at least one second virtual object and the position of the first virtual object in the virtual scene need to be converted into display positions on the map of the virtual scan panel. Here, the horizontal scanning range [-30, 30] of the target scanning area is mapped to the horizontal mapping coordinate range [0, 1], and the vertical scanning range [0, 30] is mapped to the vertical mapping coordinate range [0, 1]. After the position of the second virtual object is obtained, the coordinate transformation may be implemented by the following function, which maps the second virtual object into the virtual scan panel for display:
drawCall.dynamicMaterial.SetVector(paraID,paraValue);
Here, paraID may identify, respectively, the scanning-radius parameter and the position parameter of the second virtual object, and paraValue carries, respectively, the scanning radius value and the position of the second virtual object.
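The coordinate mapping described above (horizontal scan range [-30, 30] to [0, 1], vertical scan range [0, 30] to [0, 1]) can be sketched as follows. This is a minimal illustration in Python; the function name and the clamping behavior at the panel edge are assumptions, not part of the patent:

```python
def map_to_panel(dx, dz, horizontal_range=(-30.0, 30.0), vertical_range=(0.0, 30.0)):
    """Map an enemy offset (dx, dz) relative to the first virtual object
    into normalized [0, 1] coordinates on the virtual scan panel."""
    h_min, h_max = horizontal_range
    v_min, v_max = vertical_range
    u = (dx - h_min) / (h_max - h_min)  # horizontal: [-30, 30] -> [0, 1]
    v = (dz - v_min) / (v_max - v_min)  # vertical:   [0, 30]  -> [0, 1]
    # Clamp so targets on the edge of the scan area stay on the panel
    return min(max(u, 0.0), 1.0), min(max(v, 0.0), 1.0)
```

An enemy directly ahead at the player's own position would map to the bottom center of the panel, (0.5, 0.0).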
As in steps S506 to S514: it is determined whether a second virtual object is detected, and when a second virtual object is detected, its position is determined and the Shader parameters are set according to that position. It is then detected whether the use cut-off time of the positioning sensing prop (i.e., the heartbeat sensing device) has been reached; if it is determined that the use cut-off time has been reached, the positioning sensing prop (i.e., the heartbeat sensing device) is retracted.
The steps and the sequence shown in fig. 5 are examples, and this embodiment is not limited in this respect.
With the embodiment provided in this application, when the first virtual object controlled by the shooting application client is configured with the positioning sensing prop, the resource amount of the virtual resource accumulated after the first virtual object performs shooting actions within the target time period is obtained. When the resource amount of the virtual resource reaches the trigger condition, the positioning sensing prop is used to periodically scan for a second virtual object in the target scanning area associated with the first virtual object, so as to obtain the position of at least one second virtual object. The position of the second virtual object is then displayed, in the shooting application client, in the virtual scan panel provided by the positioning sensing prop. In this way, a second virtual object appearing near the first virtual object is scanned periodically by the positioning sensing prop and displayed in the virtual scan panel in a timely manner, which improves the positioning efficiency of virtual objects, allows the first virtual object to formulate an action strategy in time and complete a counterattack or evasion at the first opportunity, and increases the success rate of the first virtual object in the shooting game task, thereby solving the problem of low virtual-object positioning efficiency in the related art.
As an optional scheme, the obtaining, by using the positioning sensing prop, a position of the at least one second virtual object includes: scanning a second virtual object appearing in the target scanning area according to a target period to acquire the position of at least one second virtual object; displaying the position of the second virtual object in a virtual scanning panel provided by the positioning sensing prop comprises: and updating the position of the second virtual object displayed in the virtual scanning panel according to the target period.
Optionally, in this embodiment, the positioning sensing prop is a tactical prop that periodically scans for and searches out the second virtual object. Its rendering effect may be, but is not limited to being, displayed by assigning dynamically changeable Shader parameters to the panel (mesh). The positioning sensing prop here may include, but is not limited to, the following: a bottom static panel, a dynamic scan circle, and a position icon at the position of the second virtual object. For example, as shown in fig. 6, the bottom static panel 602 is the base plate of the virtual scan panel, and its static parameters indicate information such as the display position and display area of the virtual scan panel in the virtual scene. The dynamic scan circle 604 indicates the position of the current scan circle (the gray circle) during scanning. It should be noted that the scanning process of the virtual scan panel is a periodic dynamic scan, and in each scanning period the dynamic scan circle 604 performs a sensing sweep from a radius of 0 to the full scanning radius R. The position icon 606 at the position of the second virtual object may be represented by a solid circle. The display mode of the position icon is not limited to this; various figures such as a hollow circle or a triangle, and combinations thereof, may also be used. Further, when the position icon of the second virtual object is a triangle, the orientation of the triangle may also indicate the current orientation of the second virtual object. This is merely an example, and this embodiment is not limited thereto.
For example, assume that the first virtual object is currently using the positioning sensing prop, and its virtual scan panel performs a scan every 3 seconds. On each scan, the position of any second virtual object within the scanned area and the distance between that second virtual object and the first virtual object are displayed in the panel. In this example, the distance may be accurate to 0.1 m.
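The distance display accurate to 0.1 m can be sketched as follows. This is a minimal illustration in Python; the function name and text format are assumptions:

```python
import math

def formatted_distance(p1, p2):
    """Euclidean distance between two objects' positions, formatted to
    0.1 m precision for display in the virtual scan panel."""
    d = math.dist(p1, p2)
    return f"{d:.1f}m"
```

For instance, an enemy at (3, 4) relative to a player at the origin would be shown as "5.0m".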
With the embodiment provided in this application, in the virtual scene provided by the shooting application client, the positioning sensing prop scans for and detects, according to the target period, enemy second virtual objects located in the target scanning area associated with the first virtual object. A second virtual object approaching the first virtual object can thus be discovered in time, and the first virtual object can be visually prompted to take combat action against the second virtual object, preventing the first virtual object from being killed by a sneak attack.
As an alternative, updating the position of the second virtual object displayed in the virtual scan panel according to the target period includes:
s1, acquiring the position of the first virtual object and the position of the second virtual object in the current target period;
s2, updating the position of the second virtual object displayed on the virtual scan panel in the next target period after the target period when the position of the first virtual object is changed;
s3, when the position of the second virtual object is changed, the position of the second virtual object displayed on the virtual scan panel is updated in a next target period after the target period.
In this embodiment, when it is detected in the current target period that the position of the first virtual object or the position of the second virtual object has changed, the changed position is updated and displayed in the next target period after the current one. That is, the displayed position of the second virtual object does not change in real time, but is refreshed periodically according to the scanning period: when it is determined that the position of the second virtual object has changed, the position is not updated in the current period but in the next period (i.e., on the next scan). When it is determined that the position of the first virtual object has changed, the static parameters of the bottom static panel of the virtual scan panel need to be updated, the virtual scan panel displayed in the shooting application client is reinitialized, and the position of the corresponding second virtual object is updated at the same time.
The description is made with reference to the example shown in fig. 7. Assume that in the current target period T the second virtual object 606 is displayed at the outer edge of the target scanning area, as shown in fig. 7(a). The second virtual object 606 is moving, but its position in the virtual scan panel is maintained until the next target period T+1. In the next target period T+1, it is detected that the position of the second virtual object 606 has changed, and it has moved to the middle of the target scanning area, as shown in fig. 7(b).
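The deferred-update behavior of steps S1 to S3 above can be sketched as follows. This is a simplified illustration in Python; the class and method names are assumptions, not identifiers from the patent:

```python
class ScanPanel:
    """Positions shown on the panel refresh once per target period,
    not in real time."""

    def __init__(self):
        self.displayed = {}  # object id -> position currently shown
        self.pending = {}    # latest scanned positions, applied next period

    def on_scan(self, obj_id, position):
        # A position change detected during the current period is buffered...
        self.pending[obj_id] = position

    def on_period_end(self):
        # ...and only becomes visible at the next period boundary.
        self.displayed.update(self.pending)
        self.pending.clear()
```

Between period boundaries the icon therefore stays put even while the enemy keeps moving, matching the fig. 7 example.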
With the embodiment provided in this application, the position of the second virtual object displayed in the virtual scan panel is updated according to the target period, which avoids the display stuttering and inaccurate positioning that would be caused by a second virtual object moving too fast.
As an alternative, updating the position of the second virtual object displayed in the virtual scan panel according to the target period includes:
s1, acquiring the orientation of the first virtual object in real time in the current target period;
s2, when the orientation of the first virtual object changes, the position of the second virtual object displayed on the virtual scan panel is kept unchanged.
The description is made with reference to the example shown in fig. 8. Assume that in the current target period T the second virtual object 606 is displayed at the outer edge on the right side of the target scanning area, as shown in fig. 8(a). If the orientation of the first virtual object changes within the target period T, for example by turning to the right, then although the relative positions of the second virtual object and the first virtual object do not change, the orientation of the positioning sensing prop changes and the corresponding sensing model changes with it. Consequently, the position of the second virtual object 606 in the virtual scan panel changes, and it is displayed at a position close to the central axis, as shown in fig. 8(b).
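The effect described in fig. 8 amounts to expressing the enemy's offset in the first virtual object's view-aligned frame: when the player turns, the icon moves even though neither object has changed world position. A minimal sketch in Python, under the assumption of a 2D rotation about the player's heading (names are illustrative):

```python
import math

def to_local_frame(enemy_pos, player_pos, player_heading_rad):
    """Express an enemy's world position in the first virtual object's
    view-aligned frame, so 'forward' always points up on the panel."""
    dx = enemy_pos[0] - player_pos[0]
    dz = enemy_pos[1] - player_pos[1]
    c, s = math.cos(player_heading_rad), math.sin(player_heading_rad)
    # Rotate the offset by -heading: turning the player re-maps the icon
    return (c * dx + s * dz, -s * dx + c * dz)
```

With heading 0 the offset is unchanged; after the player turns, the same world-space enemy lands at a different panel position.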
With the embodiment provided in this application, when the orientation of the first virtual object changes, the scanned position of the second virtual object displayed in the virtual scan panel changes accordingly, so that the position of the second virtual object relative to the line-of-sight direction of the first virtual object is truly reflected.
As an alternative, updating the position of the second virtual object displayed in the virtual scan panel according to the target period includes:
and S1, adjusting the brightness of the position icon at the position of the second virtual object within the target period, wherein the brightness is negatively correlated with the display duration of the position icon (the longer the icon has been displayed, the dimmer it becomes).
Optionally, to prevent the position scanned in the current target period from causing display interference with the position scanned in the next target period, in this embodiment, the longer the display duration of the position icon, the dimmer the icon becomes, until the end of the target period, at which point the icon disappears from the virtual scan panel.
The description is made with reference to the example shown in fig. 9. Assume that at the initial time t0 of the current target period T, the position of the second virtual object 606 is displayed at the right outer edge of the target scanning area, as shown in fig. 9(a), and the brightness value of its position icon is P1, which may be represented in the figure by a solid black dot. As the display duration within the target period T elapses, assuming the position of the second virtual object does not change, the brightness of the corresponding position icon dims. Assuming the brightness value of the position icon is P2 at time t1 of the target period, the icon may be drawn with a dotted fill as shown in fig. 9(b), where brightness P1 > brightness P2.
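The fade of the position icon can be sketched as follows. This is a minimal illustration in Python; the linear fade curve and the function name are assumptions, since the patent only specifies that brightness dims with display duration and the icon disappears at the end of the period:

```python
def icon_brightness(elapsed, period, initial=1.0):
    """Brightness of a position icon: fades from its initial value toward
    0 over one target period, after which the icon disappears."""
    if elapsed >= period:
        return 0.0  # icon removed at the end of the target period
    return initial * (1.0 - elapsed / period)
```

This guarantees P1 > P2 for any later sample time within the same period, matching the fig. 9 example.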
With the embodiment provided in this application, the elapsing display duration within the target period is indicated through the differing brightness of the position icon, while display interference with the position scanned in the next target period is avoided.
As an optional scheme, when the position of the second virtual object displayed in the virtual scanning panel is updated according to the target period, the method further includes:
s1, acquiring the distance between the second virtual object and the first virtual object;
s2, a presentation information matching the distance is presented in the virtual scan panel.
Optionally, in this embodiment, prompting, in the virtual scan panel, the prompt information matching the distance includes:
s21, displaying the prompt information according to the first prompt mode under the condition that the distance is less than the first distance threshold;
s22, when the distance is larger than or equal to the first distance threshold, displaying the prompting information according to a second prompting mode;
wherein the first prompting mode is more prominent than the second prompting mode.
Specifically, with reference to fig. 10, assume that in the current target period T the second virtual object 606 is displayed at a position close to the central axis, as shown in fig. 10. If the calculated distance between the second virtual object and the first virtual object is 3 m, the prompt information corresponding to this distance may be prompted directly in the virtual scan panel.
In addition, the prompt may be presented in different modes according to the result of comparing the distance with a threshold. For example, when the distance is not less than 5 m, the distance is displayed in white characters, and when the distance is less than 5 m, it is displayed in red characters. Alternatively, when the distance is not less than 5 m, the distance is displayed in thin-stroke characters, and when the distance is less than 5 m, in bold-stroke characters. That is, the closer the second virtual object is to the first virtual object, the more prominent and intuitive the prompt mode becomes.
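The threshold comparison above can be sketched as follows. This is a minimal illustration in Python using the 5 m threshold and the white/red example from the text; the function name and style dictionary are assumptions:

```python
def prompt_style(distance, threshold=5.0):
    """Pick the prompt mode for the distance text: a more prominent
    style when the enemy is inside the threshold, a plain style
    otherwise (first vs. second prompting mode)."""
    if distance < threshold:
        return {"color": "red", "weight": "bold"}    # first, prominent mode
    return {"color": "white", "weight": "normal"}    # second, plain mode
```

A 3 m distance would thus be rendered in the prominent style, and a 7.2 m distance in the plain one.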
With the embodiment provided in this application, the distance between the second virtual object and the first virtual object is visually prompted in the virtual scan panel, and different prompt modes may be used for different distances, so that the first virtual object can take responsive actions such as counterattack or evasion in time according to the prompt information, avoiding being shot and thereby improving the win rate of the first virtual object.
As an optional solution, when the position of the second virtual object is displayed in the virtual scanning panel provided by the positioning sensing prop, the method further includes:
and S1, when it is determined that the first virtual object is hit by a target shooting prop of the second virtual object, adjusting, within the effective time period of the target shooting prop, the first picture displayed in the virtual scan panel to a second picture, wherein the position of the second virtual object is displayed in the first picture, and in the second picture the position of the second virtual object is blurred or not displayed.
Optionally, in this embodiment, in the action time period of the target shooting prop, adjusting the first screen displayed in the virtual scanning panel to the second screen includes:
in the action time period of the target shooting prop, the virtual scanning panel is adjusted and displayed to be one of the following picture contents: the system comprises a black screen, a white screen and a mosaic, wherein when the action time period of the target shooting prop reaches the end time, the position of at least one second virtual object is obtained again, and the position of the second virtual object is displayed.
Optionally, in this embodiment, the target shooting prop may be, but is not limited to, a shooting prop among several with different action distances or different action effects, and different picture contents are displayed for different shooting props.
For example, the above-described target shooting prop may include the following examples:
1) When the first virtual object is hit by a concussion grenade thrown by the second virtual object while using the positioning sensing prop (heartbeat sensing device), the virtual scan panel provided by the positioning sensing prop (heartbeat sensing device) shows a black screen for the duration of the concussion effect. After the concussion effect of the grenade reaches its end time, the positioning sensing prop (heartbeat sensing device) resumes its sensing scan.
2) When the first virtual object is hit by an Electromagnetic Pulse (EMP) emitted by the second virtual object while using the positioning sensing prop (heartbeat sensing device), the virtual scan panel provided by the positioning sensing prop (heartbeat sensing device) shows a mosaic for the duration of the EMP effect, as in the hatched area shown in fig. 11. After the EMP effect reaches its end time, the positioning sensing prop (heartbeat sensing device) resumes its sensing scan.
3) When the first virtual object is hit by a flash grenade thrown by the second virtual object while using the positioning sensing prop (heartbeat sensing device), then owing to the inherent effect of the flash grenade, the Head-Up Display (HUD) corresponding to the virtual scan panel provided by the positioning sensing prop (heartbeat sensing device) turns white. The positioning sensing prop (heartbeat sensing device) can still continue scanning, but positions cannot continue to be displayed while the HUD is white.
4) In the case that the second virtual object is equipped with a specific item or a specific skill, the sensing scan of the positioning sensing item (heartbeat sensing device) may be avoided, so that the position of the second virtual object cannot be displayed in the virtual scan panel provided by the positioning sensing item (heartbeat sensing device).
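The four interference cases above can be summarized as a lookup from the hostile prop to the panel behavior. This is a simplified sketch in Python; the key names are illustrative, not identifiers from the patent:

```python
# Simplified mapping of the interference examples above
PANEL_EFFECTS = {
    "concussion_grenade": {"panel": "black_screen", "positions_shown": False},
    "emp":                {"panel": "mosaic",       "positions_shown": False},
    "flash_grenade":      {"panel": "white_screen", "positions_shown": False},
    "stealth_equipment":  {"panel": "normal",       "positions_shown": False},
}

def panel_state(prop, in_effect):
    """Panel content while a hostile prop or skill is (or is not) in effect;
    once the effect ends, normal position display resumes."""
    if in_effect and prop in PANEL_EFFECTS:
        return PANEL_EFFECTS[prop]
    return {"panel": "normal", "positions_shown": True}
```

When the effect's end time is reached, `panel_state(prop, False)` falls back to the normal display and positions are obtained again.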
With the embodiment provided in this application, when the first virtual object is hit by a target shooting prop of the second virtual object while using the positioning sensing prop, the content displayed in the virtual scan panel provided by the positioning sensing prop is correspondingly interfered with, showing a black screen, a white screen, a mosaic, or even no content at all. This realistically simulates being attacked while using the positioning sensing prop in the virtual scene, improving the simulation effect.
As an optional solution, when the position of the second virtual object is displayed in the virtual scanning panel provided by the positioning sensing prop, the method further includes:
and S1, under the condition that a retraction instruction for indicating retraction of the positioning induction prop is obtained, the positioning induction prop is retracted into a prop storage space corresponding to the first virtual object, and the virtual scanning panel is hidden, wherein the positioning induction prop in the prop storage space is in a suspended use state.
Optionally, in this embodiment, before the receiving the location sensing prop into the prop storage space corresponding to the first virtual object, the method further includes:
1) under the condition that the use stop time of the positioning induction prop is reached, triggering and generating a retraction instruction;
2) under the condition that the operation executed on a retraction operation area in the shooting application client is obtained, a retraction instruction is triggered and generated;
3) and under the condition of acquiring the operation executed on the shooting operation area in the shooting application client, triggering and generating a shooting instruction for controlling the first virtual object to execute the shooting action, and triggering a retraction instruction at the same time.
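The three trigger conditions above can be sketched as a single predicate. This is a minimal illustration in Python; the parameter names are assumptions:

```python
def should_retract(now, stop_time, retract_area_operated, shooting_operated):
    """A retraction instruction is generated when any of the three
    conditions holds: the use stop time is reached, the retraction
    operation area is operated, or a shooting operation is performed."""
    return now >= stop_time or retract_area_operated or shooting_operated
```

In the shooting-operation case, the retraction instruction would be triggered alongside the shooting instruction, switching the held prop directly.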
It should be noted that, in this embodiment, the positioning sensing prop is provided with a use duration. When the use end time is reached, a retraction instruction is triggered, and in response to the retraction instruction the positioning sensing prop is received into the virtual backpack held by the first virtual object, for continued use next time.
Specifically, the description may be combined with steps S1202 to S1212 shown in fig. 12. During the running of the shooting game task, assume that in step S1202 the first virtual object clicks and selects the prop icon corresponding to the positioning sensing prop. In step S1204 it is detected whether the positioning sensing prop is available; if it is determined to be in the available state, step S1206 is executed to detect, using the positioning sensing prop, a second virtual object in the target scanning area associated with the first virtual object. If a second virtual object is detected, then in step S1208 the virtual scan panel provided by the positioning sensing prop is presented in the shooting application client; this virtual scan panel is used to display the position of the second virtual object. Further, in steps S1210 and S1212, it is detected whether the use cut-off time of the positioning sensing prop has been reached; when it is determined to have been reached, a retraction instruction is triggered and the positioning sensing prop is retracted.
In addition, in this embodiment, a retraction operation area for retracting the positioning sensing prop is further provided in the shooting application client, and the corresponding retraction operation may include, but is not limited to, clicking and sliding. For example, when a retraction key is provided in the retraction operation area and a click operation on the retraction key is detected, a retraction instruction may be triggered, and in response to it the positioning sensing prop is received into the virtual backpack held by the first virtual object, for continued use next time. If a sliding area is provided in the retraction operation area, the retraction instruction may be triggered when a sliding operation along the trajectory indicated by the sliding area is detected.
Furthermore, when an operation on the shooting operation area provided by the shooting application client is detected, a retraction instruction may be triggered, and the positioning sensing prop currently held by the first virtual object may be stored directly into the virtual backpack held by the first virtual object. The shooting operation may include, but is not limited to: a firing operation, an aiming action, a shooting-prop switching operation, and the like. In this way, the state of holding the positioning sensing prop can be adjusted directly to the holding state corresponding to the shooting prop.
With the embodiment provided in this application, the currently held positioning sensing prop can be retracted through different operation modes, so that the first virtual object resumes the shooting state as early as possible. This saves the time otherwise spent switching props, improves prop-switching efficiency, and at the same time enriches the prop-switching modes.
As an alternative, displaying the position of the second virtual object in the virtual scanning panel provided by the positioning sensing prop includes:
1) when at least two second virtual objects are included in the target scanning area, displaying the positions of the at least two second virtual objects in the virtual scan panel; or,
2) when at least two second virtual objects are included in the target scanning area, obtaining the second virtual object closest to the first virtual object among the at least two second virtual objects, and displaying the position of that closest second virtual object in the virtual scan panel.
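The two display options above can be sketched as follows. This is a minimal illustration in Python; the function name and the `show_all` flag are assumptions:

```python
import math

def positions_to_display(player_pos, enemy_positions, show_all=True):
    """Option 1: show every scanned enemy position; option 2: show only
    the enemy nearest to the first virtual object, to save panel space."""
    if show_all or len(enemy_positions) <= 1:
        return list(enemy_positions)
    nearest = min(enemy_positions, key=lambda p: math.dist(player_pos, p))
    return [nearest]
```

With two enemies at distances 5 m and about 1.4 m, option 2 keeps only the closer one.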
With the embodiment provided in this application, when the display space of the virtual scan panel allows, the positions of all enemy second virtual objects can be displayed, achieving comprehensive monitoring of the positions of the second virtual objects. Alternatively, only the second virtual object closest to the first virtual object may be displayed, saving the display space of the virtual scan panel.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
According to another aspect of the embodiments of the present invention, there is also provided an object positioning apparatus for implementing the above object positioning method. As shown in fig. 13, the apparatus includes:
1) a first obtaining unit 1302, configured to obtain, when a first virtual object controlled by a shooting application client is configured with a positioning sensing prop, a resource amount of a virtual resource accumulated after the first virtual object performs a shooting action in a target time period, where the positioning sensing prop is used to periodically scan a second virtual object in a target scanning area associated with the first virtual object, and the second virtual object and the first virtual object are different camps;
2) a second obtaining unit 1304, configured to obtain, by using the positioning sensing prop, a position of at least one second virtual object when the resource amount of the virtual resource reaches the trigger condition;
3) and a positioning display unit 1306, configured to display a position of the second virtual object in a virtual scanning panel provided by the positioning sensing prop.
Here, for the specific embodiment of the object positioning apparatus, reference may be made to the above method embodiment, which is not described herein again.
According to another aspect of the embodiment of the present invention, there is also provided an electronic device for implementing the object positioning method, where the electronic device may be the terminal device or the server shown in fig. 1. The present embodiment takes the electronic device as an example for explanation. As shown in fig. 14, the electronic device comprises a memory 1402 and a processor 1404, the memory 1402 having stored therein a computer program, the processor 1404 being arranged to execute the steps of any of the method embodiments described above by means of the computer program.
Optionally, in this embodiment, the electronic device may be located in at least one network device of a plurality of network devices of a computer network.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, under the condition that a first virtual object controlled by a shooting application client is configured with a positioning induction prop, acquiring the resource amount of virtual resources accumulated after the first virtual object executes a shooting action in a target time period, wherein the positioning induction prop is used for regularly scanning a second virtual object in a target scanning area associated with the first virtual object, and the second virtual object and the first virtual object are different in formation;
s2, under the condition that the resource amount of the virtual resource reaches the triggering condition, the position of at least one second virtual object is obtained by using the positioning induction prop;
and S3, displaying the position of the second virtual object in the virtual scanning panel provided by the positioning sensing prop.
Alternatively, as can be understood by those skilled in the art, the structure shown in fig. 14 is only illustrative, and the electronic device may also be a terminal device such as a smartphone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, or the like. Fig. 14 does not limit the structure of the electronic device; for example, the electronic device may include more or fewer components (e.g., network interfaces) than shown in fig. 14, or have a configuration different from that shown in fig. 14.
The memory 1402 may be used to store software programs and modules, such as program instructions/modules corresponding to the object location method and apparatus in the embodiments of the present invention, and the processor 1404 executes various functional applications and data processing by running the software programs and modules stored in the memory 1402, so as to implement the object location method. Memory 1402 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1402 may further include memory located remotely from the processor 1404, which may be connected to a terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The memory 1402 may be specifically, but not limited to, used to store resource information of corresponding virtual resources of the first virtual object and the second virtual object, and object attribute information, and other information. As an example, as shown in fig. 14, the memory 1402 may include, but is not limited to, a first obtaining unit 1302, a second obtaining unit 1304, and a positioning display unit 1306 in the object positioning apparatus. In addition, other module units in the object positioning apparatus may also be included, but are not limited to these, and are not described in detail in this example.
Optionally, the transmitting device 1406 is used for receiving or sending data via a network. Examples of the network may include a wired network and a wireless network. In one example, the transmission device 1406 includes a Network adapter (NIC) that can be connected to a router via a Network cable and other Network devices to communicate with the internet or a local area Network. In one example, the transmitting device 1406 is a Radio Frequency (RF) module, which is used to communicate with the internet by wireless means.
In addition, the electronic device further includes: a display 1408 for displaying the virtual scanning panel provided by the positioning sensing prop; and a connection bus 1410 for connecting the respective module parts of the above electronic device.
In other embodiments, the terminal device or the server may be a node in a distributed system, where the distributed system may be a blockchain system, and the blockchain system may be a distributed system formed by a plurality of nodes connected through network communication. The nodes may form a Peer-To-Peer (P2P) network, and any type of computing device, such as a server, a terminal, or other electronic device, can become a node in the blockchain system by joining the peer-to-peer network.
According to an aspect of the application, a computer program product or computer program is provided, which comprises computer instructions stored in a computer-readable storage medium. The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the object positioning method. The computer program is arranged to perform the steps of any of the above method embodiments when executed.
Alternatively, in the present embodiment, the above-mentioned computer-readable storage medium may be configured to store a computer program for executing the steps of:
S1, under the condition that a first virtual object controlled by a shooting application client is configured with a positioning sensing prop, acquiring the resource amount of virtual resources accumulated after the first virtual object executes a shooting action in a target time period, wherein the positioning sensing prop is used for periodically scanning a second virtual object in a target scanning area associated with the first virtual object, and the second virtual object and the first virtual object belong to different camps;
S2, under the condition that the resource amount of the virtual resources reaches the trigger condition, obtaining the position of at least one second virtual object by using the positioning sensing prop;
and S3, displaying the position of the second virtual object in the virtual scanning panel provided by the positioning sensing prop.
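As a non-limiting illustration of steps S1 and S2 above (the display step S3 is left to the rendering layer), the trigger condition and the camp-filtered area scan could be sketched as follows; all names here (VirtualObject, ScannerProp, the trigger threshold of 100) are hypothetical and are not part of the claimed implementation:

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    camp: str            # camp/faction identifier
    position: tuple      # (x, y) in scene coordinates

@dataclass
class ScannerProp:
    """Illustrative stand-in for the positioning sensing prop."""
    owner: VirtualObject
    scan_radius: float   # radius of the target scanning area

    def scan(self, objects):
        # Return positions of enemy objects inside the target scanning area.
        ox, oy = self.owner.position
        hits = []
        for obj in objects:
            if obj.camp == self.owner.camp:
                continue  # skip allies: only second virtual objects (different camp)
            dx, dy = obj.position[0] - ox, obj.position[1] - oy
            if (dx * dx + dy * dy) ** 0.5 <= self.scan_radius:
                hits.append(obj.position)
        return hits

def locate_enemies(prop, all_objects, resource_amount, trigger=100):
    # S1/S2: scan only when the accumulated resource amount
    # reaches the trigger condition.
    if resource_amount < trigger:
        return []
    return prop.scan(all_objects)
```

In this sketch the periodic re-scan of the claims would simply call `locate_enemies` once per target period and push the result to the panel.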
Alternatively, in this embodiment, a person skilled in the art may understand that all or part of the steps in the methods of the foregoing embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium. The storage medium may include: a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical functional division, and there may be other divisions in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units or modules, and may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and these modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (12)

1. An object positioning method, comprising:
under the condition that a first virtual object controlled by a shooting application client is configured with a positioning sensing prop, acquiring the resource amount of virtual resources accumulated after the first virtual object executes a shooting action in a target time period, wherein the positioning sensing prop is used for periodically scanning a second virtual object in a target scanning area associated with the first virtual object, and the second virtual object and the first virtual object belong to different camps;
under the condition that the resource amount of the virtual resources reaches a trigger condition, scanning the second virtual objects appearing in the target scanning area according to a target period to obtain the position of at least one second virtual object;
updating the position of the second virtual object displayed in a virtual scanning panel provided by the positioning sensing prop according to the target period;
adjusting the brightness of a position icon at the position of the second virtual object in the target period, wherein the brightness is positively correlated with the display duration of the position icon;
and under the condition that the first virtual object is determined to be hit by the target shooting prop of the second virtual object, adjusting a first picture displayed in the virtual scanning panel to a second picture within the action time period of the target shooting prop, wherein the position of the second virtual object is displayed in the first picture, and the position of the second virtual object is blurred or not displayed in the second picture.
2. The method of claim 1, wherein the updating the position of the second virtual object displayed in the virtual scanning panel according to the target period comprises:
acquiring the position of the first virtual object and the position of the second virtual object in the current target period;
when the position of the first virtual object is changed, updating the position of the second virtual object displayed in the virtual scanning panel in the next target period after the target period;
and updating the position of the second virtual object displayed in the virtual scanning panel in the next target period after the target period when the position of the second virtual object is changed.
3. The method of claim 1, wherein the updating the position of the second virtual object displayed in the virtual scanning panel according to the target period comprises:
acquiring the orientation of the first virtual object in real time in the current target period;
when the orientation of the first virtual object is changed, keeping the position of the second virtual object displayed in the virtual scanning panel unchanged.
4. The method according to claim 1, wherein when the position of the second virtual object displayed in the virtual scanning panel is updated according to the target period, the method further comprises:
acquiring the distance between the second virtual object and the first virtual object;
and prompting the prompting information matched with the distance in the virtual scanning panel.
5. The method of claim 4, wherein the prompting the prompt information matched with the distance in the virtual scanning panel comprises:
under the condition that the distance is smaller than a first distance threshold value, displaying the prompt information according to a first prompt mode;
displaying the prompt information according to a second prompt mode under the condition that the distance is greater than or equal to the first distance threshold;
wherein the first prompting mode is more prominent than the second prompting mode.
6. The method of claim 1, wherein the adjusting the first picture displayed in the virtual scanning panel to the second picture within the action time period of the target shooting prop comprises:
in the action time period of the target shooting prop, adjusting the content displayed in the virtual scanning panel to one of the following: a black screen, a white screen, or a mosaic; wherein, when the action time period of the target shooting prop reaches its end time, the position of at least one second virtual object is obtained again, and the position of the second virtual object is displayed.
7. The method of claim 1, wherein after the displaying the position of the second virtual object in the virtual scanning panel provided by the positioning sensing prop, the method further comprises:
under the condition that a retraction instruction for indicating retraction of the positioning sensing prop is acquired, retracting the positioning sensing prop into a prop storage space corresponding to the first virtual object, and hiding the virtual scanning panel, wherein the positioning sensing prop in the prop storage space is in a suspended use state.
8. The method of claim 7, wherein before the retracting the positioning sensing prop into the prop storage space corresponding to the first virtual object, the method further comprises:
triggering and generating the retraction instruction when the use ending time of the positioning sensing prop is reached;
under the condition that the operation executed on a retraction operation area in the shooting application client is obtained, triggering and generating the retraction instruction;
and under the condition of obtaining the operation executed on the shooting operation area in the shooting application client, triggering and generating a shooting instruction for controlling the first virtual object to execute the shooting action, and triggering the retraction instruction at the same time.
9. The method of any one of claims 1 to 8, wherein the displaying the position of the second virtual object in the virtual scanning panel provided by the positioning sensing prop comprises:
under the condition that at least two second virtual objects are included in the target scanning area, displaying the positions of the at least two second virtual objects in the virtual scanning panel; or,
and under the condition that at least two second virtual objects are included in the target scanning area, acquiring a second virtual object which is closest to the first virtual object in the at least two second virtual objects, and displaying the position of the second virtual object which is closest to the first virtual object in the virtual scanning panel.
10. An object positioning device, comprising:
a first obtaining unit, configured to obtain, when a first virtual object controlled by a shooting application client is configured with a positioning sensing prop, a resource amount of a virtual resource accumulated after a shooting action is performed by the first virtual object within a target time period, where the positioning sensing prop is used to periodically scan a second virtual object in a target scanning area associated with the first virtual object, and the second virtual object and the first virtual object are different camps;
a second obtaining unit, configured to scan, according to a target period, the second virtual object appearing in the target scanning area when a resource amount of the virtual resource reaches a trigger condition, so as to obtain a position where at least one second virtual object is located;
the positioning display unit is used for updating the position of the second virtual object displayed in the virtual scanning panel provided by the positioning sensing prop according to the target period; adjusting the brightness of a position icon at the position of the second virtual object in the target period, wherein the brightness is positively correlated with the display duration of the position icon; and under the condition that the first virtual object is determined to be hit by the target shooting prop of the second virtual object, adjusting a first picture displayed in the virtual scanning panel to a second picture within the action time period of the target shooting prop, wherein the position of the second virtual object is displayed in the first picture, and the position of the second virtual object is blurred or not displayed in the second picture.
11. A computer-readable storage medium comprising a stored program, wherein the program when executed performs the method of any of claims 1 to 9.
12. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method of any of claims 1 to 9 by means of the computer program.
CN202010761842.4A 2020-07-31 2020-07-31 Object positioning method and device, storage medium and electronic equipment Active CN111888764B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010761842.4A CN111888764B (en) 2020-07-31 2020-07-31 Object positioning method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010761842.4A CN111888764B (en) 2020-07-31 2020-07-31 Object positioning method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111888764A CN111888764A (en) 2020-11-06
CN111888764B true CN111888764B (en) 2022-02-22

Family

ID=73183044

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010761842.4A Active CN111888764B (en) 2020-07-31 2020-07-31 Object positioning method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111888764B (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5548177B2 (en) * 2011-09-28 2014-07-16 株式会社コナミデジタルエンタテインメント Game device and program
CN110448907B (en) * 2019-08-16 2020-12-01 腾讯科技(深圳)有限公司 Method and device for displaying virtual elements in virtual environment and readable storage medium
CN110433493B (en) * 2019-08-16 2023-05-30 腾讯科技(深圳)有限公司 Virtual object position marking method, device, terminal and storage medium
CN110882538B (en) * 2019-11-28 2021-09-07 腾讯科技(深圳)有限公司 Virtual living character display method, device, storage medium and computer equipment
CN111111191B (en) * 2019-12-26 2021-11-19 腾讯科技(深圳)有限公司 Virtual skill activation method and device, storage medium and electronic device
CN111265869B (en) * 2020-01-14 2022-03-08 腾讯科技(深圳)有限公司 Virtual object detection method, device, terminal and storage medium

Also Published As

Publication number Publication date
CN111888764A (en) 2020-11-06

Similar Documents

Publication Publication Date Title
US11504620B2 (en) Method for controlling game character and electronic device and computer storage medium
US11975262B2 (en) Information processing method and apparatus, electronic device, and storage medium
US10500484B2 (en) Information processing method and apparatus, storage medium, and electronic device
CN113457150B (en) Information prompting method and device, storage medium and electronic equipment
US20240100416A1 (en) Method for processing information and terminal device and non-transitory computer-readable storage medium
CN111228802B (en) Information prompting method and device, storage medium and electronic device
US20210086076A1 (en) Image processing method and apparatus
CN112107858B (en) Prop control method and device, storage medium and electronic equipment
CN112107857B (en) Control method and device of virtual prop, storage medium and electronic equipment
CN111389005B (en) Virtual object control method, device, equipment and storage medium
US11628365B2 (en) Information processing system, storage medium, information processing apparatus and information processing method
CN110898430B (en) Sound source positioning method and device, storage medium and electronic device
CN111249726B (en) Operation method, device, equipment and readable medium of virtual prop in virtual environment
CN113457133B (en) Game display method, game display device, electronic equipment and storage medium
CN111389000A (en) Using method, device, equipment and medium of virtual prop
CN111330278A (en) Animation playing method, device, equipment and medium based on virtual environment
CN111265861A (en) Display method and device of virtual prop, storage medium and electronic device
CN111167124A (en) Virtual prop obtaining method and device, storage medium and electronic device
CN112121428B (en) Control method and device for virtual character object and storage medium
CN111888764B (en) Object positioning method and device, storage medium and electronic equipment
CN112107856A (en) Hit feedback method and device, storage medium and electronic equipment
CN116920374A (en) Virtual object display method and device, storage medium and electronic equipment
CN111589113B (en) Virtual mark display method, device, equipment and storage medium
CN111359214B (en) Virtual item control method and device, storage medium and electronic device
CN114011069A (en) Control method of virtual object, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant