CN111589113B - Virtual mark display method, device, equipment and storage medium - Google Patents


Info

Publication number
CN111589113B
CN111589113B (application CN202010352056.9A)
Authority
CN
China
Prior art keywords
virtual
virtual object
controlled
target
mark
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010352056.9A
Other languages
Chinese (zh)
Other versions
CN111589113A (en)
Inventor
焦雍容
徐戊元
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010352056.9A priority Critical patent/CN111589113B/en
Publication of CN111589113A publication Critical patent/CN111589113A/en
Application granted granted Critical
Publication of CN111589113B publication Critical patent/CN111589113B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/53: Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/50: Features of games using an electronically generated display having two or more dimensions, characterized by details of game servers
    • A63F2300/53: Features of games using an electronically generated display having two or more dimensions, characterized by details of game servers; details of basic data processing

Abstract

The application discloses a method, an apparatus, a device, and a storage medium for displaying a virtual mark, belonging to the field of computer technologies. The method includes: in response to antagonistic behavior occurring between a controlled virtual object and a first virtual object in a virtual scene, determining a residual capacity value of the first virtual object, where the controlled virtual object is a virtual object controlled by an end user; in response to the residual capacity value meeting a first target condition, acquiring a virtual mark of the controlled virtual object, where the virtual mark indicates that the first virtual object has been defeated by the controlled virtual object; and displaying the virtual mark at a target position in the virtual scene. With the method provided by the embodiments of the application, after a user completes a certain virtual event, the virtual mark can be set near the first virtual object. Other users can observe the virtual mark, learn that the virtual event has been completed, and determine which user completed it from the style of the virtual mark, thereby improving the efficiency of human-computer interaction.

Description

Virtual mark display method, device, equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for displaying a virtual mark.
Background
With the development of multimedia technology and the diversification of terminal functions, more and more games can be played on terminals. Among them, the Multiplayer Online Battle Arena (MOBA) game is a popular type: a terminal can display a virtual scene in its interface, together with virtual objects in the virtual scene. During a game, a user can control a virtual object to compete with virtual objects controlled by other users or by the server.
At present, in a MOBA game, after the virtual object controlled by a user completes a certain virtual event, the server generally controls the terminals of all users to display related information at the top of the game interface, or controls the terminals to play a voice notification to all users.
However, such a simple notification method may not let other users learn in time that a certain virtual event has occurred, and a user who still intends to execute that virtual event may act on stale information, which causes operational confusion and low human-computer interaction efficiency.
Disclosure of Invention
The embodiments of the application provide a method, an apparatus, a device, and a storage medium for displaying a virtual mark, which can improve the efficiency of human-computer interaction. The technical solution is as follows:
in one aspect, a method for displaying a virtual mark is provided, the method including:
determining a residual capacity value of a first virtual object in response to a confrontation between a controlled virtual object in a virtual scene and the first virtual object, wherein the controlled virtual object is a virtual object controlled by an end user;
in response to the residual capacity value meeting a first target condition, acquiring a virtual mark of the controlled virtual object, wherein the virtual mark is used for representing that the first virtual object is defeated by the controlled virtual object;
and displaying the virtual mark on the target position of the virtual scene.
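The three claimed steps can be sketched as follows. This is an illustrative reconstruction in Python, not code from the patent; names such as FIRST_TARGET_VALUE, lookup_virtual_mark, and the dictionary-based mark storage are assumptions.

```python
# Hypothetical sketch of the three claimed steps.
FIRST_TARGET_VALUE = 0  # residual capacity value at or below which the first object is defeated (assumed)

def lookup_virtual_mark(user_id, marks):
    # Step 2: acquire the virtual mark bound to the controlling end user.
    return marks.get(user_id)

def on_confrontation(first_object_hp, user_id, marks):
    # Step 1: determine the residual capacity value of the first virtual object.
    if first_object_hp > FIRST_TARGET_VALUE:
        return None  # first target condition not met; nothing to display
    # Step 3: the caller would display the returned mark at the target position.
    return lookup_virtual_mark(user_id, marks)

marks = {"user_42": "crown_hat"}
print(on_confrontation(0, "user_42", marks))   # defeated: mark acquired
print(on_confrontation(55, "user_42", marks))  # still alive: no mark
```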
In one aspect, there is provided a display apparatus of a virtual mark, the apparatus including:
a residual capacity value determining unit, configured to determine a residual capacity value of a first virtual object in a virtual scene in response to a countermeasure action occurring between a controlled virtual object and the first virtual object, where the controlled virtual object is a virtual object controlled by an end user;
an obtaining unit, configured to obtain a virtual tag of the controlled virtual object in response to that the remaining capability value meets a first target condition, where the virtual tag is used to indicate that the first virtual object is defeated by the controlled virtual object;
and the display unit is used for displaying the virtual mark on the target position of the virtual scene.
In a possible embodiment, the apparatus further comprises:
and a real-time detection unit, configured to detect in real time whether antagonistic behavior occurs between the controlled virtual object and other virtual objects, and, in response to no antagonistic behavior occurring between the controlled virtual object and other virtual objects within a target time length, perform the step of displaying the virtual mark at the target position of the virtual scene.
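The real-time detection described above amounts to a quiet-period check before display; the following sketch is an assumption about how it might look (TARGET_DURATION and the monotonic-clock timestamps are illustrative, not from the patent).

```python
import time

TARGET_DURATION = 5.0  # seconds without confrontation before display (assumed)

def may_display_mark(last_confrontation_time, now=None):
    """Return True when no antagonistic behavior occurred within the target duration."""
    now = time.monotonic() if now is None else now
    return (now - last_confrontation_time) >= TARGET_DURATION
```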
In a possible embodiment, the apparatus further comprises:
and the target position determining unit is used for determining the target position according to the distance between the controlled virtual object and the first virtual object.
In a possible implementation manner, the target position determining unit is configured to determine a first region, where a center of the first region is the position of the first virtual object in the virtual scene and a radius of the first region is a first target radius; in response to the distance being smaller than the first target radius, determine a ray that starts at the position of the controlled virtual object and passes through the position of the first virtual object; and determine the intersection point of the ray and the boundary line of the first region as the target position.
In a possible implementation, the target position determining unit is further configured to determine a first area, where a center of the first area is a position of the first virtual object in the virtual scene, and a radius of the first area is a first target radius. And determining a straight line passing through the position of the controlled virtual object and the position of the first virtual object in response to the fact that the distance is larger than the first target radius and smaller than the second target radius. The second target radius is greater than the first target radius. An intersection of the straight line and the first region boundary line is determined as the target position.
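The two geometric cases above can be reconstructed as a small circle-intersection routine. This sketch is illustrative only: the patent does not give code, and the choice of the nearer intersection point in the second case, as well as all names, are assumptions.

```python
import math

def target_position(controlled, first, r1, r2):
    """Place the mark on the boundary of the first region (circle of radius r1
    centred on the first virtual object). Returns None when the distance falls
    outside both described cases."""
    dx, dy = first[0] - controlled[0], first[1] - controlled[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return None
    ux, uy = dx / dist, dy / dist  # unit direction from controlled to first
    if dist < r1:
        # Controlled object inside the first region: follow the ray from the
        # controlled object through the first object until it exits the circle.
        t = dist + r1
    elif r1 < dist < r2:
        # Controlled object between the two radii: take the intersection of the
        # connecting line with the circle that is nearer to the controlled object.
        t = dist - r1
    else:
        return None
    return (controlled[0] + t * ux, controlled[1] + t * uy)
```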
In a possible embodiment, the apparatus further comprises:
a target flying speed determining unit, configured to determine a target flying speed of the virtual marker according to a distance between the controlled virtual object and the first virtual object, where the target flying speed is positively correlated with the distance between the controlled virtual object and the first virtual object.
The display unit is further configured to display, on the virtual scene, a flying process of the virtual marker flying from the controlled virtual object to the target position at the target flying speed.
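The patent only requires that the target flying speed be positively correlated with the distance; the linear form and constants below are assumptions used for illustration.

```python
BASE_SPEED = 2.0      # speed units per second at zero distance (assumed)
SPEED_PER_UNIT = 0.5  # additional speed per unit of distance (assumed)

def target_flying_speed(distance):
    # Any monotonically increasing function of distance satisfies the claim;
    # a linear relation is the simplest choice.
    return BASE_SPEED + SPEED_PER_UNIT * distance
```

With this choice, a mark thrown at a distant first virtual object flies faster, so the flight animation takes a roughly comparable time regardless of distance.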
In one possible embodiment, the display unit is further configured to not display the virtual mark in response to the remaining ability value satisfying a second target condition.
In a possible implementation manner, the target position determining unit is further configured to obtain a target position of the virtual marker displayed in the virtual scene according to the identifier of the first virtual object.
In a possible embodiment, the apparatus further comprises:
and the display orientation determining unit is used for acquiring the display orientation of the virtual mark displayed in the virtual scene according to the identifier of the first virtual object.
The display unit is used for displaying the virtual mark on the target position of the virtual scene according to the display orientation.
In a possible embodiment, the apparatus further comprises:
and the virtual grade determining unit is used for determining the virtual grade of the first virtual object, and responding to the fact that the virtual grade meets a target grade condition, and executing the step of displaying the virtual mark on the target position of the virtual scene.
In one aspect, a computer device is provided that includes one or more processors and one or more memories having at least one instruction stored therein, the instruction being loaded and executed by the one or more processors to implement operations performed by the display method of the virtual mark.
In one aspect, a computer-readable storage medium is provided, in which at least one instruction is stored, and the instruction is loaded and executed by a processor to implement the operation performed by the display method of the virtual mark.
With the method provided by the embodiments of the application, after a user completes a certain virtual event, for example, defeats or destroys a first virtual object, the virtual mark can be set at a target position near the defeated or destroyed first virtual object. When virtual objects controlled by other users approach the target position, those users can observe the virtual mark, learn that the virtual event has been completed, and determine which user completed it from the style of the virtual mark, thereby improving the efficiency of human-computer interaction.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; other drawings can be obtained from these drawings by those skilled in the art without creative effort.
Fig. 1 is a schematic diagram of an implementation environment of a display method of a virtual mark according to an embodiment of the present application;
FIG. 2 is an interaction diagram of a display method of a virtual mark according to an embodiment of the present disclosure;
FIG. 3 is a logic diagram of a method for displaying a virtual tag according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram of a target position determining method provided in an embodiment of the present application;
fig. 5 is a schematic diagram of a target position determining method provided in an embodiment of the present application;
FIG. 6 is a schematic diagram illustrating a display effect of a virtual mark provided in an embodiment of the present application;
FIG. 7 is a schematic diagram illustrating a display effect of a virtual mark provided in an embodiment of the present application;
FIG. 8 is an interaction diagram of a display method of a virtual mark according to an embodiment of the present application;
FIG. 9 is a logic diagram for displaying a virtual tag provided by an embodiment of the present application;
FIG. 10 is a schematic diagram illustrating an effect of setting a target position according to an embodiment of the present application;
fig. 11 is a schematic view of a virtual scene provided in an embodiment of the present application;
fig. 12 is a schematic view of a virtual scene provided in an embodiment of the present application;
fig. 13 is a schematic view of a virtual scene provided in an embodiment of the present application;
fig. 14 is a schematic view of a virtual scene provided in an embodiment of the present application;
fig. 15 is a flowchart of a display method of a virtual mark according to an embodiment of the present application;
fig. 16 is a schematic structural diagram of a virtual mark display device according to an embodiment of the present application;
fig. 17 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 18 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Hereinafter, terms related to the present application are explained.
Virtual scene: a virtual scene displayed (or provided) by an application program when it runs on a terminal. The virtual scene may be a simulation of the real world, a semi-simulated, semi-fictional virtual environment, or a purely fictional virtual environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene; the dimension of the virtual scene is not limited in the embodiments of the present application. For example, a virtual scene may include sky, land, and ocean; the land may include environmental elements such as deserts and cities; and a user may control a virtual object to move in the virtual scene.
Virtual object: refers to a movable object in a virtual scene. The movable object can be a virtual character, a virtual animal, an animation character, etc., such as: characters, animals, plants, oil drums, walls, stones, etc. displayed in the virtual scene. The virtual object may be an avatar in the virtual scene that is virtual to represent the user. The virtual scene may include a plurality of virtual objects, each virtual object having its own shape and volume in the virtual scene and occupying a portion of the space in the virtual scene.
Alternatively, the virtual object may be a user character controlled by operations on the client, an Artificial Intelligence (AI) trained and set in the virtual scene battle, a Non-Player Character (NPC) set in the virtual scene, or a building set in the virtual scene. Alternatively, the virtual object may be a virtual character playing a game in the virtual scene. Optionally, the number of virtual objects participating in the interaction in the virtual scene may be preset, or may be dynamically determined according to the number of clients participating in the interaction.
Taking a MOBA game as an example, a user can control a virtual object to move in the virtual scene, and can also control the virtual object to release virtual skills to attack virtual objects of the enemy camp and reduce their capacity values. Correspondingly, a virtual object of the enemy camp can also attack the virtual object controlled by the user with virtual skills, reducing its capacity value. The above scenarios are merely illustrative, and the embodiments of the present application are not limited thereto.
Virtual mark: a mark used to display the personal characteristics of a user. For example, in a MOBA game, after a user completes an event in the game, a virtual mark is placed by the virtual object controlled by that user at the position where the event was completed. Through the virtual mark, other users can learn that the event was completed by the user corresponding to the mark, and the user can also take a screenshot of the virtual scene to share the completed event with other users.
Hereinafter, a system architecture according to the present application will be described.
Fig. 1 is a schematic diagram of an implementation environment of a display method of a virtual mark provided in an embodiment of the present application, and referring to fig. 1, the implementation environment includes: a first terminal 120, a second terminal 140, and a server 160.
The first terminal 120 may be, but is not limited to, a smartphone, a tablet computer, a notebook computer, a desktop computer, and the like. The first terminal 120 may install and run an application program that supports displaying a virtual scene. The application program may be any one of a First-Person Shooting game (FPS), a third-person shooting game, a Multiplayer Online Battle Arena (MOBA) game, a virtual reality application program, a three-dimensional map program, a military simulation program, or a multiplayer gunfight survival game. The first terminal 120 may be a terminal used by a first user, who uses the first terminal 120 to operate a first virtual object located in the virtual scene to perform activities including, but not limited to, at least one of: adjusting body posture, walking, running, riding, jumping, attacking, and releasing virtual skills. Illustratively, the first virtual object is a first virtual character, such as a simulated person or an animated character.
The second terminal 140 may be, but is not limited to, a smartphone, a tablet computer, a notebook computer, a desktop computer, and the like. The second terminal 140 may install and run an application program supporting a virtual scene. The application program may be any one of an FPS, a third-person shooting game, a MOBA game, a virtual reality application program, a three-dimensional map program, a military simulation program, or a multiplayer gunfight survival game. The second terminal 140 may be a terminal used by a second user, who uses the second terminal 140 to operate a second virtual object located in the virtual scene to perform activities including, but not limited to, at least one of: adjusting body posture, walking, running, riding, jumping, attacking, and releasing virtual skills. Illustratively, the second virtual object is a second virtual character, such as a simulated person or an animated character.
Optionally, the first virtual object controlled by the first terminal 120 and the second virtual object controlled by the second terminal 140 are in the same virtual scene, and the first virtual object may interact with the second virtual object in the virtual scene. In some embodiments, the first virtual object and the second virtual object may be in a hostile relationship, for example, the first virtual object and the second virtual object may belong to different teams and organizations, and the hostile virtual objects may interact with each other in a competitive manner by mutually enabling virtual skills.
In some embodiments, the first virtual object and the second virtual object may be in a teammate relationship, for example, the first virtual character and the second virtual character may belong to the same team, the same organization, have a friend relationship, or have temporary communication rights.
Alternatively, the applications installed on the first terminal 120 and the second terminal 140 are the same, or the applications installed on the two terminals are the same type of application of different operating system platforms. The first terminal 120 may generally refer to one of a plurality of terminals, and the second terminal 140 may generally refer to one of a plurality of terminals, and this embodiment is only illustrated by the first terminal 120 and the second terminal 140. The device types of the first terminal 120 and the second terminal 140 are the same or different, and include: at least one of a smartphone, a tablet, a laptop portable computer, and a desktop computer. For example, the first terminal 120 and the second terminal 140 may be smart phones, or other handheld portable gaming devices. The following embodiments are exemplified by using a terminal as a smart phone.
The server 160 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a Content Delivery Network (CDN), and big data and artificial intelligence platforms. The server 160 is used to provide background services for applications that support the display of virtual scenes. The first terminal 120 and the second terminal 140 may establish a network connection with the server 160 through a wired or wireless network.
Those skilled in the art will appreciate that the number of terminals described above may be greater or fewer. For example, the number of the terminals may be only one, or several tens or hundreds of the terminals, or more. The number of terminals and the type of the device are not limited in the embodiments of the present application.
In the embodiments of the present application, the terminal is used to receive user operations and display the data returned by the server, while the server performs background data processing based on the user operations and sends the processed data to the terminal. In other possible embodiments, the technical solution provided in the present application may also be implemented with a terminal or a server alone as the execution subject. Fig. 2 is a flowchart of a display method of a virtual mark provided in an embodiment of the present application, and fig. 3 is a logic diagram of the method; referring to fig. 2 and fig. 3, the method includes:
201. in response to a countermeasure action occurring between a controlled virtual object in the virtual scene and a first virtual object, the server determines a remaining capacity value of the first virtual object, the controlled virtual object being a virtual object controlled by an end user.
Wherein the first virtual object may be a first user-controlled virtual object, and the number of the first virtual objects may be one or more. The first virtual object may be antagonistic to a controlled virtual object controlled by an end user in the virtual scene. The antagonistic behavior may reduce the ability values of the first virtual object and the controlled virtual object.
In one possible implementation, the server may determine the remaining ability values of the first virtual object and the controlled virtual object in real time while the antagonistic behavior between them is occurring. In response to the remaining ability value of the controlled virtual object being lower than the first target ability value, the server may determine that the controlled virtual object has been eliminated, and may not perform the subsequent steps; correspondingly, in response to the remaining ability value of the controlled virtual object being higher than the first target ability value, the server may perform the subsequent steps. Taking a MOBA game as an example, the controlled virtual object may be a "hero" controlled by the end user, the first virtual object may be a "hero" controlled by a first user, and the two "heroes" may confront each other, for example, by mutually releasing virtual skills to reduce the other party's remaining ability value, which is also referred to as "blood volume" in MOBA games. The server may determine the "blood volume" of the first virtual object and the controlled virtual object in real time. In response to the "blood volume" of the controlled virtual object being lower than the first target ability value, e.g., 0, the server may determine that the controlled virtual object has been eliminated, and may not perform further steps.
202. And responding to the residual capacity value meeting the first target condition, and acquiring a virtual mark of the controlled virtual object by the server, wherein the virtual mark is used for representing that the first virtual object is defeated by the controlled virtual object.
The virtual mark can be a mark for showing personal characteristics of a user, and the virtual mark can be obtained by providing materials by a server and combining the materials by the user; or the virtual mark may be designed and uploaded to the server by the end user, which is not limited in the embodiment of the present application. In a MOBA-type game, the virtual indicia may be winning indicia. The winning mark can be an object thrown by the hero controlled by the end user when defeating the hero controlled by other users, and the object can have various expression forms, such as a hat with a user identifier or a cake. The object may have a decorative effect by which other users may determine that the "hero" controlled by the end user defeats the "hero" controlled by the other users.
In a possible implementation manner, in response to that the remaining capability value of the first virtual object is lower than the first target capability value, the server may perform a search in the storage space according to the user identifier of the end user, and obtain a virtual tag corresponding to the user identifier, that is, a virtual tag of the controlled virtual object. Taking the application as an MOBA game as an example, the terminal user may select a virtual tag for the game before the game starts, and after the terminal user selects the virtual tag, the terminal may bind the virtual tag selected by the terminal user and the user identifier and send the bound virtual tag to the server. In a subsequent game process, in response to the remaining ability value of the first virtual object being lower than the first target ability value, the server may query the corresponding virtual tag according to the user identification. The remaining ability value of the first virtual object meeting the first target condition may mean that the blood volume of "hero" controlled by another user is reduced to 0. In this implementation, the server may only store the correspondence between the user identifier and the virtual tag, and does not need to store the correspondence between the first virtual object and the virtual tag.
In one possible implementation manner, in response to that the remaining capacity value of the first virtual object is lower than the first target capacity value, the server may perform a lookup in the memory according to the identifier of the controlled virtual object, and obtain a virtual tag corresponding to the identifier of the controlled virtual object. Also taking the application as an example of a MOBA-like game, the user may configure different virtual badges for different types of controlled virtual objects. Before the game starts, the terminal can directly load the virtual mark configured by the user according to the controlled virtual object selected by the user, and binds the virtual mark selected by the user and the identifier of the controlled virtual object and sends the bound virtual mark and the identifier to the server. In response to the remaining ability value of the first virtual object being lower than the first target ability value, the server may query the corresponding virtual tag according to the identity of the controlled virtual object during the subsequent game. In this implementation manner, the server may query the corresponding virtual tag according to the identifier of the first virtual object, which means that the terminal user may configure different virtual tags for different first virtual objects, thereby improving the personalization degree of the virtual tag.
In one possible implementation, in response to the remaining capability value of the first virtual object being lower than the first target capability value, the server may determine a corresponding virtual tag according to an instruction of the user. Also taking the application as a MOBA game as an example, the user can upload a plurality of virtual tags to the server in advance. In response to the remaining capability value of the first virtual object being lower than the first target capability value, the server may query the plurality of virtual tags according to the user identifier, send the plurality of virtual tags to the terminal, the terminal receives the plurality of virtual tags and displays the plurality of virtual tags to the user, and the user may select any one of the virtual tags through the terminal. The terminal can send the virtual mark selected by the user to the server, and the server takes the virtual mark as the virtual mark of the controlled virtual object. In this implementation manner, when the remaining capacity value of the first virtual object meets the first target condition, the terminal may display a plurality of virtual marks for the user to select, and the user may select a virtual mark most suitable for the current virtual scene, thereby improving the game experience of the user.
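The three acquisition strategies described above (lookup by user identifier, lookup by controlled-object identifier, and explicit user selection) could be combined as in the following sketch. The dictionary-based storage, the priority ordering, and all names are assumptions for illustration, not the patent's implementation.

```python
def lookup_mark(user_id, object_id, user_marks, object_marks, pending_choice=None):
    # Strategy 3: an explicit selection, if one was sent by the terminal.
    if pending_choice is not None:
        return pending_choice
    # Strategy 2: a mark bound to the identifier of the controlled object.
    if object_id in object_marks:
        return object_marks[object_id]
    # Strategy 1: fall back to the mark bound to the end user's identifier.
    return user_marks.get(user_id)

user_marks = {"u1": "hat"}
object_marks = {"hero_7": "cake"}
print(lookup_mark("u1", "hero_7", user_marks, object_marks))  # per-object mark wins
print(lookup_mark("u1", "hero_9", user_marks, object_marks))  # falls back to user mark
```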
It should be noted that steps 203-205 described below are optional. If the server already stores the target position at which the virtual mark is displayed in the virtual scene, then after step 202 the server may directly perform the display steps of the virtual mark in steps 207 and 208; of course, it may also perform step 203 or step 204 to further determine whether to display the virtual mark according to the distance between the controlled virtual object and the first virtual object, or according to whether the controlled virtual object is in confrontation with another virtual object, and it may also perform step 206 to determine the target flight speed of the virtual mark. If the server does not store the target position of the virtual mark, the server may execute steps 203-205 described below to determine the target position at which the virtual mark is displayed in the virtual scene. The execution sequence of these steps is not limited in the embodiments of the present application.
203. The server acquires first position information of the controlled virtual object in the virtual scene and second position information of the first virtual object in the virtual scene, and acquires the distance between the controlled virtual object and the first virtual object based on the first position information and the second position information.
In one possible embodiment, the server may query the first coordinate of the controlled virtual object, i.e. the first position information, according to the identifier of the controlled virtual object. The server may query the second coordinate of the first virtual object, i.e. the second location information, according to the identifier of the first virtual object. The server may determine a distance between the controlled virtual object and the first virtual object based on the first coordinate and the second coordinate.
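The position lookup and distance computation of step 203 can be sketched in a few lines; the position store, object identifiers, and 2D coordinates below are hypothetical simplifications:

```python
import math

# Hypothetical position store: object identifier -> (x, y) scene coordinates.
positions = {
    "controlled_hero": (10.0, 4.0),
    "first_object": (13.0, 8.0),
}

def get_distance(controlled_id, first_id):
    """Look up the first and second position information by identifier and
    return the straight-line distance between the two objects."""
    x1, y1 = positions[controlled_id]   # first position information
    x2, y2 = positions[first_id]        # second position information
    return math.hypot(x2 - x1, y2 - y1)
```

With the sample coordinates above, the two objects are 5 scene units apart.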
After step 203, the server may determine whether the distance between the controlled virtual object and the first virtual object is within the target distance range. In response to the distance being within the target distance range and the server storing the target position at which the virtual mark is displayed in the virtual scene, the server may directly perform the display steps of the virtual mark in steps 207 and 208; in response to the distance not being within the target distance range, the server may stop performing the subsequent display steps of the virtual mark. Taking a MOBA-type game as an example, after the "hero" controlled by the end user defeats the "hero" controlled by the first user, the server may determine whether to display the virtual mark according to the distance between the two "hero", where the first user is the user controlling the first virtual object. In response to the distance being within the target distance range, the terminal may perform the display steps of the virtual mark in steps 207 and 208; in response to the distance not being within the target distance range, the server may not perform the subsequent display steps of the virtual mark.
In this implementation, if the distance between the end user controlled "hero" and the first user controlled "hero" is within the target distance range, indicating that the end user may wish to communicate information to other users via the virtual mark, then the server may perform the subsequent step of displaying the virtual mark; if the distance between the hero controlled by the end user and the hero controlled by the first user is not within the target distance range, it indicates that the end user does not wish to transmit information to other users through the virtual mark, the end user may control the controlled virtual object to complete other virtual events, and the server may not perform the subsequent step of displaying the virtual mark. That is, when the user does not need to transmit information through the virtual mark, the server may not perform the subsequent display step, saving computing resources.
In addition, if the server does not store the target position of the virtual marker displayed in the virtual scene, after step 203, the server may further continue to execute steps 204 and 205 to determine the target display position. Of course, if the server stores the target position at which the virtual mark is displayed in the virtual scene, the server may execute step 204 after step 203 to further determine whether to display the virtual mark.
204. The server detects in real time whether the controlled virtual object is in confrontation with other virtual objects.
In one possible implementation, the server may detect the remaining ability value of the controlled virtual object in real time. In response to the remaining ability value of the controlled virtual object decreasing, the server may determine that the controlled virtual object is in confrontation with other virtual objects; in response to the remaining ability value staying unchanged or rising, the server may determine that it is not. Taking a MOBA game as an example, the server can detect the "blood volume" of the "hero" controlled by the end user in real time. In response to detecting that this "blood volume" has decreased, the server may determine that the end user's "hero" is in confrontation with a "hero" controlled by another user; in response to detecting that the "blood volume" is unchanged or has increased, the server may determine that no such confrontation is occurring.
Further, the server may detect in real time whether the controlled virtual object is located within the virtual attack range of other virtual objects, where the virtual attack range may include the action range of a virtual object's virtual skills and the action range of its virtual props. In response to the controlled virtual object not being within the virtual attack range of any other virtual object, the server may determine that the controlled virtual object is not in confrontation with other virtual objects. In response to the controlled virtual object being within the virtual attack range of another virtual object, the server may detect in real time whether the first virtual object has launched a virtual skill or attacked the controlled virtual object through a virtual prop. In response to the first virtual object launching a virtual skill or attacking the controlled virtual object through a virtual prop and thereby causing the remaining ability value of the controlled virtual object to decrease, the server may determine that the controlled virtual object is in confrontation with other virtual objects. Taking a MOBA game as an example, the server can detect in real time whether the "hero" controlled by the end user is located within the attack range of a "hero" controlled by another user, where the attack range may be the skill action range of that "hero", the range of its "general attack", or the action range of the game props it carries. In response to the end-user controlled "hero" not being within the attack range of any other user's "hero", the server may determine that the end-user controlled "hero" is not in confrontation with them.
In response to the "hero" controlled by the end user being within the attack range of a "hero" controlled by another user, the server can detect in real time whether that "hero" causes damage to the end user's "hero" through skills, "general attacks", or game props, thereby reducing the "blood volume" of the end user's "hero". In response to such a decrease in "blood volume", the server may determine that the "hero" controlled by the end user is in confrontation with the "hero" controlled by other users.
In one possible implementation, the server may detect in real time whether the controlled virtual object causes the remaining ability value of another virtual object to decrease. In response to the controlled virtual object causing such a decrease, the server may determine that the controlled virtual object is in confrontation with other virtual objects; in response to no such decrease, the server may determine that it is not. Taking a MOBA game as an example, the server can detect in real time whether the "hero" controlled by the end user damages a "hero" controlled by another user through skills, "general attacks", or game props, thereby reducing that "hero"'s "blood volume". In response to the end user's "hero" causing such damage, the server may determine that it is in confrontation with the other user's "hero"; in response to no such damage, the server may determine that it is not.
Further, the server may detect in real time whether there are other virtual objects within the virtual attack range of the controlled virtual object, where the virtual attack range may include the action range of a virtual object's virtual skills and the action range of its virtual props. In response to no other virtual object being within the virtual attack range of the controlled virtual object, the server may determine that the controlled virtual object is not in confrontation with other virtual objects. In response to other virtual objects being within the virtual attack range of the controlled virtual object, the server may detect in real time whether the controlled virtual object launches a virtual skill or attacks the other virtual objects through a virtual prop. In response to the controlled virtual object launching a virtual skill or attacking through a virtual prop and thereby causing the remaining ability value of another virtual object to decrease, the server may determine that the controlled virtual object is in confrontation with other virtual objects. Taking a MOBA game as an example, the server can detect in real time whether a "hero" controlled by another user is within the attack range of the "hero" controlled by the end user, where the attack range may be the skill action range of the end user's "hero", the range of its "general attack", or the action range of the game props it carries. In response to no other user's "hero" being within the attack range of the end-user controlled "hero", the server may determine that the end-user controlled "hero" is not in confrontation with other users' "hero".
In response to a "hero" controlled by another user being within the attack range of the "hero" controlled by the end user, the server can detect in real time whether the end user's "hero" causes damage to the other user's "hero" through skills, "general attacks", or game props, thereby reducing the other "hero"'s "blood volume". In response to such a decrease in "blood volume", the server may determine that the "hero" controlled by the end user is in confrontation with the "hero" controlled by other users.
Of course, the server may also combine the above two embodiments to determine whether the controlled virtual object is in confrontation with other virtual objects. That is, the server may detect both whether the remaining ability value of the controlled virtual object has decreased and whether the controlled virtual object has caused the remaining ability value of another virtual object to decrease; only when neither has occurred does the server determine that the controlled virtual object is not in confrontation with other virtual objects. This is not limited in the embodiments of the present application.
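The combined check described above can be sketched as follows. Here `prev_values` and `cur_values` are hypothetical snapshots of remaining ability values taken on consecutive detections, and attributing any other object's drop to the controlled object is a simplifying assumption:

```python
def in_confrontation(prev_values, cur_values, controlled_id):
    """Return True if the controlled object's remaining ability value has
    decreased (it took damage) or any other object's value has decreased
    (here assumed to be damage dealt by the controlled object)."""
    took_damage = cur_values[controlled_id] < prev_values[controlled_id]
    dealt_damage = any(
        cur_values[oid] < prev_values[oid]
        for oid in cur_values if oid != controlled_id
    )
    return took_damage or dealt_damage
```

When the function returns False, neither condition holds, matching the case in which the server determines that no confrontation is occurring.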
After step 204, in response to the controlled virtual object not being in confrontation with any other virtual object and the server storing the target position at which the virtual mark is displayed in the virtual scene, the server may directly perform the display steps of the virtual mark in steps 207 and 208. In response to the controlled virtual object being in confrontation with other virtual objects, the server may stop performing the subsequent display steps of the virtual mark. Taking a MOBA game as an example, after the "hero" controlled by the end user defeats the "hero" controlled by the first user, the server may determine whether the end user's "hero" is in confrontation with "hero" controlled by other users. In response to no such confrontation, the terminal may perform the display steps of the virtual mark in steps 207 and 208; in response to such a confrontation, the server may not perform the subsequent display steps of the virtual mark. In this implementation, if the end user's "hero" is not in confrontation with other users' "hero", displaying the virtual mark in the virtual scene does not affect the end user's "hero", and the server may perform the subsequent display steps; if a confrontation is in progress, displaying the virtual mark may interfere with the "battle" between the two "hero", and the server may not perform the subsequent display steps.
That is, when the "battle" between the end user's "hero" and other users' "hero" might be affected, the server may not perform the subsequent display steps, avoiding a poor gaming experience for the end user.
205. In response to the distance between the controlled virtual object and the first virtual object being within the target distance range and the controlled virtual object not being in confrontation with other virtual objects within the target duration, the server determines the target position according to the distance between the controlled virtual object and the first virtual object.
The target distance range and the target duration can be set according to actual conditions. For example, the target distance range may be a distance range of 1m-2m in the virtual scene, and the target duration may be 2 s.
In one possible embodiment, the server may determine in real time the relationship between the distance between the controlled virtual object and the first virtual object and the target distance range. In response to the distance being within the target distance range, the server may start timing from the moment it determines that the controlled virtual object is not in confrontation with other virtual objects. In response to the controlled virtual object not entering any confrontation within the target duration, the server may determine a first area whose center is the position of the first virtual object in the virtual scene and whose radius is a first target radius. In response to the distance between the controlled virtual object and the first virtual object being smaller than the first target radius, the server may determine a ray whose endpoint is the position of the first virtual object and which passes through the position of the controlled virtual object, and determine the intersection of the ray with the boundary line of the first area as the target position. Referring to fig. 4, 401 is the first area, 402 is the position of the first virtual object in the virtual scene, 403 is the position of the controlled virtual object in the virtual scene, 404 is the target position, and r is the first target radius.
In one possible embodiment, the server may determine in real time the relationship between the distance between the controlled virtual object and the first virtual object and the target distance range. In response to the distance being within the target distance range, the server may start timing from the moment it determines that the controlled virtual object is not in confrontation with other virtual objects. In response to the controlled virtual object not entering any confrontation within the target duration, the server may determine a first area whose center is the position of the first virtual object in the virtual scene and whose radius is a first target radius. In response to the distance between the controlled virtual object and the first virtual object being greater than the first target radius and less than a second target radius, the server may determine a straight line passing through the position of the controlled virtual object and the position of the first virtual object, and determine the intersection of the straight line with the boundary line of the first area as the target position, where the second target radius is greater than the first target radius. Referring to fig. 5, 501 is the first area, 502 is the position of the first virtual object in the virtual scene, 503 is the position of the controlled virtual object in the virtual scene, 504 is the target position, r is the first target radius, and R is the second target radius.
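In both embodiments the target position lies on the boundary of the first area along the direction from the first virtual object toward the controlled virtual object, whether the controlled object is inside the circle (fig. 4) or outside it (fig. 5). A minimal sketch under that assumption; the degenerate-case fallback is illustrative:

```python
import math

def target_position(first_pos, controlled_pos, first_radius):
    """Intersect the ray from the first object's position through the
    controlled object's position with the circle of radius `first_radius`
    centred at the first object; the intersection is the target position."""
    fx, fy = first_pos
    cx, cy = controlled_pos
    dx, dy = cx - fx, cy - fy
    d = math.hypot(dx, dy)
    if d == 0:
        # Degenerate case (coincident positions): pick a fixed direction.
        return (fx + first_radius, fy)
    # Scale the unit direction vector by the radius to land on the boundary.
    return (fx + dx / d * first_radius, fy + dy / d * first_radius)
```

The same formula covers both cases because only the direction of the controlled object relative to the first object matters, not whether its distance is above or below the first target radius.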
It should be noted that step 206 below is optional. If the server stores the target flight speed of the virtual mark, the server may skip step 206 and directly execute steps 207 and 208. If the server does not store the target flight speed of the virtual mark, the server may determine it via step 206. Of course, step 206 may also be omitted entirely, with steps 207 and 208 executed directly after step 205; the execution order of these steps is not limited in the embodiments of the present application.
206. And the server determines the target flight speed of the virtual mark according to the distance between the controlled virtual object and the first virtual object, wherein the target flight speed is positively correlated with the distance between the controlled virtual object and the first virtual object.
In one possible embodiment, the server may determine the target flying speed of the virtual marker according to the corresponding relationship between the distance between the controlled virtual object and the first virtual object and the target flying speed. The target flying speed is positively correlated with the distance between the controlled virtual object and the first virtual object, that is, the farther the distance between the controlled virtual object and the first virtual object is, the larger the target flying speed is; the closer the distance between the controlled virtual object and the first virtual object, the smaller the target flying speed.
In one possible embodiment, the server may set a target time of flight that represents the time that the virtual marker flies from the controlled virtual object to the target location. The server may determine the target flight speed based on a distance between the controlled virtual object and the first virtual object and the target flight time. In this implementation, as long as the distance between the controlled virtual object and the first virtual object is within the target distance range, the virtual marker will fly from the position of the controlled virtual object to the target position within the same time regardless of the distance between the two.
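The two speed-determination variants above can be expressed in a few lines. The proportionality factor and the fixed flight time below are hypothetical values, not taken from the text:

```python
def target_flight_speed(distance, flight_time=None, speed_per_unit=2.0):
    """Variant 1 (fixed flight time): speed = distance / flight_time, so the
    mark arrives after the same duration regardless of distance.
    Variant 2 (positive correlation): speed grows linearly with distance."""
    if flight_time is not None:
        return distance / flight_time
    return distance * speed_per_unit
```

Under variant 1 a farther target still yields a proportionally higher speed, which is consistent with the stated positive correlation between speed and distance.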
Through step 206, the server may determine the target flight speed of the virtual mark, which may yield a better display effect during the subsequent displaying of the virtual mark.
207. And the server sends the virtual mark, the target position and the target flying speed to the terminal.
In one possible embodiment, if the server does not store the target flight speed of the virtual marker or the server does not execute step 206, the server may send the virtual marker and the target position to the terminal.
208. And the terminal receives the virtual mark, the target position and the target flying speed sent by the server, and displays the virtual mark at the target position of the virtual scene.
In a possible implementation manner, after the terminal receives the virtual mark, the target position and the target flying speed sent by the server, the terminal may control the virtual mark to fly from the position of the controlled virtual object to the target position at the target flying speed in the virtual scene, and display the flying process of the virtual mark. For example, the terminal may determine the flight trajectory of the virtual marker according to the position of the controlled virtual object in the virtual scene and the target position, and determine the position of the virtual marker in each frame of image according to the target flight speed and the current Frame Per Second (FPS). The terminal can display the flying process of the virtual mark flying from the controlled virtual object to the target position in the virtual scene at the target flying speed according to the flying track of the virtual mark and the position of the virtual mark in each frame of image. The terminal may then maintain a display of the virtual marker at the target location, which other user-controlled virtual objects may see as they pass by the target location.
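The per-frame position computation described above can be sketched as a linear interpolation along the flight trajectory. The straight-line trajectory and the argument names are assumptions; a real trajectory may be curved:

```python
import math

def flight_frames(start, target, speed, fps):
    """Yield the mark's position in each frame, advancing speed / fps scene
    units per frame along the straight line from start to target."""
    sx, sy = start
    tx, ty = target
    dist = math.hypot(tx - sx, ty - sy)
    if dist == 0:
        yield (tx, ty)
        return
    step = speed / fps                       # distance covered per frame
    frames = max(1, math.ceil(dist / step))  # frames needed to reach the target
    for i in range(1, frames + 1):
        t = min(1.0, i * step / dist)        # interpolation parameter in [0, 1]
        yield (sx + (tx - sx) * t, sy + (ty - sy) * t)
```

Dividing the speed by the current frame rate (FPS) gives the per-frame displacement, so the animation stays at the target flight speed regardless of how fast the terminal renders.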
In a possible embodiment, if the server does not store the target flight speed of the virtual mark or the server does not execute step 206, the terminal may receive only the virtual mark and the target position sent by the server. The terminal can control the virtual mark to fly from the position of the controlled virtual object to the target position in the virtual scene and display the flying process of the virtual mark. Referring to fig. 6, 601 is the first virtual object, 602 is the virtual mark, and 603 is the controlled virtual object. The terminal may then maintain the display of the virtual mark at the target position, where virtual objects controlled by other users can see it as they pass by. Taking a MOBA game as an example, referring to fig. 7, 701 is the "hero" controlled by the end user, 702 is the "hero" controlled by the first user, and 703 is the virtual mark. After the "hero" controlled by the end user defeats the "hero" controlled by the first user, the terminal can control the end user's "hero" to "throw" the virtual mark, that is, control the virtual mark to fly from the end user's "hero" to the target position. Other users, upon seeing the virtual mark, may know that the "hero" controlled by the end user defeated the "hero" controlled by the first user.
In addition to steps 201-208, the server may execute step 209, which provides a condition under which the virtual mark disappears.
209. In response to the remaining ability value meeting the second target condition, the terminal stops displaying the virtual mark.
In one possible implementation, the server may detect the remaining ability value of the first virtual object in real time, and in response to that value being greater than the first target ability value, the server may determine that the first virtual object is in an active state. The server may send the terminal an instruction to stop displaying the virtual mark, where the instruction may carry the identifier of the virtual mark, and the terminal may stop displaying the virtual mark corresponding to that identifier. Taking a MOBA game as an example, after the "hero" controlled by the first user is "revived", the server may determine that this "hero" is in an active state and send an instruction to the terminal, and the terminal receives the instruction and stops displaying the virtual mark; that is, the terminal makes the virtual mark displayed at the target position no longer visible. In this implementation, the "revival" of the first user's "hero" indicates that the end user's "hero" and the first user's "hero" may engage in a new confrontation. By no longer displaying the virtual mark, other users can know that they may control their own "hero" to confront the "hero" controlled by the first user, making human-computer interaction more efficient.
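The disappearance condition can be sketched as a simple predicate evaluated on each real-time detection; the default threshold of 0 is illustrative:

```python
def mark_should_disappear(remaining_value, first_target_value=0):
    """The first object is back in an active state ('revived') once its
    remaining ability value rises above the first target value, at which
    point the server instructs the terminal to stop displaying the mark."""
    return remaining_value > first_target_value
```

The server would evaluate this on each detection and send the stop-display instruction, carrying the mark's identifier, on the first transition to True.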
In addition to step 209, the terminal may also stop displaying the virtual mark after a target time interval. This implementation can save display resources of the terminal.
By the method provided by the embodiment of the application, after a user completes a certain virtual event, for example, a certain first virtual object is defeated or destroyed, the virtual marker can be set at a target position near the defeated or destroyed first virtual object. When the virtual object controlled by other users is close to the target position, the virtual mark can be observed, so that the completion of the virtual event is known, and the users who complete the virtual event can be determined according to the style of the virtual mark, so that the efficiency of human-computer interaction is improved. In addition, after the residual capacity value of the first virtual object meets the second target condition, the terminal can not display the virtual mark, so that when the virtual object controlled by other users is close to the target position, the virtual event can be determined to be not completed, and the efficiency of human-computer interaction is higher.
In addition to the above steps 201-209, an embodiment of the present application provides another display method for the virtual mark. Fig. 8 is a flowchart of a display method for a virtual mark provided in an embodiment of the present application, and fig. 9 is a display logic diagram of a virtual mark provided in an embodiment of the present application. Referring to fig. 8 and fig. 9, the method includes:
801. in response to a countermeasure action occurring between a controlled virtual object in the virtual scene and a first virtual object, the server determines a remaining capacity value of the first virtual object, the controlled virtual object being a virtual object controlled by an end user.
The first virtual object may be a server-controlled virtual object or a virtual building in a virtual scene, and the number of the first virtual objects may be one or more. The controlled virtual object may gain some virtual gain by destroying the first virtual object, such as an increase in the offensive power or an increase in virtual skill injury; or the controlled virtual object may win the game by destroying the first virtual object. The first virtual object may be antagonistic to a controlled virtual object controlled by an end user in the virtual scene. The antagonistic behavior may reduce the ability values of the first virtual object and the controlled virtual object.
In one possible implementation, the server may determine the remaining ability values of the first virtual object and the controlled virtual object in real time during their confrontation. In response to the remaining ability value of the controlled virtual object being lower than the first target ability value, the server can determine that the controlled virtual object has been eliminated and may not execute the subsequent steps; accordingly, in response to the remaining ability value of the controlled virtual object being higher than the first target ability value, the server may perform the subsequent steps. Taking a MOBA game as an example, the controlled virtual object may be the "hero" controlled by the end user, and the first virtual object may be a "monster" controlled by the server or a virtual building in the virtual scene; the end user's "hero" may "fight" the server-controlled "monster" and may also attack virtual buildings in the virtual scene, for example by launching virtual skills or by "general attacks" that reduce the remaining ability value, also referred to as "blood volume" in MOBA-like games, of the "monster" or virtual building. The server may determine the "blood volume" of the first virtual object and the controlled virtual object in real time. In response to the "blood volume" of the controlled virtual object being lower than the first target ability value, for example 0, the server may determine that the controlled virtual object has been eliminated and may not perform the further steps.
802. In response to the remaining ability value meeting the first target condition, the server acquires a virtual mark of the controlled virtual object, where the virtual mark indicates that the first virtual object was defeated by the controlled virtual object.
The virtual mark can be a mark showing the personal characteristics of a user; it can be obtained by the user combining materials provided by the server, or designed by the end user and uploaded to the server, which is not limited in the embodiments of the present application. In a MOBA-type game, the virtual mark may be a win flag, that is, a flag generated when the "hero" controlled by the end user defeats a "monster" controlled by the server. The flag may take various forms, for example a flag with a red flag surface or a blue flag surface, and may have a decorative effect by which other users may determine that the end user's "hero" defeated the "monster" at that location.
In a possible implementation manner, in response to that the remaining capability value of the first virtual object is lower than the first target capability value, the server may perform a lookup in the memory according to the user identifier of the end user, and obtain a virtual tag corresponding to the user identifier, that is, a virtual tag of the controlled virtual object. Taking the application as an MOBA game as an example, the terminal user may select a virtual tag for the game before the game starts, and after the terminal user selects the virtual tag, the terminal may bind the virtual tag selected by the terminal user and the user identifier and send the bound virtual tag to the server. In a subsequent game process, in response to the remaining ability value of the first virtual object being lower than the first target ability value, the server may query the corresponding virtual tag according to the user identification. In this implementation, the server may only store the correspondence between the user identifier and the virtual tag, and does not need to store the correspondence between the first virtual object and the virtual tag.
In one possible implementation manner, in response to the remaining ability value of the first virtual object being lower than the first target ability value, the server may perform a lookup in the storage space according to the identifier of the controlled virtual object and obtain the virtual mark corresponding to that identifier. Again taking a MOBA-type game as an example, the user may configure different virtual marks for different types of controlled virtual objects. Before the game starts, the terminal can directly load the virtual mark configured by the user according to the controlled virtual object selected by the user, bind the virtual mark to the identifier of the controlled virtual object, and send the binding to the server. During the subsequent game, in response to the remaining ability value of the first virtual object being lower than the first target ability value, the server may query the corresponding virtual mark according to the identifier of the controlled virtual object. Here, the remaining ability value of the first virtual object meeting the first target condition may mean that the "blood volume" of the "monster" or of the virtual building drops to 0. In this implementation, the server can query the corresponding virtual mark according to the identifier of the controlled virtual object, which means that the end user can configure different virtual marks for different controlled virtual objects, thereby improving the personalization of the virtual mark.
In one possible implementation, in response to the remaining ability value of the first virtual object being lower than the first target ability value, the server may determine the corresponding virtual mark according to an instruction of the user. Again taking a MOBA-type game as an example, the user can upload a plurality of virtual marks to the server in advance. In response to the remaining ability value of the first virtual object being lower than the first target ability value, the server may query the plurality of virtual marks according to the user identifier and send them to the terminal; the terminal receives and displays the plurality of virtual marks to the user, and the user may select any one of them through the terminal. The terminal can send the selected virtual mark to the server, and the server takes that virtual mark as the virtual mark of the controlled virtual object. In this implementation, when the remaining ability value of the first virtual object meets the first target condition, the terminal can display a plurality of virtual marks for the user to select from, and the user can select the virtual mark best suited to the current virtual scene, thereby improving the user's game experience.
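The three lookup implementations above can be sketched as a small server-side store. This is only an illustrative sketch: the class name, method names, and the preference order (per-controlled-object binding first, then the user-identifier binding) are assumptions, not part of the embodiments.

```python
class VirtualMarkStore:
    """Hypothetical server-side store for virtual marks bound before a game."""

    def __init__(self):
        self.by_user = {}               # user identifier -> virtual mark
        self.by_controlled_object = {}  # controlled virtual object id -> virtual mark

    def bind_user_mark(self, user_id, mark):
        # Binding sent by the terminal when the user selects a mark pre-game.
        self.by_user[user_id] = mark

    def bind_object_mark(self, object_id, mark):
        # Binding for a mark configured for a specific controlled virtual object.
        self.by_controlled_object[object_id] = mark

    def mark_for(self, user_id, object_id=None):
        """Look up the mark when the first target condition is met: prefer a
        mark bound to the controlled object's identifier, else fall back to
        the mark bound to the user identifier (assumed preference order)."""
        if object_id is not None and object_id in self.by_controlled_object:
            return self.by_controlled_object[object_id]
        return self.by_user.get(user_id)
```

A server could populate such a store from the bindings the terminal sends before the game and query it in step 802.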
803. The server acquires, according to the identifier of the first virtual object, a target position at which the virtual mark is displayed in the virtual scene.
In a possible implementation manner, the server may store a corresponding relationship between the identifier of the first virtual object and the target position, and the server may directly obtain the target position of the virtual marker displayed in the virtual scene according to the corresponding relationship. In setting the target location, the technician may determine the target location based on the topography of the virtual scene and the impact on the virtual objects in the virtual scene. For example, a technician may set a target position on a virtual wall of a virtual scene, or set a virtual marker at a position where some virtual objects cannot pass through, so as to ensure that the display of the virtual marker does not affect the antagonistic behavior between subsequent virtual objects. Referring to fig. 10, 1001 is a controlled virtual object, 1002 is a virtual wall, and 1003 is a target position. Since the controlled virtual object 1001 cannot cross the virtual wall 1002, determining 1003 as the target position does not affect the antagonistic behavior between subsequent virtual objects.
804. The server acquires, according to the identifier of the first virtual object, a display orientation of the virtual mark in the virtual scene.
In a possible implementation manner, the server may store a corresponding relationship between the identifier of the first virtual object and the display orientation, and the server may directly obtain the display orientation of the virtual marker displayed in the virtual scene according to the corresponding relationship. In setting the display orientation, the technician may determine the display orientation based on the topography of the virtual scene and the angle at which the user views the virtual scene. For example, a technician may set a display orientation at an angle that can be observed by a user on a virtual wall of a virtual scene, or set a display orientation at an angle that can be observed by the user at a position where some virtual objects cannot pass, so that it is ensured that the display of the virtual mark does not affect the countermeasures between subsequent virtual objects, and it is also ensured that the user can observe the virtual mark in the virtual scene.
It should be noted that the following step 805 is an optional step, and in response to determining that the first virtual object is a virtual building, the server may not perform step 805, and directly perform the step of transmitting the virtual mark, the target position, and the display orientation to the terminal; in response to determining that the first virtual object is a server-controlled virtual object, the server may perform step 805.
805. The server determines a virtual grade of the first virtual object.
The virtual grade may represent the difficulty for the controlled virtual object of defeating the first virtual object: the higher the virtual grade of the first virtual object, the more difficult it is for the controlled virtual object to defeat it. The virtual grade may be set according to an attribute of the first virtual object, which is not limited in the embodiments of the present application.
In a possible implementation manner, the server may perform a query in the corresponding storage space according to the identifier of the first virtual object, and obtain the virtual grade corresponding to that identifier from the storage space. Since the virtual grade represents the difficulty for the controlled virtual object of defeating the first virtual object, the server can determine whether to display the virtual mark according to that difficulty. Taking a MOBA-type game as an example, the "epic monster" is a large neutral unit with strong resistance, and after the "hero" controlled by the end user kills the "epic monster", a strong gain effect can be obtained, such as an "attack power increasing effect" or a "mana-regeneration speed increasing effect". Killing the "epic monster" costs the "hero" controlled by the end user a large amount of effort, and the kill can have a large influence on the situation of the game. Therefore, after the "hero" controlled by the end user kills the "epic monster", the server may determine to display a virtual mark, so that when "heroes" controlled by other users pass the position of the "epic monster", those users can learn that the "epic monster" has been killed and can control their "heroes" to complete other events. In MOBA-type games, "common monsters" also exist besides the "epic monster"; killing a "common monster" brings no strong gain effect and has little influence on the game situation, and the server may therefore determine not to display a virtual mark.
806. In response to the virtual grade meeting the target grade condition, the server sends the virtual mark, the target position, and the display orientation to the terminal.
The virtual grade meeting the target grade condition may mean that the virtual grade is greater than or equal to the target grade. Of course, other conditions may also be used, which is not limited in the embodiments of the present application.
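The target grade condition in step 806 can be sketched as a simple threshold test. The threshold value and function name below are assumptions for illustration; the embodiments only specify "greater than or equal to the target grade".

```python
# Hypothetical target grade; the embodiments do not fix a concrete value.
TARGET_GRADE = 3

def meets_grade_condition(virtual_grade, target_grade=TARGET_GRADE):
    """Return True when the defeated first virtual object's virtual grade is
    high enough for the server to push the virtual mark to the terminal."""
    return virtual_grade >= target_grade
```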
807. The terminal receives the virtual mark, the target position, and the display orientation sent by the server, and displays the virtual mark at the target position of the virtual scene according to the display orientation.
In one possible implementation, after receiving the virtual mark, the target position, and the display orientation sent by the server, the terminal may display the virtual mark in the display orientation at the target position of the virtual scene. The terminal may then maintain the display of the virtual mark at the target position, and virtual objects controlled by other users can see it as they pass the target position. Taking a MOBA-type game as an example, after the "hero" controlled by the end user defeats the "epic monster", the terminal may display a win flag at the target position. Other users, on seeing the win flag, can learn that the "hero" controlled by the end user defeated the "epic monster". Referring to fig. 11, 1101 is the "epic monster" and 1102 is the "hero" controlled by the end user. In the virtual scene shown in fig. 11, the "epic monster" 1101 has not yet been defeated by the end-user-controlled "hero" 1102. Referring to fig. 12, 1201 is the win flag and 1202 is the "hero" controlled by the end user. In the virtual scene shown in fig. 12, the "epic monster" has been defeated by the end-user-controlled "hero" 1202, and the terminal may display the win flag at 1201. Of course, the terminal may also display a win flag at the target position after the "hero" controlled by the end user defeats a "defense tower". Other users, on seeing the win flag, can learn that the "hero" controlled by the end user defeated the "defense tower". Referring to fig. 13, 1301 is the "defense tower" and 1302 is the "hero" controlled by the end user. In the virtual scene shown in fig. 13, the "defense tower" 1301 has not yet been defeated by the end-user-controlled "hero" 1302. Referring to fig. 14, 1401 is the win flag and 1402 is the "hero" controlled by the end user. In the virtual scene shown in fig. 14, the "defense tower" has been defeated by the end-user-controlled "hero" 1402, and the terminal may display the win flag at 1401.
In this implementation, other users can determine whether the first virtual object is defeated by observing the virtual mark, and the efficiency of human-computer interaction is higher compared with a simple notification mode.
808. In response to the remaining ability value meeting the second target condition, the terminal stops displaying the virtual mark.
In one possible implementation, the server may detect the remaining ability value of the first virtual object in real time, and when the remaining ability value is greater than the first target ability value, the server may determine that the first virtual object is in an active state. The server may send the terminal an instruction to stop displaying the virtual mark; the instruction may carry an identifier of the virtual mark, and the terminal stops displaying the virtual mark corresponding to that identifier. Taking a MOBA-type game as an example, after the first virtual object, a "monster", revives, the server may determine that the "monster" is in an active state and issue the instruction to the terminal, and the terminal receives the instruction and stops displaying the virtual mark. That is, the terminal can control the virtual mark displayed at the target position so that it is no longer displayed. In this implementation, the revival of the "monster" means that the "hero" controlled by the end user and the "monster" can engage in a new confrontation. By no longer displaying the virtual mark, other users can learn that their "heroes" can be controlled to confront the first virtual object, the "monster", and the efficiency of human-computer interaction is higher.
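The visibility rule implied by steps 802 and 808 can be sketched as a single threshold test. The function name and the use of a less-than-or-equal comparison are assumptions; the embodiments describe the two conditions separately.

```python
def mark_visible(remaining_ability_value, first_target_ability_value):
    """Hypothetical sketch: the mark is shown once the first virtual object's
    remaining ability value drops to or below the first target ability value
    (step 802), and hidden again once it rises back above it, e.g. after the
    'monster' revives (step 808)."""
    return remaining_ability_value <= first_target_ability_value
```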
As an alternative to step 808, the terminal may also stop displaying the virtual mark after a target time interval. In this implementation, display resources of the terminal can be saved.
By the method provided in the embodiments of the present application, after a user completes a certain virtual event, for example, defeating or destroying a certain first virtual object, a virtual mark can be set at a target position near the defeated or destroyed first virtual object. When virtual objects controlled by other users approach the target position, those users can observe the virtual mark, thereby learning that the virtual event has been completed, and can determine which user completed the virtual event according to the style of the virtual mark, improving the efficiency of human-computer interaction. In addition, after the remaining ability value of the first virtual object meets the second target condition, the terminal stops displaying the virtual mark, so that when virtual objects controlled by other users approach the target position, those users can determine that the virtual event has not been completed, and the efficiency of human-computer interaction is higher.
Fig. 15 is a flowchart of a display method of a virtual mark according to an embodiment of the present application, and referring to fig. 15, the method includes:
1501. In response to antagonistic behavior occurring between a controlled virtual object in a virtual scene and a first virtual object, a remaining ability value of the first virtual object is determined, the controlled virtual object being a virtual object controlled by an end user.
1502. In response to the remaining ability value meeting the first target condition, a virtual mark of the controlled virtual object is acquired, the virtual mark being used for indicating that the first virtual object has been defeated by the controlled virtual object.
1503. The virtual mark is displayed at the target position of the virtual scene.
By the method provided in the embodiments of the present application, after a user completes a certain virtual event, for example, defeating or destroying a certain first virtual object, a virtual mark can be set at a target position near the defeated or destroyed first virtual object. When virtual objects controlled by other users approach the target position, those users can observe the virtual mark, thereby learning that the virtual event has been completed, and can determine which user completed the virtual event according to the style of the virtual mark, improving the efficiency of human-computer interaction.
In one possible embodiment, before displaying the virtual marker at the target position of the virtual scene, the method further comprises:
first position information of a controlled virtual object in a virtual scene and second position information of the first virtual object in the virtual scene are obtained.
And acquiring the distance between the controlled virtual object and the first virtual object based on the first position information and the second position information.
In response to the distance being within the target distance range, the step of displaying a virtual marker at the target location of the virtual scene is performed.
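The distance check above can be sketched as follows. This is a minimal illustration only: the 2D coordinate format, the function name, and the concrete target distance range are assumptions not fixed by the embodiments.

```python
import math

# Hypothetical target distance range, in scene units.
TARGET_DISTANCE_RANGE = (0.0, 10.0)

def within_target_distance(first_position, second_position,
                           distance_range=TARGET_DISTANCE_RANGE):
    """Return True when the distance between the controlled virtual object
    (first position information) and the first virtual object (second
    position information) falls within the target distance range, so that
    the step of displaying the virtual mark may be performed."""
    dx = first_position[0] - second_position[0]
    dy = first_position[1] - second_position[1]
    lo, hi = distance_range
    return lo <= math.hypot(dx, dy) <= hi
```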
In one possible embodiment, before displaying the virtual marker at the target position of the virtual scene, the method further comprises:
and detecting whether the controlled virtual object generates antagonistic behavior with other virtual objects in real time, and responding to the fact that the controlled virtual object does not generate antagonistic behavior with other virtual objects within the target time length, and executing the step of displaying the virtual mark on the target position of the virtual scene.
In one possible embodiment, before displaying the virtual marker at the target position of the virtual scene, the method further comprises:
and determining the target position according to the distance between the controlled virtual object and the first virtual object.
In one possible embodiment, determining the target position based on the distance between the controlled virtual object and the first virtual object comprises:
and determining a first area, wherein the center of the first area is the position of the first virtual object in the virtual scene, and the radius is the first target radius.
In response to the distance being smaller than the first target radius, a ray is determined that takes the position of the controlled virtual object as its endpoint and passes through the position of the first virtual object.
The intersection of the ray and the first region boundary line is determined as the target position.
In one possible embodiment, determining the target position based on the distance between the controlled virtual object and the first virtual object comprises:
and determining a first area, wherein the center of the first area is the position in the virtual scene of the first virtual object, and the radius is the first target radius.
And determining a straight line passing through the position of the controlled virtual object and the position of the first virtual object in response to the distance being greater than the first target radius and smaller than the second target radius. The second target radius is greater than the first target radius.
The intersection of the straight line and the first area boundary line is determined as the target position.
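The two target-position embodiments above (distance smaller than the first target radius, and distance between the first and second target radii) can be sketched with a single helper. This is illustrative only: the 2D coordinate format and function name are assumptions, and in the second case the intersection point on the controlled virtual object's side of the circle is chosen, which the embodiments do not specify.

```python
import math

def target_position(controlled_pos, object_pos, r1, r2):
    """Place the virtual mark on the boundary of the first area: a circle
    of radius r1 centred at the first virtual object's position."""
    cx, cy = controlled_pos
    ox, oy = object_pos
    dx, dy = ox - cx, oy - cy
    dist = math.hypot(dx, dy)
    if dist == 0:
        return None                        # degenerate: positions coincide
    ux, uy = dx / dist, dy / dist          # unit vector, controlled -> object
    if dist < r1:
        # Controlled object is inside the circle: the ray from it through
        # the first virtual object exits the circle r1 beyond the centre.
        return (ox + ux * r1, oy + uy * r1)
    if dist < r2:
        # Controlled object is between the two radii: the connecting line
        # crosses the circle at centre +/- r1; take the near-side point.
        return (ox - ux * r1, oy - uy * r1)
    return None                            # beyond the second target radius
```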
In one possible embodiment, before displaying the virtual marker at the target position of the virtual scene, the method further comprises:
and determining the target flying speed of the virtual marker according to the distance between the controlled virtual object and the first virtual object, wherein the target flying speed is positively correlated with the distance between the controlled virtual object and the first virtual object.
Displaying a virtual marker at a target location of a virtual scene includes:
and displaying a flying process of the virtual marker flying from the controlled virtual object to the target position at the target flying speed on the virtual scene.
In one possible embodiment, after displaying the virtual marker at the target position of the virtual scene, the method further comprises:
in response to the remaining capacity value meeting the second target condition, the virtual mark is not displayed.
In one possible embodiment, before displaying the virtual marker at the target position of the virtual scene, the method further comprises:
and acquiring the target position of the virtual mark displayed in the virtual scene according to the identifier of the first virtual object.
In one possible embodiment, before displaying the virtual marker at the target position of the virtual scene, the method further comprises:
and acquiring the display orientation of the virtual mark displayed in the virtual scene according to the identification of the first virtual object.
Displaying a virtual marker at a target location of a virtual scene includes:
the virtual marker is displayed at the target position of the virtual scene in the display orientation.
In one possible embodiment, before displaying the virtual marker at the target position of the virtual scene, the method further comprises:
and determining the virtual grade of the first virtual object, and responding to the virtual grade meeting the target grade condition, and displaying the virtual mark on the target position of the virtual scene.
Fig. 16 is a schematic structural diagram of a display device of a virtual mark provided in an embodiment of the present application, and referring to fig. 16, the device includes: a remaining ability value determining unit 1601, an obtaining unit 1602, and a display unit 1603.
A remaining ability value determining unit 1601, configured to determine a remaining ability value of a first virtual object in response to antagonistic behavior occurring between a controlled virtual object in a virtual scene and the first virtual object, the controlled virtual object being a virtual object controlled by an end user.
An obtaining unit 1602, configured to obtain a virtual mark of the controlled virtual object in response to the remaining ability value meeting the first target condition, where the virtual mark is used to indicate that the first virtual object has been defeated by the controlled virtual object.
A display unit 1603 for displaying a virtual marker at a target position of the virtual scene.
In one possible embodiment, the apparatus further comprises:
the position information acquisition unit is used for acquiring first position information of the controlled virtual object in the virtual scene and second position information of the first virtual object in the virtual scene.
And the distance acquisition unit is used for acquiring the distance between the controlled virtual object and the first virtual object based on the first position information and the second position information.
And a display unit for performing a step of displaying a virtual marker at a target position of the virtual scene in response to the distance being within the target distance range.
In one possible embodiment, the apparatus further comprises:
and the real-time detection unit is used for detecting whether the controlled virtual object generates antagonistic behavior with other virtual objects in real time, and responding to the fact that the controlled virtual object does not generate antagonistic behavior with other virtual objects within the target time length, and executing the step of displaying the virtual mark on the target position of the virtual scene.
In one possible embodiment, the apparatus further comprises:
and the target position determining unit is used for determining the target position according to the distance between the controlled virtual object and the first virtual object.
In a possible embodiment, the target position determining unit is configured to determine a first area, where the center of the first area is the position of the first virtual object in the virtual scene and the radius is the first target radius; in response to the distance being smaller than the first target radius, determine a ray that takes the position of the controlled virtual object as its endpoint and passes through the position of the first virtual object; and determine the intersection of the ray and the boundary line of the first area as the target position.
In a possible embodiment, the target position determining unit is further configured to determine a first area, a center of the first area is a position in the virtual scene of the first virtual object, and the radius is a first target radius. And determining a straight line passing through the position of the controlled virtual object and the position of the first virtual object in response to the distance being greater than the first target radius and smaller than the second target radius. The second target radius is greater than the first target radius. The intersection of the straight line and the first area boundary line is determined as the target position.
In one possible embodiment, the apparatus further comprises:
and the target flying speed determining unit is used for determining the target flying speed of the virtual mark according to the distance between the controlled virtual object and the first virtual object, and the target flying speed is positively correlated with the distance between the controlled virtual object and the first virtual object.
The display unit is also used for displaying the flight process of the virtual marker flying from the controlled virtual object to the target position at the target flying speed on the virtual scene.
In one possible embodiment, the display unit is further configured to not display the virtual mark in response to the remaining ability value satisfying the second target condition.
In a possible implementation manner, the target position determining unit is further configured to obtain a target position where the virtual marker is displayed in the virtual scene according to the identifier of the first virtual object.
In one possible embodiment, the apparatus further comprises:
and the display orientation determining unit is used for acquiring the display orientation of the virtual mark displayed in the virtual scene according to the identification of the first virtual object.
And the display unit is used for displaying the virtual mark on the target position of the virtual scene according to the display orientation.
In one possible embodiment, the apparatus further comprises:
and a virtual grade determining unit for determining a virtual grade of the first virtual object, and in response to the virtual grade meeting the target grade condition, executing the step of displaying a virtual mark on the target position of the virtual scene.
By the device provided in the embodiments of the present application, after a user completes a certain virtual event, for example, defeating or destroying a certain first virtual object, a virtual mark can be set at a target position near the defeated or destroyed first virtual object. When virtual objects controlled by other users approach the target position, those users can observe the virtual mark, thereby learning that the virtual event has been completed, and can determine which user completed the virtual event according to the style of the virtual mark, improving the efficiency of human-computer interaction. In addition, after the remaining ability value of the first virtual object meets the second target condition, the terminal stops displaying the virtual mark, so that when virtual objects controlled by other users approach the target position, those users can determine that the virtual event has not been completed, and the efficiency of human-computer interaction is higher.
The computer device in the embodiment of the present application may be implemented as a terminal, and a structure of the terminal is described first.
Fig. 17 is a schematic structural diagram of a terminal according to an embodiment of the present application. The terminal 1700 may be: a smartphone, a tablet, a laptop, or a desktop computer. Terminal 1700 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and the like.
In general, terminal 1700 includes: one or more processors 1701 and one or more memories 1702.
The processor 1701 may include one or more processing cores, such as 4-core processors, 8-core processors, and the like. The processor 1701 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1701 may also include a main processor, which is a processor for Processing data in an awake state, also called a Central Processing Unit (CPU), and a coprocessor; a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1701 may be integrated with a GPU (Graphics Processing Unit) that is responsible for rendering and rendering content that the display screen needs to display. In some embodiments, the processor 1701 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 1702 may include one or more computer-readable storage media, which may be non-transitory. The memory 1702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 1702 is used to store at least one instruction for execution by the processor 1701 to implement the method of displaying virtual badges provided by the method embodiments of the present application.
In some embodiments, terminal 1700 may also optionally include: a peripheral interface 1703 and at least one peripheral. The processor 1701, memory 1702 and peripheral interface 1703 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1703 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuit 1704, display screen 1705, camera 1706, audio circuit 1707, positioning component 1708, and power source 1709.
The peripheral interface 1703 may be used to connect at least one peripheral associated with I/O (Input/Output) to the processor 1701 and the memory 1702. In some embodiments, the processor 1701, memory 1702, and peripheral interface 1703 are integrated on the same chip or circuit board; in some other embodiments, any one or both of the processor 1701, the memory 1702, and the peripheral interface 1703 may be implemented on separate chips or circuit boards, which are not limited in this embodiment.
The Radio Frequency circuit 1704 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1704 communicates with a communication network and other communication devices via electromagnetic signals. The rf circuit 1704 converts the electrical signal into an electromagnetic signal for transmission, or converts the received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1704 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1704 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 1704 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1705 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1705 is a touch display screen, the display screen 1705 also has the ability to capture touch signals on or above the surface of the display screen 1705. The touch signal may be input as a control signal to the processor 1701 for processing. At this point, the display 1705 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, display 1705 may be one, providing the front panel of terminal 1700; in other embodiments, display 1705 may be at least two, each disposed on a different surface of terminal 1700 or in a folded design; in still other embodiments, display 1705 may be a flexible display disposed on a curved surface or a folded surface of terminal 1700. Even further, the display screen 1705 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The Display screen 1705 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 1706 is used to capture images or video. Optionally, camera assembly 1706 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1706 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 1707 may include a microphone and a speaker. The microphone collects sound waves from the user and the environment, converts them into electrical signals, and inputs them to the processor 1701 for processing, or to the radio frequency circuit 1704 for voice communication. For stereo capture or noise reduction, multiple microphones may be provided, each at a different location on the terminal 1700. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker converts electrical signals from the processor 1701 or the radio frequency circuit 1704 into sound waves. The speaker may be a traditional diaphragm speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can not only convert an electrical signal into sound waves audible to humans, but also convert an electrical signal into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 1707 may also include a headphone jack.
The positioning component 1708 is used to determine the current geographic location of the terminal 1700 to implement navigation or LBS (Location Based Service). The positioning component 1708 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 1709 is used to power the various components in the terminal 1700. The power supply 1709 may use alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 1709 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging, and may also support fast-charging technology.
In some embodiments, the terminal 1700 also includes one or more sensors 1710. The one or more sensors 1710 include, but are not limited to: an acceleration sensor 1711, a gyro sensor 1712, a pressure sensor 1713, a fingerprint sensor 1714, an optical sensor 1715, and a proximity sensor 1716.
The acceleration sensor 1711 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with the terminal 1700. For example, the acceleration sensor 1711 may be used to detect the components of gravitational acceleration along the three coordinate axes. The processor 1701 may control the display screen 1705 to display the user interface in a landscape or portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1711. The acceleration sensor 1711 may also be used to collect game or user motion data.
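The landscape/portrait decision described above can be sketched as follows; the function name and the simple axis comparison are illustrative assumptions, not the terminal's actual firmware logic.

```python
def choose_orientation(ax, ay, az):
    """Pick a UI orientation from the gravity components on the three
    coordinate axes, as described for the acceleration sensor 1711.
    (The function name and the axis comparison are illustrative.)"""
    # Upright device: gravity dominates the y axis -> portrait view.
    # Device held sideways: gravity dominates the x axis -> landscape view.
    return "portrait" if abs(ay) >= abs(ax) else "landscape"

print(choose_orientation(0.1, 9.7, 0.3))   # upright -> portrait
print(choose_orientation(9.6, 0.2, 0.5))   # sideways -> landscape
```

A production implementation would additionally debounce the signal so the UI does not flip while the device lies flat (small `ax`, `ay`, large `az`).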
The gyro sensor 1712 may detect the body orientation and rotation angle of the terminal 1700 and may cooperate with the acceleration sensor 1711 to capture the user's 3D motion of the terminal 1700. Based on the data collected by the gyro sensor 1712, the processor 1701 may implement functions such as motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 1713 may be disposed on a side frame of the terminal 1700 and/or under the display screen 1705. When the pressure sensor 1713 is disposed on a side frame, it can detect the user's grip signal on the terminal 1700, and the processor 1701 performs left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 1713. When the pressure sensor 1713 is disposed under the display screen 1705, the processor 1701 controls operability controls on the UI according to the user's pressure operation on the display screen 1705. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1714 is used to capture the user's fingerprint; the user's identity is then identified either by the processor 1701 based on the fingerprint captured by the fingerprint sensor 1714, or by the fingerprint sensor 1714 itself based on the captured fingerprint. Upon identifying the user's identity as trusted, the processor 1701 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1714 may be disposed on the front, back, or side of the terminal 1700. When a physical key or vendor logo is provided on the terminal 1700, the fingerprint sensor 1714 may be integrated with the physical key or vendor logo.
The optical sensor 1715 is used to collect the ambient light intensity. In one embodiment, the processor 1701 may control the display brightness of the display screen 1705 based on the ambient light intensity collected by the optical sensor 1715: when the ambient light intensity is high, the display brightness of the display screen 1705 is increased; when the ambient light intensity is low, the display brightness is reduced. In another embodiment, the processor 1701 may also dynamically adjust the shooting parameters of the camera assembly 1706 according to the ambient light intensity collected by the optical sensor 1715.
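The brightness rule above (raise the display brightness in bright surroundings, lower it in dim ones) could be realized with a simple mapping; the linear form, the clamping range, and the lux saturation point below are assumptions for illustration.

```python
def adjust_brightness(ambient_lux, min_b=0.2, max_b=1.0, full_lux=1000.0):
    """Map ambient light intensity to a display brightness in [min_b, max_b].
    The linear mapping and the full_lux saturation point are illustrative."""
    ratio = min(max(ambient_lux / full_lux, 0.0), 1.0)  # clamp to [0, 1]
    return min_b + (max_b - min_b) * ratio

print(adjust_brightness(1000.0))  # bright environment -> full brightness 1.0
print(adjust_brightness(0.0))     # dark environment  -> floor brightness 0.2
```

Real devices typically apply hysteresis and a perceptual (non-linear) curve on top of such a mapping so the backlight does not visibly oscillate.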
The proximity sensor 1716, also known as a distance sensor, is typically disposed on the front panel of the terminal 1700 and is used to measure the distance between the user and the front face of the terminal 1700. In one embodiment, when the proximity sensor 1716 detects that the distance between the user and the front face of the terminal 1700 is gradually decreasing, the processor 1701 controls the display screen 1705 to switch from the bright-screen state to the dark-screen state; when the proximity sensor 1716 detects that the distance is gradually increasing, the processor 1701 controls the display screen 1705 to switch from the dark-screen state back to the bright-screen state.
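The proximity-driven screen switching can be sketched as a small state rule; the function below is an illustrative assumption, not the terminal's actual control code.

```python
def next_screen_state(prev_distance, distance, state):
    """Switch the screen off as the user approaches the front face and
    back on as the user moves away (names and states are illustrative)."""
    if distance < prev_distance:   # user approaching, e.g. raising to the ear
        return "dark"
    if distance > prev_distance:   # user moving away from the screen
        return "bright"
    return state                   # unchanged distance: keep current state

print(next_screen_state(10.0, 2.0, "bright"))  # approaching -> dark
print(next_screen_state(2.0, 10.0, "dark"))    # receding    -> bright
```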
Those skilled in the art will appreciate that the architecture shown in fig. 17 is not intended to be limiting with respect to terminal 1700, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
The computer device in the embodiment of the present application may be implemented as a server, and a structure of the server is described below.
Fig. 18 is a schematic structural diagram of a server according to an embodiment of the present application. The server 1800 may vary considerably in configuration or performance, and may include one or more processors (CPUs) 1801 and one or more memories 1802, where at least one instruction is stored in the one or more memories 1802 and is loaded and executed by the one or more processors 1801 to implement the methods provided by the foregoing method embodiments. Of course, the server 1800 may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for performing input and output, and may include other components for implementing device functions, which are not described herein again.
In an exemplary embodiment, there is also provided a computer readable storage medium, such as a memory, comprising instructions executable by a processor to perform the method of displaying a virtual mark in the above embodiments. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (13)

1. A method of displaying a virtual marker, the method comprising:
determining a residual capacity value of a first virtual object in response to an adversarial action occurring between a controlled virtual object in a virtual scene and the first virtual object, wherein the controlled virtual object is a virtual object controlled by a terminal user;
in response to the residual capacity value meeting a first target condition, acquiring a virtual mark of the controlled virtual object, wherein the virtual mark is used to indicate that the first virtual object has been defeated by the controlled virtual object;
determining a first area, wherein the center of the first area is the position of the first virtual object in the virtual scene, and the radius of the first area is a first target radius;
in response to the distance between the controlled virtual object and the first virtual object being greater than the first target radius and smaller than a second target radius, determining a straight line passing through the position of the controlled virtual object and the position of the first virtual object, wherein the second target radius is greater than the first target radius;
determining an intersection point of the straight line and the boundary line of the first area as a target position;
displaying the virtual mark on a target position of a virtual scene, and maintaining the display of the virtual mark on the target position.
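As a non-limiting illustration, the target position recited in the steps above (the intersection of the boundary circle of the first area with the line from the first virtual object toward the controlled virtual object) can be computed as follows; the coordinates and radius values are hypothetical.

```python
import math

def target_position(controlled, first_obj, first_radius):
    """Intersection of the boundary circle (center: the first virtual
    object's position, radius: the first target radius) with the line
    toward the controlled virtual object; the virtual mark is displayed
    at this point. Assumes the distance between the objects is positive."""
    cx, cy = controlled
    fx, fy = first_obj
    dx, dy = cx - fx, cy - fy
    dist = math.hypot(dx, dy)
    # Walk from the first object's position toward the controlled object
    # until reaching the boundary of the first area.
    return (fx + dx / dist * first_radius, fy + dy / dist * first_radius)

# Controlled object 10 units east of the first object, first target radius 4:
print(target_position((10.0, 0.0), (0.0, 0.0), 4.0))  # (4.0, 0.0)
```

Placing the mark on the boundary nearest the controlled virtual object keeps it visible between the two positions rather than on top of either object.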
2. The method of claim 1, wherein prior to displaying the virtual marker at the target location of the virtual scene, the method further comprises:
acquiring first position information of the controlled virtual object in the virtual scene and second position information of the first virtual object in the virtual scene;
acquiring a distance between the controlled virtual object and the first virtual object based on the first position information and the second position information;
and in response to the distance being within a target distance range, performing the step of displaying the virtual marker at a target position in the virtual scene.
3. The method of claim 1, wherein prior to displaying the virtual marker at the target location of the virtual scene, the method further comprises:
and detecting in real time whether the controlled virtual object engages in adversarial actions with other virtual objects, and in response to the controlled virtual object not engaging in any adversarial action with other virtual objects within a target time length, performing the step of displaying the virtual mark at the target position of the virtual scene.
4. The method of claim 1, further comprising:
in response to the distance being smaller than the first target radius, determining a ray that takes the position of the controlled virtual object as its endpoint and passes through the position of the first virtual object;
and determining the intersection point of the ray and the boundary line of the first area as the target position.
5. The method of claim 1, wherein prior to displaying the virtual marker at the target location of the virtual scene, the method further comprises:
determining a target flight speed of the virtual marker according to a distance between the controlled virtual object and the first virtual object, wherein the target flight speed is positively correlated with the distance between the controlled virtual object and the first virtual object;
the displaying the virtual marker at the target position of the virtual scene comprises:
and displaying a flying process of the virtual marker flying from the controlled virtual object to the target position at a target flying speed on the virtual scene.
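The positive correlation between flight speed and distance recited in claim 5 admits many concrete forms; a minimal sketch, assuming a simple linear relation with hypothetical constants:

```python
def target_flight_speed(distance, base_speed=2.0, gain=0.5):
    """Flight speed of the virtual mark, positively correlated with the
    distance between the controlled virtual object and the first virtual
    object. The linear form and both constants are illustrative."""
    return base_speed + gain * distance

print(target_flight_speed(10.0))  # farther first object -> faster mark: 7.0
```

One motivation for such a relation is that the flight animation then takes roughly comparable time regardless of how far the mark must travel.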
6. The method of claim 1, wherein after displaying the virtual marker at the target location of the virtual scene, the method further comprises:
in response to the residual capacity value meeting a second target condition, ceasing to display the virtual mark.
7. The method of claim 1, wherein prior to displaying the virtual marker at the target location of the virtual scene, the method further comprises:
and acquiring the target position of the virtual mark displayed in the virtual scene according to the identifier of the first virtual object.
8. The method of claim 1, wherein prior to displaying the virtual marker at the target location of the virtual scene, the method further comprises:
acquiring the display orientation of the virtual mark displayed in the virtual scene according to the identifier of the first virtual object;
the displaying the virtual marker at the target position of the virtual scene comprises:
and displaying the virtual mark on the target position of the virtual scene according to the display orientation.
9. The method of claim 1, wherein prior to displaying the virtual marker at the target location of the virtual scene, the method further comprises:
determining a virtual grade of the first virtual object, and in response to the virtual grade meeting a target grade condition, executing the step of displaying the virtual mark at a target position of the virtual scene.
10. A display apparatus for a virtual mark, the apparatus comprising:
a residual capacity value determining unit, configured to determine a residual capacity value of a first virtual object in response to an adversarial action occurring between a controlled virtual object in a virtual scene and the first virtual object, where the controlled virtual object is a virtual object controlled by a terminal user;
an obtaining unit, configured to obtain a virtual mark of the controlled virtual object in response to the residual capacity value meeting a first target condition, where the virtual mark is used to indicate that the first virtual object has been defeated by the controlled virtual object;
the display unit is used for displaying the virtual mark on a target position of a virtual scene and maintaining the display of the virtual mark on the target position;
the apparatus is further configured to:
determining a first area, wherein the center of the first area is the position of the first virtual object in the virtual scene, and the radius of the first area is a first target radius;
in response to the distance being greater than a first target radius and less than a second target radius, determining a straight line passing through the position of the controlled virtual object and the position of the first virtual object, wherein the second target radius is greater than the first target radius;
an intersection of the straight line and the first region boundary line is determined as the target position.
11. The apparatus of claim 10, further comprising:
a position information acquiring unit, configured to acquire first position information of the controlled virtual object in the virtual scene and second position information of the first virtual object in the virtual scene;
a distance acquisition unit configured to acquire a distance between the controlled virtual object and the first virtual object based on the first location information and the second location information;
a display unit, configured to execute the step of displaying the virtual mark at the target position of the virtual scene in response to the distance being within the target distance range.
12. A computer device comprising one or more processors and one or more memories having stored therein at least one instruction, the instruction being loaded and executed by the one or more processors to perform operations performed by the method of displaying a virtual mark in accordance with any one of claims 1 to 9.
13. A computer-readable storage medium having stored therein at least one instruction, which is loaded and executed by a processor to perform operations performed by the method for displaying a virtual mark according to any one of claims 1 to 9.
CN202010352056.9A 2020-04-28 2020-04-28 Virtual mark display method, device, equipment and storage medium Active CN111589113B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010352056.9A CN111589113B (en) 2020-04-28 2020-04-28 Virtual mark display method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010352056.9A CN111589113B (en) 2020-04-28 2020-04-28 Virtual mark display method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111589113A CN111589113A (en) 2020-08-28
CN111589113B true CN111589113B (en) 2021-12-31

Family

ID=72185561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010352056.9A Active CN111589113B (en) 2020-04-28 2020-04-28 Virtual mark display method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111589113B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113703571B (en) * 2021-08-24 2024-02-06 梁枫 Virtual reality man-machine interaction method, device, equipment and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103377117A (en) * 2012-04-18 2013-10-30 腾讯科技(深圳)有限公司 Automatic game testing method and automatic game testing device
CN106354418A (en) * 2016-11-16 2017-01-25 腾讯科技(深圳)有限公司 Manipulating method and device based on touch screen
CN108671543A (en) * 2018-05-18 2018-10-19 腾讯科技(深圳)有限公司 Labelled element display methods, computer equipment and storage medium in virtual scene
CN111035924A (en) * 2019-12-24 2020-04-21 腾讯科技(深圳)有限公司 Prop control method, device, equipment and storage medium in virtual scene
EP3541942B1 (en) * 2016-11-17 2020-07-15 Instituto Superiore Di Sanita' Conjugative vectors selectable by fructooligosaccharides

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9520027B2 (en) * 2013-05-13 2016-12-13 Universal Entertainment Corporation Gaming machine, gaming system, and gaming method
CN110737414B (en) * 2018-07-20 2021-05-11 广东虚拟现实科技有限公司 Interactive display method, device, terminal equipment and storage medium
CN110433493B (en) * 2019-08-16 2023-05-30 腾讯科技(深圳)有限公司 Virtual object position marking method, device, terminal and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103377117A (en) * 2012-04-18 2013-10-30 腾讯科技(深圳)有限公司 Automatic game testing method and automatic game testing device
CN106354418A (en) * 2016-11-16 2017-01-25 腾讯科技(深圳)有限公司 Manipulating method and device based on touch screen
EP3541942B1 (en) * 2016-11-17 2020-07-15 Instituto Superiore Di Sanita' Conjugative vectors selectable by fructooligosaccharides
CN108671543A (en) * 2018-05-18 2018-10-19 腾讯科技(深圳)有限公司 Labelled element display methods, computer equipment and storage medium in virtual scene
CN111035924A (en) * 2019-12-24 2020-04-21 腾讯科技(深圳)有限公司 Prop control method, device, equipment and storage medium in virtual scene

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
How to display the attack range in LOL (League of Legends); Tingyuzhe (听语者); "How to display the attack range in LOL (League of Legends)"; 20190411; p. 2 *
Tingyuzhe (听语者). How to set opening and victory emotes in LOL (League of Legends). "How to set opening and victory emotes in LOL (League of Legends)". 2018, *
Classic rematch of the League of Legends legends exhibition match; Anonymous; "Classic rematch of the League of Legends legends exhibition match"; 20191202; pp. 1-4 *
What to do when the League of Legends emote wheel cannot be used; Tingyuzhe (听语者); "What to do when the League of Legends emote wheel cannot be used"; 20191011; pp. 1-5 *

Also Published As

Publication number Publication date
CN111589113A (en) 2020-08-28

Similar Documents

Publication Publication Date Title
CN111589131B (en) Control method, device, equipment and medium of virtual role
CN111589124B (en) Virtual object control method, device, terminal and storage medium
CN111414080B (en) Method, device and equipment for displaying position of virtual object and storage medium
CN111589127B (en) Control method, device and equipment of virtual role and storage medium
CN111589136B (en) Virtual object control method and device, computer equipment and storage medium
CN112691370B (en) Method, device, equipment and storage medium for displaying voting result in virtual game
JP7250403B2 (en) VIRTUAL SCENE DISPLAY METHOD, DEVICE, TERMINAL AND COMPUTER PROGRAM
CN111672114B (en) Target virtual object determination method, device, terminal and storage medium
CN111589140A (en) Virtual object control method, device, terminal and storage medium
CN111596838B (en) Service processing method and device, computer equipment and computer readable storage medium
CN111672099A (en) Information display method, device, equipment and storage medium in virtual scene
CN111760278A (en) Skill control display method, device, equipment and medium
CN113058264A (en) Virtual scene display method, virtual scene processing method, device and equipment
CN113289331A (en) Display method and device of virtual prop, electronic equipment and storage medium
CN111672106B (en) Virtual scene display method and device, computer equipment and storage medium
CN113117331A (en) Message sending method, device, terminal and medium in multi-person online battle program
CN112870699A (en) Information display method, device, equipment and medium in virtual environment
CN112221142A (en) Control method and device of virtual prop, computer equipment and storage medium
CN113101656B (en) Virtual object control method, device, terminal and storage medium
CN112156471B (en) Skill selection method, device, equipment and storage medium of virtual object
CN111679879B (en) Display method and device of account segment bit information, terminal and readable storage medium
CN111651616B (en) Multimedia resource generation method, device, equipment and medium
CN111589113B (en) Virtual mark display method, device, equipment and storage medium
CN111589117A (en) Method, device, terminal and storage medium for displaying function options
CN112169321B (en) Mode determination method, device, equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40027011

Country of ref document: HK

GR01 Patent grant