CN112138390B - Object prompting method and device, computer equipment and storage medium - Google Patents


Info

Publication number: CN112138390B (granted from application CN202011017599.1A)
Authority: CN (China)
Prior art keywords: team, display, interface, virtual object, target
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202011017599.1A
Other languages: Chinese (zh)
Other versions: CN112138390A
Inventors: 高放, 林森, 汪涛
Current assignee: Tencent Technology Shenzhen Co Ltd
Original assignee: Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to: CN202210404475.1A (published as CN114949854A); CN202011017599.1A (published as CN112138390B)
Publication of CN112138390A; application granted; publication of CN112138390B

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/53: Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/537: Controlling the output signals based on the game progress involving additional visual information provided to the game scene, using indicators, e.g. showing the condition of a game character on screen

Abstract

The application relates to an object prompting method and apparatus, a computer device, and a storage medium, in the field of network technologies. The method comprises the following steps: displaying a target scene interface, wherein the target scene interface comprises a team display control; the team display control is used for triggering a team sub-interface to be displayed in an overlaid manner on the target scene interface; the team sub-interface is used for displaying the attribute information of each virtual object in the target team; generating an object prompt identifier corresponding to each virtual object based on the attribute information of each virtual object; and displaying, in an overlaid manner on the team display control, the object prompt identifier corresponding to each virtual object. By this method, the team information in the team sub-interface can be shown on the team display control itself, reducing the interface switching operations required to acquire that information, and thereby reducing the waste of terminal battery power and processing resources caused by frequent interface switching.

Description

Object prompting method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of network technologies, and in particular, to an object prompting method and apparatus, a computer device, and a storage medium.
Background
In applications supporting a virtual scene, team formation is a common function. Through it, a plurality of user accounts can form a team, each controlling its own virtual object, and the virtual objects cooperate in the virtual scene to carry out related activities.
In the related art, in a virtual scene developed for a terminal with a small screen, such as a mobile terminal, the team sub-interface usually shares a display area with other sub-interfaces; when a user wants to check the team situation, the team sub-interface must be triggered and displayed through the team display control.
However, with this approach, when the user also needs to check information on other interfaces, the interfaces must be switched back and forth; frequent interface switching makes user operation tedious and wastes the terminal's battery power and processing resources.
Disclosure of Invention
The embodiments of the application provide an object prompting method and apparatus, a computer device, and a storage medium, which can reduce the waste of terminal battery power and processing resources caused by frequent interface switching operations. The technical solution is as follows:
in one aspect, an object prompting method is provided, where the method includes:
displaying a target scene interface, wherein the target scene interface comprises a team display control; the team display control is used for triggering the target scene interface to be overlaid and displayed with a team sub-interface; the team sub-interface is used for displaying attribute information of each virtual object in the target team;
generating object prompt identifiers corresponding to the virtual objects based on the attribute information of the virtual objects;
and overlapping and displaying the object prompt identification corresponding to each virtual object on the team display control.
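The three steps above can be sketched in a few lines. All names below (the role-to-color table, `generate_prompt_identifier`, `overlay_on_team_control`) are illustrative assumptions for explanation only; the patent does not specify any implementation.

```python
def generate_prompt_identifier(attribute_info: dict) -> dict:
    """Step 2: derive a small badge (the 'object prompt identifier')
    from one virtual object's attribute information."""
    role_colors = {"tank": "blue", "healer": "green", "damage": "red"}
    return {
        "color": role_colors.get(attribute_info.get("role"), "gray"),
        "label": attribute_info.get("name", "?")[0],
    }

def overlay_on_team_control(team: list) -> list:
    """Steps 1 and 3: for each virtual object in the target team, attach
    its prompt identifier to the team display control, so the team state
    is visible without opening the team sub-interface."""
    return [generate_prompt_identifier(obj) for obj in team]

team = [{"name": "Alice", "role": "tank"}, {"name": "Bob", "role": "healer"}]
badges = overlay_on_team_control(team)
```

The key point is that the badges are derived from the same attribute information the team sub-interface would show, so no interface switch is needed to read them.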
In another aspect, an object prompting method is provided, where the method includes:
displaying a target scene interface, wherein the target scene interface comprises a team display control; the team display control is used for triggering the target scene interface to be overlaid and displayed with a team sub-interface; the team sub-interface is used for displaying attribute information of each virtual object in the target team;
in response to the target virtual object existing in the target team, overlaying and displaying an object prompt identification corresponding to the target virtual object on the team display control, wherein the object prompt identification is generated based on the attribute information of the target virtual object.
In another aspect, an object prompting apparatus is provided, the apparatus including:
the interface display module is used for displaying a target scene interface, and the target scene interface comprises a team display control; the team display control is used for triggering the target scene interface to be overlaid and displayed with a team sub-interface; the team sub-interface is used for displaying attribute information of each virtual object in the target team;
the identification generation module is used for generating object prompt identifications corresponding to the virtual objects based on the attribute information of the virtual objects;
and the identification display module is used for displaying the object prompt identification corresponding to each virtual object on the team display control in an overlapping mode.
In one possible implementation manner, the identifier generating module includes:
the display attribute determining submodule is used for determining the display attributes of the object prompt identifiers corresponding to the virtual objects according to the mapping relation between the attribute information and the display attributes of the object prompt identifiers;
and the identifier generation submodule is used for generating the object prompt identifier corresponding to each virtual object based on the display attribute of the object prompt identifier corresponding to each virtual object.
In one possible implementation manner, the display attribute of the object prompt identifier includes: at least one of the color of the object prompt identifier, the shape of the object prompt identifier, the pattern contained in the object prompt identifier, and the size of the object prompt identifier.
In one possible implementation, the attribute information includes: at least one of an object type of the corresponding virtual object, a responsibility type of the corresponding virtual object in the target team, a specified type attribute value of the corresponding virtual object, and a gain state of the corresponding virtual object.
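As a concrete illustration of the mapping relation between attribute information and display attributes described above, the following sketch uses hypothetical attribute keys and values (object type, gain state, level); none of them come from the patent text.

```python
# Hypothetical mapping table: (attribute key, attribute value) -> display attributes.
MAPPING = {
    ("object_type", "player"):    {"shape": "circle"},
    ("object_type", "npc"):       {"shape": "square"},
    ("responsibility", "leader"): {"pattern": "crown"},
    ("gain_state", "buffed"):     {"color": "gold"},
}

def display_attributes(attribute_info: dict) -> dict:
    """Merge the display attributes mapped from each piece of attribute
    information, covering color, shape, pattern, and size."""
    result = {"shape": "circle", "pattern": None, "color": "gray", "size": 16}
    for key, value in attribute_info.items():
        result.update(MAPPING.get((key, value), {}))
    # A value-dependent attribute: scale badge size with a "specified type
    # attribute value" such as level (illustrative rule).
    if "level" in attribute_info:
        result["size"] = 16 + min(attribute_info["level"], 8)
    return result
```

For example, an NPC with a gain state and level 3 would get a square, gold, size-19 badge under this toy mapping.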
In a possible implementation manner, identifier display positions are marked on the team display control, and the identifier display positions correspond one-to-one with the display positions of the attribute information of each virtual object included in the team sub-interface;
the identification display module comprises:
the first position obtaining sub-module is used for obtaining a first display position, and the first display position is the display position of the attribute information of the first target virtual object in the team sub-interface; the first target virtual object is any one of the respective virtual objects;
a second position obtaining sub-module, configured to obtain a second display position based on the first display position, where the second display position is the identifier display position corresponding to the first display position;
and the identification display submodule is used for displaying the object prompt identification of the first target virtual object at the second display position.
In a possible implementation manner, the team sub-interface includes n display positions of the attribute information, and each display position of the attribute information has a corresponding first type position number; the team display control is marked with n identification display positions, each identification display position is provided with a corresponding second type position number, the first type position numbers and the second type position numbers are in one-to-one correspondence, and n is a positive integer;
the second position acquisition sub-module includes:
a number acquiring unit for acquiring a first type position number of the first display position;
a second position obtaining unit, configured to obtain the second display position based on a first type position number of the first display position, where the second type position number of the second display position corresponds to the first type position number of the first display position.
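The one-to-one correspondence between first-type and second-type position numbers can be illustrated as follows; the slot coordinates and function names are hypothetical.

```python
def second_display_position(first_position_number: int, identifier_slots: list) -> tuple:
    """Given the first-type position number of a member's row in the team
    sub-interface, return the coordinates of the matching second-type
    identifier slot on the team display control (same index, by the
    one-to-one correspondence)."""
    return identifier_slots[first_position_number]

# n = 4 identifier slots laid out left to right on the control
# (illustrative coordinates).
slots = [(10 + 12 * i, 4) for i in range(4)]
pos = second_display_position(2, slots)
```

Because the numbering is one-to-one, the badge for the member shown in row 2 of the team sub-interface always lands in identifier slot 2 of the control.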
In one possible implementation, the apparatus further includes:
an identification removal module to remove the object hint identification of the first target virtual object from the team display control in response to the first target virtual object leaving the target team.
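A minimal sketch of the identifier removal behavior, assuming the badges on the team display control are tracked in a dictionary keyed by object identity (an illustrative data structure, not specified in the patent):

```python
def remove_identifier(control_badges: dict, object_id: str) -> None:
    """When a member leaves the target team, drop its object prompt
    identifier from the team display control."""
    control_badges.pop(object_id, None)  # no-op if already absent

badges = {"obj1": {"color": "blue"}, "obj2": {"color": "red"}}
remove_identifier(badges, "obj1")
```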
In a possible implementation manner, an identifier display position is marked on the team display control, and the identifier display position is fixedly set based on the attribute information of the virtual object;
the identification display module comprises:
a third position obtaining sub-module, configured to obtain, according to the attribute information of a second target virtual object, the identifier display position corresponding to an object prompt identifier of the second target virtual object, as a third display position; the second target virtual object is any one of the virtual objects, and the target object prompt identifier is an object prompt identifier generated based on the attribute information of the second target virtual object;
and the identifier display submodule is used for displaying the object prompt identifier of the second target virtual object at the third display position.
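Under this fixed placement scheme, an identifier's position is determined by the member's attribute information (for example, its team role) rather than by its row in the team sub-interface. A minimal sketch, with an illustrative role set and coordinates:

```python
# Each identifier slot is fixed per attribute value, so a member's badge
# always appears in the same place regardless of join order.
FIXED_SLOTS = {"tank": (10, 4), "healer": (22, 4), "damage": (34, 4)}

def third_display_position(attribute_info: dict) -> tuple:
    """Look up the fixed identifier display position for a member based
    on its attribute information (here, the hypothetical 'role' key)."""
    return FIXED_SLOTS[attribute_info["role"]]
```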
In a possible implementation manner, the target scene interface includes at least one other display control except the team display control, and a display area of the display sub-interface corresponding to the other display control overlaps with a display area of the team sub-interface corresponding to the team display control;
the group display control is not overlapped with the other display controls, the group display control is not overlapped with a sub-interface display area, and the sub-interface display area comprises a display area corresponding to the group sub-interface and a display area corresponding to the other display controls and displaying the sub-interface.
In another aspect, an object prompting apparatus is provided, the apparatus including:
the interface display module is used for displaying a target scene interface, and the target scene interface comprises a team display control; the team display control is used for triggering the target scene interface to be overlaid and displayed with a team sub-interface; the team sub-interface is used for displaying attribute information of each virtual object in the target team;
and the identification display module is used for responding to the existence of a target virtual object in the target team, and overlaying and displaying an object prompt identification corresponding to the target virtual object on the team display control, wherein the object prompt identification is generated based on the attribute information of the target virtual object.
In a possible implementation manner, the target scene interface includes at least one other display control except the team display control, and a display area of the display sub-interface corresponding to the other display control overlaps with a display area of the team sub-interface corresponding to the team display control; the device further comprises:
the first display module is used for responding to the received touch operation based on the team display control and displaying the team sub-interface in a display area corresponding to the team display control;
the second display module is used for displaying a display sub-interface corresponding to a first display control in a display area corresponding to the first display control in response to receiving a touch operation based on the first display control, wherein the first display control is any one of at least one other display control;
the display control of the group is not overlapped with the display area of the sub-interface, and the display area of the sub-interface comprises the display area corresponding to the sub-interface of the group and the display area corresponding to the display sub-interface of the other display control.
In another aspect, a computer device is provided, the computer device comprising a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the above object prompting method.
In another aspect, a computer-readable storage medium is provided, in which at least one instruction, at least one program, a code set, or an instruction set is stored, which is loaded and executed by a processor to implement the above object prompting method.
In another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the object prompting method provided in the above-mentioned various alternative implementations.
The technical scheme provided by the application can comprise the following beneficial effects:
By displaying, in an overlaid manner on the team display control that triggers the team sub-interface, the object prompt identifiers indicating the attribute information of each virtual object in that sub-interface, the team information can be shown directly on the team display control; the interface switching operations required to acquire the team information are reduced, and the waste of terminal battery power and processing resources caused by frequent interface switching is reduced.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1 illustrates a block diagram of a computer system provided in an exemplary embodiment of the present application;
FIG. 2 is a schematic diagram illustrating an object prompting system provided by an exemplary embodiment of the present application;
FIG. 3 is a diagram illustrating object prompting in the related art provided by an exemplary embodiment of the present application;
FIG. 4 illustrates a flowchart of an object prompting method provided by an exemplary embodiment of the present application;
FIG. 5 is a flowchart illustrating an object prompting method provided by an exemplary embodiment of the present application;
FIG. 6 illustrates a schematic diagram of a target scene interface shown in an exemplary embodiment of the present application;
FIG. 7 illustrates a schematic diagram of a target scene interface shown in an exemplary embodiment of the present application;
FIG. 8 is a diagram illustrating an object prompt identifier shown in an exemplary embodiment of the present application;
FIG. 9 is a diagram illustrating an object prompt identifier shown in an exemplary embodiment of the present application;
FIG. 10 illustrates a schematic diagram of an identifier display position shown in an exemplary embodiment of the present application;
FIG. 11 is a flowchart illustrating the locating of the correspondence between attribute information display positions and identifier display positions according to an exemplary embodiment of the present application;
FIG. 12 is a flowchart illustrating the locating of the correspondence between attribute information display positions and identifier display positions according to an exemplary embodiment of the present application;
FIG. 13 illustrates a schematic diagram of a target scene interface shown in an exemplary embodiment of the present application;
FIG. 14 illustrates a flowchart for determining the filling state of identifier display positions as illustrated in an exemplary embodiment of the present application;
FIG. 15 is a flowchart illustrating an object prompting method provided by an exemplary embodiment of the present application;
FIG. 16 is a block diagram illustrating an object prompting apparatus according to an exemplary embodiment of the present application;
FIG. 17 is a block diagram illustrating an object prompting apparatus according to an exemplary embodiment of the present application;
FIG. 18 is a block diagram illustrating the structure of a computer device according to an exemplary embodiment;
FIG. 19 is a block diagram illustrating the structure of a computer device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It is to be understood that reference herein to "a number of" means one or more, and "a plurality of" means two or more. "And/or" describes an association between objects and indicates that three relationships are possible; for example, "A and/or B" can mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates an "or" relationship between the associated objects.
The embodiment of the application provides an object prompting method, which can improve the effect of object prompting by using a team display control. For ease of understanding, several terms referred to in this application are explained below.
1) Virtual scene
The virtual scene refers to a virtual scene displayed (or provided) when an application program runs on a terminal. The virtual scene can be a simulated environment of the real world, a semi-simulated and semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, and a three-dimensional virtual scene; the following embodiments take a three-dimensional virtual scene as an example, but are not limited thereto. Optionally, the virtual scene is also used for a virtual scene battle between at least two virtual characters. Optionally, the virtual scene has virtual resources available to at least two virtual characters. Optionally, the virtual scene includes a square map with symmetric lower-left and upper-right regions; virtual characters belonging to two opposing camps each occupy one of the regions, and destroying the target building/site/base/crystal deep in the opposing region serves as the winning objective.
2) Virtual object
A virtual object refers to a movable object in a virtual scene. The movable object may be at least one of a virtual character, a virtual animal, and an animated character. When the virtual scene is a three-dimensional virtual scene, the virtual object may be a three-dimensional model. Each virtual object has its own shape and volume in the three-dimensional virtual scene, occupying a portion of the space in it. Optionally, the virtual character is a three-dimensional character built on three-dimensional human-skeleton technology, and achieves different appearances by wearing different skins. In some implementations, the virtual character can also be implemented with a 2.5-dimensional or 2-dimensional model, which is not limited in this application.
FIG. 1 shows a block diagram of a computer system provided in an exemplary embodiment of the present application. The computer system 100 includes: a first terminal 110, a server cluster 120, a second terminal 130.
The first terminal 110 has installed and runs a client 111 supporting a virtual scene, and the client 111 may be a multiplayer online battle program. When the first terminal runs the client 111, a user interface of the client 111 is displayed on the screen of the first terminal 110. The client may be any one of an MMORPG (Massively Multiplayer Online Role-Playing Game), a MOBA (Multiplayer Online Battle Arena) game, a battle royale shooting game, and an SLG (Simulation Game). In the present embodiment, the client is an MMORPG game by way of example. The first terminal 110 is a terminal used by the first user 101, and the first user 101 uses the first terminal 110 to control a first virtual character located in the virtual scene to perform activities; the first virtual character may be referred to as the master virtual character of the first user 101. The activities of the first virtual character include, but are not limited to: adjusting body posture, crawling, walking, running, riding, flying, jumping, driving, picking up, shooting, attacking, and throwing. Illustratively, the first virtual character is a character such as a simulated persona or an animated persona.
The second terminal 130 has installed and runs a client 131 supporting a virtual scene, and the client 131 may be a multiplayer online battle program. When the second terminal 130 runs the client 131, a user interface of the client 131 is displayed on the screen of the second terminal 130. The client may be any one of an MMORPG game, a MOBA game, a battle royale shooting game, and an SLG game; in this embodiment, the client is an MMORPG game by way of example. The second terminal 130 is a terminal used by the second user 102, and the second user 102 uses the second terminal 130 to control a second virtual character located in the virtual scene to perform activities; the second virtual character may be referred to as the master virtual character of the second user 102. Illustratively, the second virtual character is a character such as a simulated persona or an animated persona.
Optionally, the first virtual character and the second virtual character are in the same virtual scene. Optionally, the first virtual character and the second virtual character may belong to the same camp, the same team, the same organization, a friend relationship, or a temporary communication right. Alternatively, the first virtual character and the second virtual character may belong to different camps, different teams, different organizations, or have a hostile relationship.
Optionally, the clients installed on the first terminal 110 and the second terminal 130 are the same, or are the same type of client on different operating system platforms (Android or iOS). The first terminal 110 may generally refer to one of a plurality of terminals, and the second terminal 130 to another; this embodiment is only illustrated with the first terminal 110 and the second terminal 130. The device types of the first terminal 110 and the second terminal 130 are the same or different, and include: at least one of a smartphone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, a laptop computer, and a desktop computer.
Only two terminals are shown in fig. 1, but there are a plurality of other terminals 140 that may access the server cluster 120 in different embodiments. Optionally, one or more terminals 140 are terminals corresponding to the developer, a development and editing platform for a client of the virtual scene is installed on the terminal 140, the developer can edit and update the client on the terminal 140, and transmit the updated client installation package to the server cluster 120 through a wired or wireless network, and the first terminal 110 and the second terminal 130 can download the client installation package from the server cluster 120 to update the client.
The first terminal 110, the second terminal 130, and the other terminals 140 are connected to the server cluster 120 through a wireless network or a wired network.
The server cluster 120 includes at least one of an independent physical server, a plurality of independent physical servers, a cloud server providing cloud computing services, a cloud computing platform, and a virtualization center. The server cluster 120 is used for providing background services for clients supporting three-dimensional virtual scenes. Optionally, the server cluster 120 undertakes primary computing work and the terminals undertake secondary computing work; or, the server cluster 120 undertakes the secondary computing work, and the terminal undertakes the primary computing work; alternatively, the server cluster 120 and the terminal perform cooperative computing by using a distributed computing architecture.
In one illustrative example, the server cluster 120 includes a server 121 and a server 126, where the server 121 includes a processor 122, a user account database 123, a battle service module 124, and a user-oriented Input/Output Interface (I/O Interface) 125. The processor 122 is configured to load instructions stored in the server 121 and process data in the user account database 123 and the battle service module 124; the user account database 123 is configured to store data of the user accounts used by the first terminal 110, the second terminal 130, and the other terminals 140, such as the avatar, nickname, and combat power index of each user account and the service region it belongs to; the battle service module 124 is configured to provide a plurality of battle rooms, such as 1v1, 3v3, and 5v5 battles, for users to fight in; the user-facing I/O interface 125 is used to establish communication with the first terminal 110 and/or the second terminal 130 through a wireless or wired network to exchange data. Optionally, an intelligent signal module 127 is disposed in the server 126, and the intelligent signal module 127 is used to implement the object prompting method provided in the following embodiments.
Fig. 2 is a schematic diagram illustrating an object prompting system according to an exemplary embodiment of the present application. As shown in fig. 2, the object prompting system 20 includes a sending end 21 and a receiving end 22. The sending end may include an obtaining module 211 and a sending module 212. The obtaining module 211 is configured to obtain the team data of the first virtual object corresponding to the first terminal, and the sending module 212 is configured to establish a data synchronization transmission channel with the receiving end 22. The receiving end 22 includes a receiving module 221 and an analyzing module 222. The receiving module 221 is configured to establish a data synchronization transmission channel with the sending end 21, and the analyzing module 222 is configured to parse the team data of the first virtual object obtained from the sending end, so as to synchronously display the team information of the first virtual object.
In a virtual scene that requires team formation, victory is obtained through cooperation between teammates, and different players must each perform the operation of joining the team, so assembling a full team often takes some time. During that time, a user may switch away from the team sub-interface into other sub-interfaces to confirm other information or perform other operations, and so cannot accurately follow the team formation situation shown in the team sub-interface. As shown in fig. 3, the schematic diagram includes a team display control 310 and a task display control 320, where different controls correspond to different display sub-interfaces; the current team size is 1 and the team size limit is 8, indicating that only one team member, namely the virtual object controlled by the user, exists in the current team, which prompts the user about the team formation situation. However, this team display control shows only the number of team members and cannot provide further team information from the team sub-interface; if the user wants more team information, the user must re-enter the team sub-interface, so the object-prompting effect of the team display control is poor. As a result, the user has to switch back and forth between different sub-interfaces during team formation to determine the team information, causing repeated rendering of the interface and a waste of terminal resources.
Fig. 4 is a flowchart illustrating an object prompting method according to an exemplary embodiment of the present application. The object prompting method may be executed by a terminal, where the terminal may be the terminal shown in fig. 1. As shown in fig. 4, the object prompting method includes the following steps:
Step 410, displaying a target scene interface, wherein the target scene interface comprises a team display control; the team display control is used for triggering superimposed display of a team sub-interface on the target scene interface; the team sub-interface is used for displaying the attribute information of each virtual object in the target team.
In a possible implementation manner, the display area of the team display control is different from that of the team sub-interface, that is, when the team sub-interface is displayed in an overlaid manner on the target scene interface, the team display control is not covered by the team sub-interface, and a user can observe the team sub-interface and the team display control in the target scene interface at the same time.
In a possible implementation manner, the attribute information of each virtual object displayed in the team sub-interface is real-time attribute information of each virtual object, and the first terminal displays the attribute information of the virtual object corresponding to each terminal in the same team in the team sub-interface based on the acquired attribute information of the virtual object in the first terminal and the attribute information of the virtual object in the second terminal received from the server.
The process of the first terminal acquiring the attribute information of the virtual object in the second terminal is realized as follows: the second terminal acquires the attribute information of the corresponding virtual object in real time and sends the attribute information to the first terminal through the server, and after the first terminal receives the attribute information of the virtual object corresponding to the second terminal, the attribute information of the virtual object corresponding to the second terminal is updated in real time.
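The relay described above, in which the second terminal reports its virtual object's attributes and the server forwards them to the first terminal, can be sketched as follows. This is a minimal illustration only, and all class and method names (Server, Terminal, report_attributes, receive_update) are assumptions for the sketch, not part of the disclosed embodiment.

```python
class Server:
    """Relays attribute updates from one terminal to the other terminals in the team."""

    def __init__(self):
        self.terminals = []

    def register(self, terminal):
        self.terminals.append(terminal)

    def broadcast(self, sender, object_id, attributes):
        # Forward the update to every registered terminal except the sender.
        for t in self.terminals:
            if t is not sender:
                t.receive_update(object_id, attributes)


class Terminal:
    """Holds the locally known attribute information of each teammate's virtual object."""

    def __init__(self, server):
        self.server = server
        self.team_state = {}
        server.register(self)

    def report_attributes(self, object_id, attributes):
        # The second terminal acquires its object's attributes in real time
        # and sends them to the other terminals through the server.
        self.team_state[object_id] = attributes
        self.server.broadcast(self, object_id, attributes)

    def receive_update(self, object_id, attributes):
        # The first terminal updates the received attributes in real time.
        self.team_state[object_id] = attributes
```

Under these assumptions, once the second terminal reports an update, the first terminal's copy of the teammate's attribute information is refreshed without any action by the first terminal's user.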
Step 420, generating an object prompt identifier corresponding to each virtual object based on the attribute information of each virtual object.
In the virtual scene, the attribute information of different virtual objects is different, and the terminal can correspondingly generate different object prompt identifiers based on the attribute information of different virtual objects so as to distinguish the virtual objects. The same virtual object may have a plurality of attribute information, and an object hint identifier corresponding to the virtual object is used to represent at least one attribute information of the virtual object.
Step 430, displaying, in a superimposed manner, the object prompt identifier corresponding to each virtual object on the team display control.
In a possible implementation manner, an object prompt identifier display area is preset on the team display control, and the object prompt identifier is limited to be displayed in the object prompt identifier display area, so that the object prompt identifier can prompt the attribute information of the virtual object of the team sub-interface content on the premise of not influencing the display of the external information of the team display control.
To sum up, according to the object prompt method provided by the embodiment of the present application, the object prompt identifier indicating the attribute information of each virtual object in the team sub-interface is displayed in a superimposed manner on the team display control for triggering the team sub-interface, so that the team information in the team sub-interface can be displayed on the team display control, interface switching operations required for acquiring the team information in the team sub-interface are reduced, and waste of terminal electric quantity and processing resources due to frequent interface switching operations is reduced.
Fig. 5 is a flowchart illustrating an object prompting method according to an exemplary embodiment of the present application. The object prompting method may be executed by a terminal, where the terminal may be the terminal shown in fig. 1. As shown in fig. 5, the object prompting method includes the following steps:
Step 510, displaying a target scene interface, wherein the target scene interface comprises a team display control; the team display control is used for triggering superimposed display of a team sub-interface on the target scene interface; the team sub-interface is used for displaying the attribute information of each virtual object in the target team.
In one possible implementation, the attribute information of the virtual object includes: at least one of an object type of the corresponding virtual object, a responsibility type of the corresponding virtual object in the target team, a specified type attribute value of the corresponding virtual object, and a gain state of the corresponding virtual object.
The object type of the virtual object may be the game profession of the virtual object set by a developer in the game application, such as a warrior, a mage, a marksman, a rogue or an assassin, a healer or a support, and the like. The warrior is generally responsible for absorbing damage in the team; the mage is mainly responsible for ranged magic output in the team; the marksman is mainly responsible for ranged physical attacks in the team, with a high attack speed and high sustained damage; the rogue or the assassin is generally responsible for burst physical output in close combat, with high single-target damage and a high critical-hit probability, and serves as the core output of the team; the healer or the support provides logistics for the team, is responsible for applying buffs to teammates, has skills for removing and applying control effects, and can also restore teammates' health. The naming of game professions may vary from game application to game application, and the present application does not limit the naming of game professions, i.e., the object types of virtual objects. The responsibility type of the virtual object in the target team is divided based on the role played by the virtual object in the target team, and may include output, tank, support, and the like. The specified type attribute values of the virtual object may include a life value, a stamina value, a defense value, an attack value, etc. of the virtual object. The gain state of the virtual object refers to a state in which a buff or a debuff applied to the game character causes a specified type attribute value or capability value of the game character to continuously increase or decrease, and the gain state generally wears off with the passage of time.
In a possible implementation manner, the target scene interface includes at least one other display control except the team display control, and a display area of the display sub-interface corresponding to the other display control overlaps with a display area of the team sub-interface corresponding to the team display control;
the team display control does not overlap the other display controls, and the team display control does not overlap the sub-interface display area, where the sub-interface display area includes the display area corresponding to the team sub-interface and the display areas corresponding to the display sub-interfaces of the other display controls.
In a possible implementation manner, the display area of the team sub-interface corresponding to the team display control partially overlaps the display areas of the display sub-interfaces corresponding to the other display controls, and when a display sub-interface corresponding to another display control is displayed, the display of the team sub-interface is blocked, which affects the user's acquisition of the team information in the team sub-interface.
Or, in a possible implementation, the display area of the team sub-interface corresponding to the team display control completely overlaps the display areas of the display sub-interfaces corresponding to the other display controls. For example, in order to save interface display space, the display sub-interfaces corresponding to other display controls, such as the task display control shown in fig. 3, and the team sub-interface corresponding to the team display control share the same sub-interface display area; that is, the terminal displays the display sub-interfaces corresponding to different controls in the same sub-interface display area. When displaying, the team display control does not overlap the other display controls, and the team display control does not overlap the sub-interface display area, so as to ensure that when a display sub-interface is shown in the sub-interface display area, the team display control is always displayed within the user's field of view. Fig. 6 is a schematic diagram of a target scene interface according to an exemplary embodiment of the present application; as shown in fig. 6, the target scene interface includes a task display control 610 and a team display control 620, where the two controls correspond to the same sub-interface display area 630, and when the team display control 620 is in a selected state, the team sub-interface is displayed in the sub-interface display area. Fig. 7 is a schematic diagram illustrating a target scene interface according to an exemplary embodiment of the present application; as shown in fig. 7, when the task display control 610 is in a selected state, the task display sub-interface is displayed in the sub-interface display area. Throughout the process of changing the display sub-interface shown in the sub-interface display area, the team display control remains within the user's field of view.
Step 520, determining the display attribute of the object prompt identifier corresponding to each virtual object according to the mapping relationship between the attribute information and the display attribute of the object prompt identifier.
In one possible implementation, the display attribute of the object hint identifier includes: at least one of a color of the object cue marker, a shape of the object cue marker, a pattern contained in the object cue marker, and a size of the object cue marker.
In one possible implementation manner, based on settings made by relevant personnel, different display attributes of the object prompt identifier may be set corresponding to different attribute information of the virtual object. For example, taking the attribute information of the virtual object as the responsibility type of the virtual object in the target team, i.e., tank, output and support, and the display attribute of the object prompt identifier as the color of the object prompt identifier as an example, please refer to table 1, which shows the mapping relationship between the attribute information of the virtual object and the display attribute of the object prompt identifier:
TABLE 1

  Attribute information of virtual object    Display attribute of object prompt identifier
  Tank                                       Blue
  Support                                    Green
  Output                                     Red
As can be seen from table 1, when the attribute information of a virtual object is tank, the display attribute of the corresponding object prompt identifier is blue; when the attribute information of the virtual object is support, the display attribute of the corresponding object prompt identifier is green; and when the attribute information of the virtual object is output, the display attribute of the corresponding object prompt identifier is red.
The display attributes of the object prompt identifiers can be combined and used together at will. Taking the display attributes of the object prompt identifier as the color and the shape of the object prompt identifier as an example: when the attribute information of the virtual object is tank, the display attribute of the corresponding object prompt identifier is a blue rectangle; when the attribute information of the virtual object is support, the display attribute of the corresponding object prompt identifier is a green triangle; and when the attribute information of the virtual object is output, the display attribute of the corresponding object prompt identifier is a red circle.
In one possible implementation manner, two or more types of attribute information of the virtual object may be combined to establish a correspondence with the display attributes of the object prompt identifier. For example, a mapping relationship is established between the object type and the life value of the virtual object on one hand, and the pattern contained in the object prompt identifier and the size of the object prompt identifier on the other hand: the pattern contained in the object prompt identifier corresponds to the object type of the virtual object, and the size of the object prompt identifier corresponds to the life value of the virtual object.
It should be noted that the above description of the mapping relationship between the attribute information of the virtual object and the display attribute of the object prompt identifier is only an example; the present application does not limit the mapping relationship between the two, and other possible mappings are not listed one by one in this application.
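As a concrete illustration, the Table 1 correspondence between a responsibility type and a display attribute can be held in a simple lookup structure. This is a hedged sketch only; the dictionary and function names are assumptions, and the dictionary contents mirror Table 1.

```python
# Mapping from a virtual object's responsibility type in the target team
# to the color of its object prompt identifier, mirroring Table 1.
RESPONSIBILITY_TO_COLOR = {
    "tank": "blue",
    "support": "green",
    "output": "red",
}


def display_attribute_for(responsibility_type):
    """Look up the display attribute (here, a color) for a responsibility type."""
    return RESPONSIBILITY_TO_COLOR[responsibility_type]
```

Any of the other display attributes (shape, pattern, size) could be mapped the same way, or combined by storing a tuple such as `("blue", "rectangle")` per responsibility type.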
Step 530, generating object prompt identifiers corresponding to the virtual objects based on the display attributes of the object prompt identifiers corresponding to the virtual objects.
Taking generating an object prompt identifier based on a mapping relationship between the responsibility type of a virtual object in the target team and the pattern contained in the object prompt identifier as an example, fig. 8 illustrates a schematic diagram of the object prompt identifier according to an exemplary embodiment of the present application. In one possible implementation manner, as illustrated in fig. 8, the attribute information of each virtual object in the team sub-interface is ranked according to its responsibility type in the target team, so that a user can quickly view the responsibility type of each virtual object: the attribute information of the virtual object serving as the tank role is displayed in the first row, namely area 810; the attribute information of the virtual object serving as the support role is displayed in the second row, i.e., area 820; and the attribute information of the virtual objects serving as the output role is displayed in the third and fourth rows, that is, area 830. Object prompt identifiers containing the corresponding patterns are generated according to the responsibility types of the respective virtual objects, wherein the object prompt identifier 840 containing the first pattern corresponds to the tank role, the object prompt identifier 850 containing the second pattern corresponds to the support role, and the object prompt identifier 860 containing the third pattern corresponds to the output role.
In a possible implementation manner, the attribute information of each virtual object in the team sub-interface may also be ranked according to the order in which the virtual objects join the target team; the present application does not limit the display position of the attribute information of each virtual object in the team sub-interface.
It should be noted that the mapping relationship between the role type of the virtual object and the pattern included in the object prompt identifier is only illustrative, and the pattern type in the object prompt identifier shown in fig. 8 is also illustrative, and the present application does not limit this.
Taking generating an object prompt identifier based on a mapping relationship between the object type and the life value of a virtual object on one hand, and the pattern contained in the object prompt identifier and the size of the object prompt identifier on the other hand as an example, fig. 9 shows a schematic diagram of the object prompt identifier according to an exemplary embodiment of the present application. As shown in fig. 9, the team sub-interface displays the responsibility type of each virtual object in the target team and the specified type attribute value of each virtual object; the specified type attribute value in fig. 9 is the life value (Health Point, HP) of the virtual object. The pattern contained in the object prompt identifier is determined according to the responsibility type of each virtual object in the team, and the size of the object prompt identifier is determined according to the life value of each virtual object. Taking virtual object 1 in the team sub-interface as an example, the responsibility type of virtual object 1 in the team is tank, and the life value of virtual object 1 is half of its maximum life value; accordingly, the pattern contained in the corresponding object prompt identifier 910 is the first pattern, and the size of the object prompt identifier is half of the preset maximum size of the object prompt identifier, so as to represent the responsibility type and the current life value of virtual object 1.
In a possible case, when the display attribute of the object prompt identifier is the color of the object prompt identifier, the specified type attribute value of the virtual object or the gain state of the virtual object may be represented by changing the saturation of the color of the object prompt identifier. Taking the specified type attribute value of the virtual object as an example: the higher the specified type attribute value of the virtual object, the higher the color saturation of the object prompt identifier; the lower the specified type attribute value, the lower the color saturation of the object prompt identifier.
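The proportional scaling of identifier size and color saturation with a specified type attribute value, as described above, can be sketched as follows. The maximum size of 32 and the linear scaling are illustrative assumptions, as are the function names.

```python
# Preset maximum size of the object prompt identifier (assumed value, e.g. pixels).
MAX_SIZE = 32


def identifier_size(hp, max_hp):
    """Identifier size proportional to the current life value of the virtual object."""
    return MAX_SIZE * hp / max_hp


def identifier_saturation(value, max_value):
    """Color saturation (0.0 to 1.0) rising with the specified type attribute value."""
    return value / max_value
```

With these assumptions, the half-HP tank of fig. 9 would get an identifier of half the maximum size, matching the example in the preceding paragraph.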
Step 540, displaying, in a superimposed manner, the object prompt identifier corresponding to each virtual object on the team display control.
In a possible implementation manner, identifier display positions are marked on the team display control, and the identifier display positions correspond one-to-one to the display positions of the attribute information of each virtual object contained in the team sub-interface.
That is, the team display control is marked with identifier display positions in a preset form, and the identifier display positions are generated by the terminal based on the display positions of the attribute information of each virtual object in the team sub-interface. For example, if in the team sub-interface the responsibility type of the virtual object displayed in the first row is tank, then the responsibility type corresponding to the first row of identifier display positions is also tank.
In this case, the object prompt identifier corresponding to each virtual object is displayed in an overlapping manner on the team display control, and the method includes:
acquiring a first display position, wherein the first display position is the display position of the attribute information of the first target virtual object in the team sub-interface; the first target virtual object is any one of the respective virtual objects;
acquiring a second display position based on the first display position, wherein the second display position is an identification display position corresponding to the first display position;
and displaying the object prompt identification of the first target virtual object on the second display position.
Fig. 10 is a schematic diagram illustrating identifier display positions according to an exemplary embodiment of the present application. As shown in fig. 10, when no object prompt identifier is displayed at an identifier display position, for example the identifier display position 1010, it is displayed as an unfilled rectangular outline, indicating that no attribute information exists at the corresponding display position in the team sub-interface, that is, no virtual object with the corresponding attribute information has joined the target team. When a virtual object joins the target team, that is, when the attribute information of the virtual object is displayed in the team sub-interface, the corresponding identifier display position, for example the identifier display position 1020, is indicated as filled, and an object prompt identifier with the corresponding display attribute is displayed there.
In a possible implementation manner, the team sub-interface includes n display positions of the attribute information, and the display position of each attribute information has a corresponding first type position number; n identification display positions are marked on the team display control, each identification display position is provided with a corresponding second type position number, the first type position numbers and the second type position numbers are in one-to-one correspondence, and n is a positive integer;
based on the first display position, acquiring a second display position, comprising:
acquiring a first type position number of a first display position;
and acquiring a second display position based on the first type position number of the first display position, wherein the second type position number of the second display position corresponds to the first type position number of the first display position.
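The position-number lookup in the steps above can be sketched as follows, assuming for illustration either a fixed offset between the two numbering schemes or identical numbering. The offset value 10 (so that first type number 1 maps to second type number 11) and the function names are assumptions.

```python
# Assumed difference between a first type position number (display position of
# attribute information in the team sub-interface) and the corresponding second
# type position number (identifier display position on the team display control).
OFFSET = 10


def second_position_with_offset(first_number):
    """Variant 1: the two position numbers differ by a designated value."""
    return first_number + OFFSET


def second_position_identical(first_number):
    """Variant 2: the two numbering schemes are identical."""
    return first_number
```

Either variant gives the terminal a constant-time way to find the identifier display position from the display position of the attribute information.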
In order to accurately obtain the correspondence between the display positions of the attribute information of the virtual objects in the team sub-interface and the identifier display positions, the display positions and the identifier display positions are numbered in one-to-one correspondence. In a possible implementation manner, the difference between a first type position number and the corresponding second type position number is a designated value; for example, the first type position number is 1, and the corresponding second type position number is 11. Or, in another possible implementation manner, the first type position number is the same as the second type position number; that is, the first type position number is 1, and the corresponding second type position number is also 1. Taking the case where the first type position number is the same as the corresponding second type position number as an example, please refer to fig. 11, which shows a flowchart of locating the correspondence between the display positions of the attribute information and the identifier display positions according to an exemplary embodiment of the present application. As shown in fig. 11, the process is implemented as:
S1101, the display positions of the attribute information of the virtual objects are set at fixed positions of the team sub-interface through reconstruction.
S1102, the identifier display positions of the corresponding attribute information are set at fixed positions of the team display control through reconstruction.
S1103, the program numbers 1, 2, 3 … n the display positions of the attribute information.
S1104, the program assigns the label display position the corresponding number 1, 2, 3 … n.
That is, the program numbers the display positions of the attribute information in order from left to right and from top to bottom, and numbers the identifier display positions in the same order.
In the above, a reconstructed canvas for making a User Interface (UI) is used: when the UI canvas is made, the display positions of the virtual objects and the identifier display positions are placed in one-to-one correspondence and named, and when the program runs, the corresponding positions are numbered in pairs. That is, when the UI canvas is created, the attribute information of the virtual object corresponding to each position is already determined, and the program only needs to perform the corresponding numbering.
Or, in another possible implementation manner, please refer to fig. 12, which shows a flowchart of a correspondence between a display position of positioning attribute information and an identifier display position, shown in an exemplary embodiment of the present application, and as shown in fig. 12, the process is implemented as:
and S1201, automatically setting the attribute information corresponding to the display position of each attribute information in the team sub-interface by the program.
And S1202, marking the identifier display position corresponding to the attribute information on the team display control correspondingly by the program.
S1203, the program associates the display position of the attribute information with the identification display position by number 1, 2, 3 … n.
That is, when the canvas is created, the attribute information corresponding to each display position is not yet set; instead, the program sets the attribute information corresponding to each display position, sets the attribute information corresponding to each identifier display position, and performs the corresponding paired numbering, so that the display positions of the attribute information and the identifier display positions are in one-to-one correspondence.
In another possible implementation manner, an identification display position is marked on the team display control, and the identification display position is fixedly set based on the attribute information of the virtual object.
That is to say, the attribute information of the virtual object corresponding to each identifier display position on the team display control is fixedly set. For example, the responsibility type corresponding to identifier display position 1 is tank, the responsibility type corresponding to identifier display position 2 is support, and the responsibility type corresponding to identifier display position 3 is output; these correspondences are fixed and do not change with the display positions of the attribute information of the virtual objects in the team sub-interface.
In this case, the object prompt identifier corresponding to each virtual object is displayed in an overlapping manner on the team display control, and the method includes:
acquiring, according to the attribute information of a second target virtual object, the identifier display position corresponding to the object prompt identifier of the second target virtual object as a third display position; the second target virtual object is any one of the virtual objects, and the object prompt identifier of the second target virtual object is generated based on the attribute information of the second target virtual object;
and displaying the object prompt identification of the second target virtual object at the third display position.
Fig. 13 illustrates a schematic diagram of a target scene interface according to an exemplary embodiment of the present application. As shown in fig. 13, the target scene interface includes a team display control 1310 and a team sub-interface 1320 displayed in a superimposed manner, where the attribute information of the virtual object corresponding to each identifier display position in the team display control 1310 is fixed and is not limited by the display positions of the attribute information of the virtual objects in the team sub-interface. In fig. 13, identifier display position 1311 corresponds to virtual object 1, identifier display position 1312 corresponds to virtual object 2, and identifier display position 1313 corresponds to virtual object 3; that is, no matter how the attribute information of the virtual objects is arranged in the team sub-interface, the identifier display positions of the team display control are displayed according to the identifier display positions corresponding to the preset attribute information.
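The fixed-slot scheme of fig. 13 amounts to a static mapping from a responsibility type to an identifier display position, independent of the layout of the team sub-interface. The following sketch uses assumed slot numbers and names; only the idea of a fixed binding comes from the description above.

```python
# Fixed binding of responsibility type to identifier display position on the
# team display control (slot numbers are illustrative assumptions).
FIXED_SLOTS = {
    "tank": 1,     # identifier display position 1
    "support": 2,  # identifier display position 2
    "output": 3,   # identifier display position 3
}


def third_display_position(responsibility_type):
    """Return the fixed identifier display position (third display position)
    for a second target virtual object with the given responsibility type."""
    return FIXED_SLOTS[responsibility_type]
```

Because the binding is static, the terminal never needs to consult the team sub-interface layout when placing an object prompt identifier under this scheme.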
In one possible implementation, the method further includes:
in response to the first target virtual object leaving the target team, the object hint identification of the first target virtual object is removed from the team display control.
In a possible implementation manner, the first target virtual object leaving the target team may be the first target virtual object actively quitting the target team, or the life value of the first target virtual object reaching 0 so that the first target virtual object is determined to be eliminated; in either of the above cases, the object prompt identifier of the first target virtual object is removed from the team display control.
In a possible implementation manner, the terminal records the occupancy state of the display position of the attribute information of each virtual object in the team sub-interface by establishing a container, and adds or removes object prompt identifiers on the team display control according to the recorded occupancy states. Fig. 14 shows a flowchart for determining the filling state of an identifier display position according to an exemplary embodiment of the present application. As shown in fig. 14, the process includes:
S1401, the first target virtual object joins the team, and the attribute information of the first target virtual object is acquired.
S1402, the attribute information of the first target virtual object is displayed in the team sub-interface.
S1403, the occupancy state of the display position of the corresponding attribute information in the container is marked as 1.
S1404, the first target virtual object leaves the team.
S1405, the occupancy state of the display position of the corresponding attribute information in the container is marked as 0.
When the occupancy state of the display position of the attribute information is 1, the corresponding identifier display position is confirmed to be in the filled state, and the corresponding object prompt identifier is displayed; when the occupancy state of the display position of the attribute information is 0, the corresponding identifier display position is confirmed to be in the unfilled state.
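The container of occupancy states driving steps S1401 to S1405 can be sketched as a simple list of 0/1 flags, one per display position. The class and method names are assumptions; only the 1 = filled, 0 = unfilled convention comes from the description above.

```python
class OccupancyContainer:
    """Records the occupancy state (1 = occupied, 0 = empty) of each
    attribute-information display position in the team sub-interface."""

    def __init__(self, n):
        # One slot per display position, all initially unfilled (state 0).
        self.states = [0] * n

    def join(self, position):
        # S1401-S1403: a virtual object joins the team; mark its position occupied.
        self.states[position] = 1

    def leave(self, position):
        # S1404-S1405: the virtual object leaves; mark the position empty again.
        self.states[position] = 0

    def is_filled(self, position):
        # State 1 -> the corresponding identifier display position is in the
        # filled state and should show an object prompt identifier.
        return self.states[position] == 1
```

The terminal would consult `is_filled` when rendering the team display control to decide whether each identifier display position shows an object prompt identifier or an unfilled outline.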
In a possible implementation manner, in response to the life value of the first target virtual object being 0, the object prompt identifier corresponding to the first target virtual object is retained after designated processing is performed on it; for example, the designated processing is setting the object prompt identifier to a grayscale image, or adding a preset graphic to the object prompt identifier, to indicate that the first target virtual object has been eliminated.
To sum up, according to the object prompt method provided by the embodiment of the present application, the object prompt identifier indicating the attribute information of each virtual object in the team sub-interface is displayed in a superimposed manner on the team display control for triggering the team sub-interface, so that the team information in the team sub-interface can be displayed on the team display control, interface switching operations required for acquiring the team information in the team sub-interface are reduced, and waste of terminal electric quantity and processing resources due to frequent interface switching operations is reduced.
Fig. 15 is a flowchart illustrating an object prompting method according to an exemplary embodiment of the present application. The object prompting method may be executed by a terminal, where the terminal may be the terminal shown in fig. 1. As shown in fig. 15, the object prompting method includes the following steps:
Step 1510, displaying a target scene interface, wherein the target scene interface comprises a team display control; the team display control is used for triggering superimposed display of a team sub-interface on the target scene interface; the team sub-interface is used for displaying the attribute information of each virtual object in the target team.
Step 1520, in response to the target virtual object existing in the target team, overlaying and displaying an object prompt identifier corresponding to the target virtual object on the team display control, where the object prompt identifier is generated based on the attribute information of the target virtual object.
In a possible implementation manner, the target scene interface includes at least one display control other than the team display control, and the display area of the display sub-interface corresponding to each other display control overlaps the display area of the team sub-interface corresponding to the team display control;
displaying the team sub-interface in the display area corresponding to the team display control in response to receiving a touch operation on the team display control;
displaying, in response to receiving a touch operation on a first display control, the display sub-interface corresponding to the first display control in the display area corresponding to the first display control, where the first display control is any one of the at least one other display control;
the team display control does not overlap the sub-interface display area, where the sub-interface display area includes the display area corresponding to the team sub-interface and the display areas corresponding to the display sub-interfaces of the other display controls.
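A hedged sketch of the dispatch behavior described above: the team sub-interface and the other controls' sub-interfaces share an overlapping display area, so showing one replaces any other. Class, method, and control names are illustrative, not from the original:

```python
# Illustrative dispatcher: sub-interfaces share one overlapping display area,
# so a touch on any display control shows its sub-interface there, replacing
# whatever sub-interface previously occupied that shared area.
class SubInterfaceManager:
    def __init__(self):
        self.active = None  # which sub-interface occupies the shared area

    def on_touch(self, control_id):
        self.active = control_id
        return self.active

mgr = SubInterfaceManager()
mgr.on_touch("team")       # team sub-interface shown in the shared area
mgr.on_touch("backpack")   # a hypothetical other control replaces it
print(mgr.active)  # backpack
```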
To sum up, according to the object prompting method provided by the embodiment of the present application, the object prompt identifiers indicating the attribute information of each virtual object in the team sub-interface are displayed in a superimposed manner on the team display control used for triggering the team sub-interface, so that more of the team information in the team sub-interface can be presented on the team display control itself, thereby reducing the interface switching operations required to acquire the team information in the team sub-interface, and reducing the waste of terminal power and processing resources caused by frequent interface switching.
Fig. 16 is a block diagram illustrating an object prompting device according to an exemplary embodiment of the present application. The object prompting device can be applied to a terminal, wherein the terminal can be the terminal shown in fig. 1. As shown in fig. 16, the object presentation apparatus includes:
an interface display module 1610, configured to display a target scene interface, where the target scene interface includes a team display control; the team display control is used for triggering a team sub-interface to be superimposed and displayed on the target scene interface; and the team sub-interface is used for displaying the attribute information of each virtual object in the target team;
an identifier generating module 1620, configured to generate an object prompt identifier corresponding to each virtual object based on the attribute information of each virtual object;
and an identifier display module 1630, configured to display, in an overlapping manner, the object prompt identifier corresponding to each virtual object on the team display control.
In one possible implementation, the identification generation module 1620 includes:
the display attribute determining submodule is used for determining the display attributes of the object prompt identifiers corresponding to the virtual objects according to the mapping relation between the attribute information and the display attributes of the object prompt identifiers;
and the identifier generation submodule is used for generating the object prompt identifier corresponding to each virtual object based on the display attribute of the object prompt identifier corresponding to each virtual object.
In one possible implementation manner, the display attribute of the object prompt identifier includes: at least one of a color of the object prompt identifier, a shape of the object prompt identifier, a pattern contained in the object prompt identifier, and a size of the object prompt identifier.
In one possible implementation, the attribute information includes: at least one of an object type of the corresponding virtual object, a responsibility type of the corresponding virtual object in the target team, a specified type attribute value of the corresponding virtual object, and a gain state of the corresponding virtual object.
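The mapping relation between attribute information and display attributes can be sketched as a lookup plus a few derivation rules. The concrete roles, colors, and thresholds below are assumptions for demonstration only, not values from the patent:

```python
# Illustrative mapping from attribute information (role, object type, attribute
# value) to display attributes (color, shape, size) of an object prompt identifier.
ROLE_TO_COLOR = {"tank": "blue", "healer": "green", "damage": "red"}

def build_prompt_identifier(attribute_info):
    """Derive the identifier's display attributes from the object's attribute info."""
    return {
        "color": ROLE_TO_COLOR.get(attribute_info["role"], "gray"),
        "shape": "circle" if attribute_info["type"] == "player" else "square",
        "size": "large" if attribute_info["hp"] > 50 else "small",
    }

ident = build_prompt_identifier({"role": "healer", "type": "player", "hp": 80})
print(ident)
# {'color': 'green', 'shape': 'circle', 'size': 'large'}
```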
In a possible implementation manner, identifier display positions are marked on the team display control, and the identifier display positions are in one-to-one correspondence with the display positions of the attribute information of the virtual objects contained in the team sub-interface;
the identification display module 1630 includes:
the first position obtaining sub-module is used for obtaining a first display position, and the first display position is the display position of the attribute information of the first target virtual object in the team sub-interface; the first target virtual object is any one of the respective virtual objects;
a second position obtaining submodule, configured to obtain a second display position based on the first display position, where the second display position is an identifier display position corresponding to the first display position;
and the identification display submodule is used for displaying the object prompt identification of the first target virtual object at the second display position.
In a possible implementation manner, the team sub-interface includes n display positions of the attribute information, and each display position of the attribute information has a corresponding first type position number; n identifier display positions are marked on the team display control, each identifier display position has a corresponding second type position number, the first type position numbers are in one-to-one correspondence with the second type position numbers, and n is a positive integer;
the second position acquisition sub-module includes:
a number acquiring unit for acquiring a first type position number of the first display position;
and the second position acquisition unit is used for acquiring a second display position based on the first type position number of the first display position, and the second type position number of the second display position corresponds to the first type position number of the first display position.
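The one-to-one position-number correspondence above reduces to a table lookup: a first type position number (an attribute display position in the team sub-interface) resolves to a second type position number (an identifier display position on the control). The direct 1..n numbering scheme below is an assumption:

```python
# Sketch of the first-type -> second-type position-number correspondence.
def build_position_mapping(n):
    """Assume first type numbers 1..n map to identical second type numbers."""
    return {first: first for first in range(1, n + 1)}

def second_display_position(first_number, mapping):
    """Resolve an identifier display position from a first type position number."""
    return mapping[first_number]

mapping = build_position_mapping(4)
print(second_display_position(3, mapping))  # 3
```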
In one possible implementation, the apparatus further includes:
and the identification removing module is used for responding to the first target virtual object leaving the target team and removing the object prompt identification of the first target virtual object from the team display control.
In a possible implementation manner, an identifier display position is marked on the team display control, and the identifier display position is fixedly set based on the attribute information of the virtual object;
the identification display module 1630 includes:
the third position obtaining submodule is configured to obtain, according to the attribute information of the second target virtual object, the identifier display position corresponding to the object prompt identifier of the second target virtual object as a third display position; the second target virtual object is any one of the virtual objects, and the object prompt identifier of the second target virtual object is generated based on the attribute information of the second target virtual object;
the identifier display submodule is configured to display an object prompt identifier of the second target virtual object at a third display position.
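A hedged sketch of the attribute-fixed slot scheme above: each identifier display position is fixed by an attribute of the virtual object (here, an assumed team role) rather than by the order in which members joined. Role names and slot indices are illustrative:

```python
# Illustrative fixed slots: the identifier display position is determined by
# the virtual object's attribute information (assumed here to be its role).
FIXED_SLOTS = {"tank": 0, "healer": 1, "damage": 2}

def third_display_position(virtual_object):
    """Look up the fixed identifier display position for this object's role."""
    return FIXED_SLOTS[virtual_object["role"]]

print(third_display_position({"name": "obj_a", "role": "healer"}))  # 1
```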
In a possible implementation manner, the target scene interface includes at least one display control other than the team display control, and the display area of the display sub-interface corresponding to each other display control overlaps the display area of the team sub-interface corresponding to the team display control;
the team display control does not overlap the other display controls, and the team display control does not overlap the sub-interface display area, where the sub-interface display area includes the display area corresponding to the team sub-interface and the display areas corresponding to the display sub-interfaces of the other display controls.
To sum up, the object prompting device provided by the embodiment of the present application is applied to a terminal; the object prompt identifiers indicating the attribute information of each virtual object in the team sub-interface are displayed in a superimposed manner on the team display control used for triggering the team sub-interface, so that the team information in the team sub-interface can be presented on the team display control itself, thereby reducing the interface switching operations required to acquire the team information in the team sub-interface, and reducing the waste of terminal power and processing resources caused by frequent interface switching.
Fig. 17 is a block diagram illustrating an object prompting device according to an exemplary embodiment of the present application. The object prompting device can be applied to a terminal, wherein the terminal can be the terminal shown in fig. 1. As shown in fig. 17, the object presentation apparatus includes:
an interface display module 1710, configured to display a target scene interface, where the target scene interface includes a team display control; the team display control is used for triggering a team sub-interface to be superimposed and displayed on the target scene interface; and the team sub-interface is used for displaying the attribute information of each virtual object in the target team;
the identifier display module 1720 is configured to, in response to that the target virtual object exists in the target team, superimpose and display an object prompt identifier corresponding to the target virtual object on the team display control, where the object prompt identifier is generated based on the attribute information of the target virtual object.
In a possible implementation manner, the target scene interface includes at least one display control other than the team display control, and the display area of the display sub-interface corresponding to each other display control overlaps the display area of the team sub-interface corresponding to the team display control; the device further includes:
the first display module is used for responding to the received touch operation based on the team display control and displaying the team sub-interface in the display area corresponding to the team display control;
the second display module is used for displaying a display sub-interface corresponding to the first display control in a display area corresponding to the first display control in response to receiving the touch operation based on the first display control, wherein the first display control is any one of at least one other display control;
the team display control does not overlap the sub-interface display area, where the sub-interface display area includes the display area corresponding to the team sub-interface and the display areas corresponding to the display sub-interfaces of the other display controls.
To sum up, the object prompting device provided by the embodiment of the present application is applied to a terminal; the object prompt identifiers indicating the attribute information of each virtual object in the team sub-interface are displayed in a superimposed manner on the team display control used for triggering the team sub-interface, so that the team information in the team sub-interface can be presented on the team display control itself, thereby reducing the interface switching operations required to acquire the team information in the team sub-interface, and reducing the waste of terminal power and processing resources caused by frequent interface switching.
Fig. 18 is a block diagram illustrating the structure of a computer device 1800, according to an example embodiment. The computer device may be implemented as a server in the above-mentioned aspects of the present application.
The computer device 1800 includes a Central Processing Unit (CPU) 1801, a system memory 1804 including a Random Access Memory (RAM) 1802 and a Read-Only Memory (ROM) 1803, and a system bus 1805 connecting the system memory 1804 and the CPU 1801. The computer device 1800 also includes a basic Input/Output (I/O) system 1806, which facilitates the transfer of information between devices within the computer, and a mass storage device 1807 for storing an operating system 1813, application programs 1814, and other program modules 1815.
The basic input/output system 1806 includes a display 1808 for displaying information and an input device 1809, such as a mouse or keyboard, for a user to input information. The display 1808 and the input device 1809 are both coupled to the central processing unit 1801 via an input/output controller 1810 coupled to the system bus 1805. The basic input/output system 1806 may also include the input/output controller 1810 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, the input/output controller 1810 also provides output to a display screen, a printer, or another type of output device.
The mass storage device 1807 is connected to the central processing unit 1801 through a mass storage controller (not shown) connected to the system bus 1805. The mass storage device 1807 and its associated computer-readable media provide non-volatile storage for the computer device 1800. That is, the mass storage device 1807 may include a computer-readable medium (not shown) such as a hard disk or a Compact Disc Read-Only Memory (CD-ROM) drive.
Without loss of generality, the computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash Memory or other solid state Memory technology, CD-ROM, Digital Versatile Disks (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices. Of course, those skilled in the art will appreciate that the computer storage media is not limited to the foregoing. The system memory 1804 and mass storage device 1807 described above may be collectively referred to as memory.
The computer device 1800 may also operate in accordance with various embodiments of the present disclosure by connecting to remote computers over a network, such as the internet. That is, the computer device 1800 may be connected to the network 1812 through the network interface unit 1811 that is coupled to the system bus 1805, or the network interface unit 1811 may be used to connect to other types of networks or remote computer systems (not shown).
The memory further includes at least one instruction, at least one program, a code set, or a set of instructions, which is stored in the memory, and the central processing unit 1801 implements all or part of the steps performed by the server in the object prompting method according to the embodiments by executing the at least one instruction, the at least one program, the code set, or the set of instructions.
FIG. 19 is a block diagram illustrating the architecture of a computer device 1900 according to an example embodiment. The computer device 1900 may be a user terminal such as a smartphone, a tablet, an e-book reader, an MP3 player, an MP4 player, a laptop computer, or a desktop computer. Computer device 1900 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and the like.
Generally, computer device 1900 includes: a processor 1901 and a memory 1902.
The processor 1901 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so forth. The processor 1901 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1901 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1901 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed by the display screen. In some embodiments, the processor 1901 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 1902 may include one or more computer-readable storage media, which may be non-transitory. The memory 1902 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 1902 is used to store at least one instruction, and the at least one instruction is executed by the processor 1901 to implement the object prompting method in a virtual scene provided by the method embodiments of the present application.
In some embodiments, computer device 1900 may also optionally include: a peripheral interface 1903 and at least one peripheral. The processor 1901, memory 1902, and peripheral interface 1903 may be connected by bus or signal lines. Various peripheral devices may be connected to peripheral interface 1903 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 1904, a display screen 1905, a camera assembly 1906, an audio circuit 1907, and a power supply 1909.
The peripheral interface 1903 may be used to connect at least one peripheral associated with an I/O (Input/Output) to the processor 1901 and the memory 1902. In some embodiments, the processor 1901, memory 1902, and peripherals interface 1903 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1901, the memory 1902, and the peripheral interface 1903 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The radio frequency circuit 1904 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1904 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 1904 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1904 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1904 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or Wi-Fi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1904 may further include NFC (Near Field Communication) related circuits, which is not limited in the present application.
The display screen 1905 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1905 is a touch display screen, the display screen 1905 also has the ability to capture touch signals on or above its surface. The touch signal may be input to the processor 1901 as a control signal for processing. At this point, the display screen 1905 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1905, disposed on the front panel of the computer device 1900; in other embodiments, there may be at least two display screens 1905, each disposed on a different surface of the computer device 1900 or in a folded design; in still other embodiments, the display screen 1905 may be a flexible display disposed on a curved or folding surface of the computer device 1900. The display screen 1905 may even be arranged as a non-rectangular irregular figure, that is, an irregularly-shaped screen. The display screen 1905 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or other materials.
The camera assembly 1906 is used to capture images or video. Optionally, the camera assembly 1906 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal, and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, VR (Virtual Reality) shooting, or other fused shooting functions. In some embodiments, the camera assembly 1906 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
The audio circuitry 1907 may include a microphone and a speaker. The microphone is used for collecting sound waves of the user and the environment, converting the sound waves into electrical signals, and inputting them into the processor 1901 for processing, or into the radio frequency circuit 1904 for voice communication. There may be multiple microphones, placed at different locations on the computer device 1900, for stereo sound capture or noise reduction purposes. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 1901 or the radio frequency circuit 1904 into sound waves. The speaker may be a traditional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert an electrical signal not only into sound waves audible to humans but also into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuitry 1907 may also include a headphone jack.
Power supply 1909 is used to provide power to the various components in computer device 1900. The power source 1909 can be alternating current, direct current, disposable batteries, or rechargeable batteries. When power supply 1909 includes a rechargeable battery, the rechargeable battery can be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, computer device 1900 also includes one or more sensors 1910. The one or more sensors 1910 include, but are not limited to: acceleration sensor 1911, gyro sensor 1912, pressure sensor 1913, optical sensor 1915, and proximity sensor 1916.
The acceleration sensor 1911 may detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the computer apparatus 1900. For example, the acceleration sensor 1911 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1901 may control the touch screen 1905 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1911. The acceleration sensor 1911 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1912 may detect a body direction and a rotation angle of the computer device 1900, and the gyro sensor 1912 may cooperate with the acceleration sensor 1911 to acquire a 3D motion of the user with respect to the computer device 1900. From the data collected by the gyro sensor 1912, the processor 1901 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 1913 may be disposed on a side bezel of the computer device 1900 and/or on a lower layer of the touch display screen 1905. When the pressure sensor 1913 is disposed on the side bezel of the computer device 1900, a holding signal of the user on the computer device 1900 can be detected, and the processor 1901 can perform left-right hand recognition or quick operations based on the holding signal collected by the pressure sensor 1913. When the pressure sensor 1913 is disposed at the lower layer of the touch display screen 1905, the processor 1901 controls operability controls on the UI interface according to the user's pressure operation on the touch display screen 1905. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The optical sensor 1915 is used to collect the ambient light intensity. In one embodiment, the processor 1901 may control the display brightness of the touch screen 1905 based on the ambient light intensity collected by the optical sensor 1915. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1905 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 1905 is turned down. In another embodiment, the processor 1901 may also dynamically adjust the shooting parameters of the camera assembly 1906 according to the intensity of the ambient light collected by the optical sensor 1915.
The proximity sensor 1916, also known as a distance sensor, is typically disposed on the front panel of the computer device 1900. The proximity sensor 1916 is used to capture the distance between the user and the front of the computer device 1900. In one embodiment, when the proximity sensor 1916 detects that the distance between the user and the front of the computer device 1900 gradually decreases, the processor 1901 controls the touch display screen 1905 to switch from the bright-screen state to the dark-screen state; when the proximity sensor 1916 detects that the distance between the user and the front of the computer device 1900 gradually increases, the processor 1901 controls the touch display screen 1905 to switch from the dark-screen state to the bright-screen state.
Those skilled in the art will appreciate that the architecture shown in FIG. 19 is not intended to be limiting of computer device 1900 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
In an exemplary embodiment, a non-transitory computer readable storage medium including instructions, such as a memory including at least one instruction, at least one program, set of codes, or set of instructions, executable by a processor to perform all or part of the steps of the method illustrated in the corresponding embodiments of fig. 4, 5, or 15 is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product or a computer program is also provided, which comprises computer instructions, which are stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions to cause the computer device to perform all or part of the steps of the method shown in any one of the embodiments of fig. 4, fig. 5 or fig. 15.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (16)

1. An object prompting method, characterized in that the method comprises:
displaying a target scene interface, wherein the target scene interface comprises a team display control; the team display control is used for triggering a team sub-interface to be overlaid and displayed on the target scene interface; and the team sub-interface is used for displaying attribute information of each virtual object in a target team;
generating object prompt identifiers corresponding to the virtual objects based on the attribute information of the virtual objects;
displaying, in an overlaid manner on the team display control, the object prompt identifier corresponding to each virtual object; wherein at least one identifier display position is marked on the team display control; the at least one identifier display position is in one-to-one correspondence with the display positions of the attribute information of the virtual objects contained in the team sub-interface; each identifier display position is displayed as an unfilled contour, or is displayed as a filled contour filled by the object prompt identifier; the unfilled contour is used for indicating that a virtual object with corresponding attribute information has not been added to the target team; and the filled contour is used for indicating that a virtual object with corresponding attribute information has been added to the target team.
2. The method according to claim 1, wherein the generating the object prompt identifier corresponding to each virtual object based on the attribute information of each virtual object comprises:
determining the display attribute of the object prompt identifier corresponding to each virtual object according to the mapping relation between the attribute information and the display attribute of the object prompt identifier;
and generating the object prompt identifier corresponding to each virtual object based on the display attribute of the object prompt identifier corresponding to each virtual object.
3. The method of claim 2, wherein the display attributes of the object prompt identifier comprise: at least one of a color of the object prompt identifier, a shape of the object prompt identifier, a pattern contained in the object prompt identifier, and a size of the object prompt identifier.
4. The method of claim 1, wherein the attribute information comprises at least one of: an object type of the corresponding virtual object, a responsibility type of the corresponding virtual object in the target team, an attribute value of a specified type of the corresponding virtual object, and a gain state of the corresponding virtual object.
5. The method according to claim 1, wherein displaying the object prompt identifier corresponding to each virtual object overlaid on the team display control comprises:
acquiring a first display position, the first display position being the display position, in the team sub-interface, of the attribute information of a first target virtual object, wherein the first target virtual object is any one of the virtual objects;
acquiring a second display position based on the first display position, the second display position being the identifier display position corresponding to the first display position; and
displaying the object prompt identifier of the first target virtual object at the second display position.
6. The method according to claim 5, wherein the team sub-interface comprises n display positions of attribute information, each having a corresponding first-type position number; n identifier display positions are marked on the team display control, each having a corresponding second-type position number; the first-type position numbers correspond one-to-one with the second-type position numbers, and n is a positive integer;
wherein acquiring the second display position based on the first display position comprises:
acquiring the first-type position number of the first display position; and
acquiring the second display position based on the first-type position number of the first display position, wherein the second-type position number of the second display position corresponds to the first-type position number of the first display position.
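Claim 6 pairs each of the n attribute positions in the team sub-interface with one of n identifier slots on the team display control via matching position numbers. A sketch of that correspondence, with the simplest possible assumption (second-type number equals first-type number); class and method names are invented for illustration:

```python
# Hypothetical model of claim 6: n identifier display positions on the
# team display control, keyed by second-type position number.
class TeamDisplayControl:
    def __init__(self, n):
        # None models a slot rendered as an unfilled contour (claim 1).
        self.slots = {i: None for i in range(n)}

    def place(self, first_type_number, identifier):
        # One-to-one correspondence between first-type numbers (team
        # sub-interface) and second-type numbers (team display control);
        # here the two numbering schemes are simply assumed identical.
        second_type_number = first_type_number
        self.slots[second_type_number] = identifier

control = TeamDisplayControl(n=5)
control.place(first_type_number=2, identifier={"color": "green"})
print(control.slots[2])  # {'color': 'green'}
```

A real implementation could use any bijection between the two numbering schemes; identity is just the minimal choice that satisfies the one-to-one requirement.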
7. The method according to claim 5 or 6, further comprising:
in response to the first target virtual object leaving the target team, removing the object prompt identifier of the first target virtual object from the team display control.
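Claim 7 adds the teardown path: when an object leaves the team, its identifier is removed and the slot reverts to the unfilled contour. A tiny sketch continuing the slot-dictionary model assumed above (position numbers and identifier contents are made up):

```python
# Hypothetical slot state: position number -> identifier, or None for
# a slot rendered as an unfilled contour.
slots = {0: {"color": "blue"}, 1: {"color": "green"}, 2: None}

def remove_identifier(slots, position):
    """Claim 7: clearing the slot restores the unfilled contour."""
    slots[position] = None

remove_identifier(slots, 1)
print(slots[1])  # None -> drawn as an unfilled contour again
```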
8. The method according to claim 1, wherein each identifier display position is fixedly set based on the attribute information of a virtual object; and
wherein displaying the object prompt identifiers corresponding to the virtual objects overlaid on the team display control comprises:
acquiring, according to the attribute information of a second target virtual object, the identifier display position corresponding to the object prompt identifier of the second target virtual object as a third display position, wherein the second target virtual object is any one of the virtual objects, and the object prompt identifier of the second target virtual object is generated based on the attribute information of the second target virtual object; and
displaying the object prompt identifier of the second target virtual object at the third display position.
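Claim 8 differs from claims 5 and 6 in that the slot is fixed per attribute value, so it is looked up from the attribute information itself rather than from a sub-interface position. A sketch under the assumption that the responsibility type selects the slot; the table and its numbers are invented:

```python
# Hypothetical fixed assignment (claim 8): each responsibility type
# always occupies the same identifier display position on the control.
FIXED_SLOT_BY_RESPONSIBILITY = {"tank": 0, "healer": 1, "damage": 2}

def third_display_position(attribute_info):
    """Resolve the fixed identifier display position directly from
    the object's attribute information."""
    return FIXED_SLOT_BY_RESPONSIBILITY[attribute_info["responsibility"]]

print(third_display_position({"responsibility": "damage"}))  # 2
```

Fixing positions this way lets a player read team composition at a glance, since a given attribute always lights up the same spot on the control.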
9. The method according to claim 1, wherein the target scene interface comprises at least one other display control in addition to the team display control, and a display area of the display sub-interface corresponding to the other display control overlaps with a display area of the team sub-interface corresponding to the team display control; and
wherein the team display control does not overlap with the other display controls, and the team display control does not overlap with a sub-interface display area, the sub-interface display area comprising the display area corresponding to the team sub-interface and the display area of the display sub-interface corresponding to the other display control.
10. An object prompting method, characterized in that the method comprises:
displaying a target scene interface, the target scene interface comprising a team display control, wherein the team display control is configured to trigger a team sub-interface to be displayed overlaid on the target scene interface, and the team sub-interface is configured to display attribute information of each virtual object in a target team; and
in response to a target virtual object existing in the target team, displaying an object prompt identifier corresponding to the target virtual object overlaid on the team display control, the object prompt identifier being generated based on the attribute information of the target virtual object, wherein at least one identifier display position is marked on the team display control, and the at least one identifier display position corresponds one-to-one with display positions, in the team sub-interface, of the attribute information of the virtual objects; each identifier display position is displayed either as an unfilled contour or as a filled contour filled by an object prompt identifier, the unfilled contour indicating a virtual object in the target team for which corresponding attribute information has not been added, and the filled contour indicating a virtual object in the target team for which corresponding attribute information has been added.
11. The method according to claim 10, wherein the target scene interface comprises at least one other display control in addition to the team display control, and a display area of the display sub-interface corresponding to the other display control overlaps with the display area of the team sub-interface corresponding to the team display control; the method further comprising:
displaying the team sub-interface in the display area corresponding to the team display control in response to receiving a touch operation on the team display control; and
displaying the display sub-interface corresponding to a first display control in the display area corresponding to the first display control in response to receiving a touch operation on the first display control, wherein the first display control is any one of the at least one other display control;
wherein the team display control does not overlap with the sub-interface display area, the sub-interface display area comprising the display area corresponding to the team sub-interface and the display area of the display sub-interface corresponding to the other display control.
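Because the sub-interfaces of the team display control and the other display controls share overlapping display areas (claims 9 and 11), opening one sub-interface naturally replaces whichever one currently occupies that area, while the non-overlapping team display control stays visible. A sketch of that mutual exclusion; class, method, and control names are illustrative assumptions:

```python
# Hypothetical scene model: the shared sub-interface display area can
# show at most one sub-interface at a time, so a touch on any control
# swaps in that control's sub-interface (claims 9 and 11).
class SceneInterface:
    def __init__(self):
        self.open_sub_interface = None  # nothing shown initially

    def on_touch(self, control_name):
        # The team display control never overlaps the sub-interface
        # area, so it remains tappable regardless of what is open.
        self.open_sub_interface = control_name + "_sub_interface"

scene = SceneInterface()
scene.on_touch("team_display_control")
print(scene.open_sub_interface)  # team_display_control_sub_interface
scene.on_touch("inventory_control")
print(scene.open_sub_interface)  # inventory_control_sub_interface
```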
12. An object prompting apparatus, characterized in that the apparatus comprises:
an interface display module, configured to display a target scene interface, the target scene interface comprising a team display control, wherein the team display control is configured to trigger a team sub-interface to be displayed overlaid on the target scene interface, and the team sub-interface is configured to display attribute information of each virtual object in a target team;
an identifier generation module, configured to generate an object prompt identifier corresponding to each virtual object based on the attribute information of each virtual object; and
an identifier display module, configured to display the object prompt identifier corresponding to each virtual object overlaid on the team display control, wherein at least one identifier display position is marked on the team display control, and the at least one identifier display position corresponds one-to-one with display positions, in the team sub-interface, of the attribute information of the virtual objects; each identifier display position is displayed either as an unfilled contour or as a filled contour filled by an object prompt identifier, the unfilled contour indicating a virtual object in the target team for which corresponding attribute information has not been added, and the filled contour indicating a virtual object in the target team for which corresponding attribute information has been added.
13. An object prompting apparatus, characterized in that the apparatus comprises:
an interface display module, configured to display a target scene interface, the target scene interface comprising a team display control, wherein the team display control is configured to trigger a team sub-interface to be displayed overlaid on the target scene interface, and the team sub-interface is configured to display attribute information of each virtual object in a target team; and
an identifier display module, configured to display, in response to a target virtual object existing in the target team, an object prompt identifier corresponding to the target virtual object overlaid on the team display control, the object prompt identifier being generated based on the attribute information of the target virtual object, wherein at least one identifier display position is marked on the team display control, and the at least one identifier display position corresponds one-to-one with display positions, in the team sub-interface, of the attribute information of the virtual objects; each identifier display position is displayed either as an unfilled contour or as a filled contour filled by an object prompt identifier, the unfilled contour indicating a virtual object in the target team for which corresponding attribute information has not been added, and the filled contour indicating a virtual object in the target team for which corresponding attribute information has been added.
14. A computer device, comprising a processor and a memory, wherein the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the object prompting method according to any one of claims 1 to 11.
15. A computer-readable storage medium, wherein the storage medium stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the object prompting method according to any one of claims 1 to 11.
16. A computer program product, characterized in that the computer program product comprises at least one computer program, and the at least one computer program is loaded and executed by a processor to implement the object prompting method according to any one of claims 1 to 11.
CN202011017599.1A 2020-09-24 2020-09-24 Object prompting method and device, computer equipment and storage medium Active CN112138390B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210404475.1A CN114949854A (en) 2020-09-24 2020-09-24 Object prompting method and device, computer equipment and storage medium
CN202011017599.1A CN112138390B (en) 2020-09-24 2020-09-24 Object prompting method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011017599.1A CN112138390B (en) 2020-09-24 2020-09-24 Object prompting method and device, computer equipment and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202210404475.1A Division CN114949854A (en) 2020-09-24 2020-09-24 Object prompting method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112138390A CN112138390A (en) 2020-12-29
CN112138390B true CN112138390B (en) 2022-04-22

Family

ID=73896693

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202210404475.1A Pending CN114949854A (en) 2020-09-24 2020-09-24 Object prompting method and device, computer equipment and storage medium
CN202011017599.1A Active CN112138390B (en) 2020-09-24 2020-09-24 Object prompting method and device, computer equipment and storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202210404475.1A Pending CN114949854A (en) 2020-09-24 2020-09-24 Object prompting method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (2) CN114949854A (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113204320A (en) * 2021-04-30 2021-08-03 北京有竹居网络技术有限公司 Information display method and device
CN114064157B (en) * 2021-11-09 2023-09-15 中国电力科学研究院有限公司 Automatic flow implementation method, system, equipment and medium based on page element identification
CN114860130A (en) * 2022-05-24 2022-08-05 北京新唐思创教育科技有限公司 Interaction method and device in full-reality scene, electronic equipment and storage medium
CN117298603A (en) * 2022-06-23 2023-12-29 腾讯科技(成都)有限公司 Reservation team forming method, device, equipment and storage medium in virtual scene

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4693936B1 (en) * 2010-06-16 2011-06-01 株式会社バンダイナムコゲームス Computer system and program
CN111443848A (en) * 2020-03-24 2020-07-24 腾讯科技(深圳)有限公司 Information display method and device, storage medium and electronic device
CN111589130A (en) * 2020-04-24 2020-08-28 腾讯科技(深圳)有限公司 Virtual object control method, device, equipment and storage medium in virtual scene

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106924968A (en) * 2015-12-31 2017-07-07 网易(杭州)网络有限公司 Virtual role is formed a team control method and device
JP6624463B2 (en) * 2017-06-09 2019-12-25 株式会社コナミデジタルエンタテインメント Game system and program

Also Published As

Publication number Publication date
CN112138390A (en) 2020-12-29
CN114949854A (en) 2022-08-30

Similar Documents

Publication Publication Date Title
CN112138390B (en) Object prompting method and device, computer equipment and storage medium
CN111462307B (en) Virtual image display method, device, equipment and storage medium of virtual object
CN111035918B (en) Reconnaissance interface display method and device based on virtual environment and readable storage medium
CN112156464B (en) Two-dimensional image display method, device and equipment of virtual object and storage medium
CN112569600B (en) Path information sending method in virtual scene, computer device and storage medium
CN111744185B (en) Virtual object control method, device, computer equipment and storage medium
CN111760278B (en) Skill control display method, device, equipment and medium
CN111589133A (en) Virtual object control method, device, equipment and storage medium
CN112691370B (en) Method, device, equipment and storage medium for displaying voting result in virtual game
CN112604305B (en) Virtual object control method, device, terminal and storage medium
CN112891931A (en) Virtual role selection method, device, equipment and storage medium
CN111589141B (en) Virtual environment picture display method, device, equipment and medium
CN111672099A (en) Information display method, device, equipment and storage medium in virtual scene
CN111672104B (en) Virtual scene display method, device, terminal and storage medium
CN112083848B (en) Method, device and equipment for adjusting position of control in application program and storage medium
CN111603770A (en) Virtual environment picture display method, device, equipment and medium
CN112169330B (en) Method, device, equipment and medium for displaying picture of virtual environment
CN111672102A (en) Virtual object control method, device, equipment and storage medium in virtual scene
CN112402962A (en) Signal display method, device, equipment and medium based on virtual environment
CN113577765A (en) User interface display method, device, equipment and storage medium
CN113599819B (en) Prompt information display method, device, equipment and storage medium
CN112891939B (en) Contact information display method and device, computer equipment and storage medium
CN112156471B (en) Skill selection method, device, equipment and storage medium of virtual object
CN113457173A (en) Remote teaching method, device, computer equipment and storage medium
CN113181647A (en) Information display method, device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40035391

Country of ref document: HK

GR01 Patent grant