CN111462307B - Virtual image display method, device, equipment and storage medium of virtual object

Info

Publication number
CN111462307B
CN111462307B (application CN202010241840.2A)
Authority
CN
China
Prior art keywords
virtual
virtual object
team
dimensional
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010241840.2A
Other languages
Chinese (zh)
Other versions
CN111462307A (en)
Inventor
常议之
由军
徐锦威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010241840.2A priority Critical patent/CN111462307B/en
Publication of CN111462307A publication Critical patent/CN111462307A/en
Application granted granted Critical
Publication of CN111462307B publication Critical patent/CN111462307B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Architecture (AREA)
  • Geometry (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application relates to a method, an apparatus, a device, and a storage medium for displaying the avatar of a virtual object, in the technical field of virtual scenes. The method comprises the following steps: in response to a team image display instruction, acquiring posture information indicating the respective posture of each virtual object in a target team; generating a three-dimensional avatar of each virtual object based on the posture information; and displaying a team image display interface in a first terminal corresponding to a first virtual object, where the team image display interface comprises the three-dimensional avatars of all the virtual objects and the first virtual object is any virtual object in the target team. With this method, during team formation a user can simply and directly learn, through the avatars in the terminal's team image display interface, which virtual objects the other users on the same team have selected. This reduces the time the user spends judging virtual objects during team formation, improves team forming efficiency, and improves the interface display effect.

Description

Virtual image display method, device, equipment and storage medium of virtual object
Technical Field
The present application relates to the field of virtual scene technologies, and in particular, to a method, an apparatus, a device, and a storage medium for displaying an avatar of a virtual object.
Background
In applications supporting virtual scenes, the team function is a common function. Through the team forming function, a plurality of user accounts can form a team, each controlling its own virtual object, and cooperate to perform related activities in a virtual scene.
In the related art, during team formation, the virtual objects selected by other users on the same team can be learned through two-dimensional (2D) virtual object head portraits displayed in the terminal display interface.
Because the terminal display interface mainly displays the virtual object as a 2D head portrait, the user cannot intuitively recognize the virtual objects selected by other users, which increases the user's judgment time during team formation and makes team forming inefficient.
Disclosure of Invention
The embodiments of the application provide an avatar display method and apparatus for a virtual object, a computer device, and a storage medium, which can present virtual objects intuitively, reduce the judgment time of a user in the team forming process, and thereby improve team forming efficiency. The technical scheme is as follows:
in one aspect, a method for displaying an avatar of a virtual object is provided, the method comprising:
responding to a team image display instruction, and acquiring attitude information, wherein the attitude information is used for indicating the respective attitude of each virtual object in a target team;
generating a three-dimensional avatar of the respective virtual object based on the pose information;
displaying a team image display interface in a first terminal corresponding to a first virtual object, wherein the team image display interface comprises three-dimensional virtual images of all the virtual objects; the first virtual object is any virtual object in the target team.
In one aspect, a method for displaying an avatar of a virtual object is provided, the method comprising:
displaying a team image display interface, wherein the team image display interface comprises three-dimensional virtual images of all virtual objects in a target team;
and in response to a change in the posture information corresponding to a first virtual object, updating the three-dimensional avatar of the first virtual object in the team image display interface according to the posture information, where the posture information is used to indicate the posture of each virtual object in the target team, and the first virtual object is any virtual object in the target team.
In one aspect, there is provided an avatar presentation apparatus of a virtual object, the apparatus including:
the apparatus comprises a posture information acquisition module, a generating module, and a display module, wherein the posture information acquisition module is used for responding to a team image display instruction and acquiring posture information, and the posture information is used for indicating the respective posture of each virtual object in a target team;
a generating module for generating a three-dimensional avatar of each virtual object based on the pose information;
the display module is used for displaying a team image display interface in a first terminal corresponding to the first virtual object, and the team image display interface comprises three-dimensional virtual images of all the virtual objects; the first virtual object is any virtual object in the target team.
In a possible implementation manner, the display module is configured to display, in the first terminal, the team image display interface including a target scene picture, where the target scene picture is a picture obtained by observing a target scene at a specified viewing angle, and the target scene is a three-dimensional scene provided with three-dimensional avatars of the virtual objects.
In one possible implementation, the apparatus further includes:
the arrangement sequence acquisition module is used for acquiring the arrangement sequence of each virtual object in the target team;
and the position determining module is used for determining the positions of the three-dimensional virtual images of the virtual objects in the target scene respectively based on the arrangement sequence of the virtual objects in the target team.
In one possible implementation, the position determining module includes:
the setting submodule is used for setting a three-dimensional virtual image of the first virtual object to be positioned in the middle position of the target scene;
and the position determining sub-module is used for determining the position of the three-dimensional avatar of a second virtual object in the target scene according to the relation between the arrangement sequence of the first virtual object in the target team and the arrangement sequence of the second virtual object in the target team, where the second virtual object is a virtual object other than the first virtual object in the target team.
In one possible implementation, in response to the team avatar display interface being a team avatar display interface prior to the start of a virtual scene, the apparatus further comprises:
in the team image display interface, a three-dimensional virtual image display posture selection control corresponding to the first virtual object is arranged;
a pose determination module to determine a new pose of the first virtual object in response to operation of the pose selection control;
and the posture updating module is used for updating the three-dimensional virtual image of the first virtual object in the team image display interface according to the new posture of the first virtual object.
In one possible implementation, the apparatus further includes:
and the first display module is used for synchronizing the three-dimensional virtual image of the first virtual object with the updated posture to terminals corresponding to other virtual objects in the target team for display.
In one possible implementation, in response to the team avatar display interface being a team avatar display interface prior to the start of a virtual scene, the apparatus further comprises:
a display module, used for displaying, in the team image display interface, an appearance selection control corresponding to the three-dimensional avatar of the first virtual object;
an appearance determination module to determine a new appearance of the first virtual object in response to operation of the appearance selection control;
and the appearance updating module is used for updating the three-dimensional virtual image of the first virtual object in the team image display interface according to the new appearance of the first virtual object.
In one possible implementation, the apparatus further includes:
and the second display module is used for synchronizing the three-dimensional virtual image of the updated first virtual object to terminals corresponding to other virtual objects in the target team for display.
In one possible implementation, in response to the team character presentation interface being a settlement interface after the virtual scene ends, the apparatus further comprises:
a display module, used for displaying, in the team image display interface, an interactive control corresponding to the three-dimensional avatar of a second virtual object, where the second virtual object is a virtual object other than the first virtual object in the target team;
and the execution module is used for responding to the triggering operation of the interactive control and executing the interactive operation with the second virtual object.
In one possible implementation, the apparatus further includes:
and the information display module is used for responding to the received interactive operation of the second virtual object and displaying the interactive information corresponding to the interactive operation corresponding to the three-dimensional virtual image of the second virtual object.
In one possible implementation, the apparatus further includes:
the gesture obtaining module is used for obtaining at least two postures of the first virtual object in response to receiving a continuous touch operation based on the three-dimensional avatar of the first virtual object;
and the continuous updating module is used for sequentially updating the three-dimensional virtual images of the first virtual object corresponding to the at least two postures in the team image display interface according to the appointed posture sequence.
In one aspect, a computer device is provided, which includes a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or a set of instructions, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by the processor to implement the avatar exhibition method for the virtual object.
In one aspect, a computer-readable storage medium is provided, where at least one instruction, at least one program, a code set, or a set of instructions is stored in the storage medium, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by a processor to implement the avatar presentation method of the above-mentioned virtual object.
The technical scheme provided by the application can comprise the following beneficial effects:
in the team forming process of a virtual scene, the three-dimensional avatars of all the virtual objects are displayed in the team image display interface of the terminal. A user can therefore simply and directly learn, through these avatars, which virtual objects the other users on the same team have selected, reducing the user's judgment time during team formation, improving team forming efficiency, and also improving the interface display effect.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and, together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic block diagram of a terminal shown in accordance with an exemplary embodiment;
FIG. 2 is a schematic illustration of a display interface of a virtual scene shown in accordance with an exemplary embodiment;
FIG. 3 is a block diagram illustrating a virtual scene service system in accordance with an exemplary embodiment;
FIG. 4 is an interface diagram illustrating a virtual object presentation in the related art provided by an exemplary embodiment of the present application;
FIG. 5 is a flowchart illustrating an avatar presentation method for a virtual object provided in an exemplary embodiment of the present application;
FIG. 6 illustrates a flowchart of an avatar rendering method for a virtual object provided in an exemplary embodiment of the present application;
FIG. 7 is a schematic diagram illustrating a virtual object selection interface in accordance with an exemplary embodiment of the present application;
FIG. 8 illustrates a virtual object site diagram in accordance with an exemplary embodiment of the present application;
FIG. 9 illustrates a schematic diagram of a team character presentation shown in an exemplary embodiment of the present application;
FIG. 10 is a diagram illustrating a team avatar representation interface with updated virtual object poses in accordance with an exemplary embodiment of the present application;
FIG. 11 is a diagram illustrating a setup interface of an application according to an exemplary embodiment of the present application;
FIG. 12 illustrates a timing diagram of a first virtual object pose change illustrated in an exemplary embodiment of the present application;
FIG. 13 illustrates a schematic diagram of a settlement interface shown in an exemplary embodiment of the present application;
FIG. 14 illustrates a schematic diagram of a settlement interface shown in an exemplary embodiment of the present application;
FIG. 15 is a flowchart illustrating an avatar rendering method for a virtual object according to an exemplary embodiment of the present application;
FIG. 16 illustrates a flow chart of a method for avatar representation of a virtual object as illustrated in an exemplary embodiment of the present application;
fig. 17 is a block diagram illustrating a structure of an avatar presenting apparatus for a virtual object according to an exemplary embodiment of the present application;
fig. 18 is a block diagram illustrating an avatar presentation apparatus for a virtual object according to an exemplary embodiment of the present application;
FIG. 19 is a block diagram illustrating the structure of a computer device in accordance with an exemplary embodiment;
FIG. 20 is a block diagram illustrating the architecture of a computer device in accordance with one exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It is to be understood that reference herein to "a number" means one or more and "a plurality" means two or more. "And/or" describes the association relationship of the associated objects, meaning that there may be three relationships; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
The application provides a virtual image display method of a virtual object, which can reduce the judgment time of a user on the virtual object in a team forming process and improve the team forming efficiency. For ease of understanding, several terms referred to in this application are explained below.
1) Virtual scene
The virtual scene refers to a virtual scene displayed (or provided) when an application program runs on the terminal. The virtual scene can be a simulated environment of the real world, a semi-simulated semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, and a three-dimensional virtual scene; the following embodiments take a three-dimensional virtual scene as an example, but are not limited thereto. Optionally, the virtual scene is also used for a virtual scene battle between at least two virtual characters. Optionally, the virtual scene has virtual resources available to the at least two virtual characters. Optionally, the virtual scene includes a square map with a symmetric lower-left region and an upper-right region; virtual characters belonging to two enemy camps each occupy one of the regions, and destroying the target building/site/base/crystal deep in the opposing region serves as the winning objective.
2) Virtual object
A virtual object refers to a movable object in a virtual scene. The movable object may be at least one of a virtual character, a virtual animal, an animation character. Alternatively, when the virtual scene is a three-dimensional virtual scene, the virtual object may be a three-dimensional stereo model. Each virtual object has its own shape and volume in the three-dimensional virtual scene, occupying a portion of the space in the three-dimensional virtual scene. Optionally, the virtual character is a three-dimensional character constructed based on three-dimensional human skeleton technology, and the virtual character realizes different external images by wearing different skins. In some implementations, the virtual role can also be implemented by using a 2.5-dimensional or 2-dimensional model, which is not limited in this application.
3) Multi-person online tactical sports
In the multi-player online tactical competition, on a map provided by a virtual scene, different virtual teams belonging to at least two enemy camps each occupy their own map area and compete with a certain winning condition as the goal. Such winning conditions include, but are not limited to: occupying site points or destroying the site points of the enemy camp, killing the virtual characters of the enemy camp, ensuring one's own survival within a specified scene and time, seizing certain resources, and outscoring the opponent within a specified time. The tactical competition can be carried out in units of rounds, and the map of each round can be the same or different. Each virtual team includes one or more virtual characters, such as 1, 3, or 5.
4) MOBA (Multiplayer Online Battle Arena) game
The MOBA game is a game in which a number of base points are provided in the virtual world and users in different camps control virtual characters to battle, occupy base points, or destroy the enemy camp's base points. For example, the MOBA game may divide users into two enemy camps and disperse the virtual characters controlled by the users across the virtual world to compete with each other, with destroying or occupying all of the enemy's base points as the winning condition. The MOBA game is played in rounds, and the duration of one round runs from the moment the game starts to the moment the winning condition is met.
5) Team display interface
In the MOBA game, before each battle starts, an interface is provided for the player to select the virtual character to be used in the battle. In this interface, players on the same team can see the virtual characters selected by the other players on the team, so a player can choose a character in light of the teammates' choices and thereby determine the team's lineup for the game.
6) Settlement interface
In the MOBA game, at the end of each battle, whether the player's team wins or loses, the game enters a battle settlement interface. The settlement interface can display each teammate's battle performance, obtainable rewards, experience, and the like for the current round.
Meanwhile, in the settlement interface, players on the same team can perform interactive operations such as giving a like or sending a gift.
7) Game skin
The game skin refers to the appearance of the virtual character; the initial appearance of the virtual character is called the default skin. In some games, a skin may provide the game character with a bonus to some attribute, such as increasing the virtual character's attack or defense, and so on.
8) Model display animation
The model display animation refers to a segment of motion performed by the 3D (Three-Dimensional) model of a virtual character in the game interface. The motion may be long or short and may include corresponding special effects, improving the expressiveness and visual appeal of the virtual character.
FIG. 1 shows a block diagram of a computer system provided in an exemplary embodiment of the present application. The computer system 100 includes: a first terminal 110, a server cluster 120, a second terminal 130.
The first terminal 110 is installed and operated with a client 111 supporting a virtual scene, and the client 111 may be a multiplayer online battle program. When the first terminal runs the client 111, a user interface of the client 111 is displayed on the screen of the first terminal 110. The client can be any one of a MOBA game, a battle royale shooting game, and an SLG game; in the present embodiment, a MOBA game is used as an example. The first terminal 110 is the terminal used by the first user 101, who uses it to control a first virtual character located in the virtual scene, which may be referred to as the master virtual character of the first user 101. The activities of the first virtual character include, but are not limited to, at least one of: adjusting body posture, crawling, walking, running, riding, flying, jumping, driving, picking up, shooting, attacking, and throwing. Illustratively, the first virtual character is a character such as a simulated persona or an animated persona.
The second terminal 130 is installed and operated with a client 131 supporting a virtual scene, and the client 131 may be a multiplayer online battle program. When the second terminal 130 runs the client 131, a user interface of the client 131 is displayed on the screen of the second terminal 130. The client may be any one of a MOBA game, a battle royale shooting game, and an SLG game; in the present embodiment, a MOBA game is used as an example. The second terminal 130 is the terminal used by the second user 102, who uses it to control a second virtual character located in the virtual scene, which may be referred to as the master virtual character of the second user 102. Illustratively, the second virtual character is a character such as a simulated persona or an animated persona.
Optionally, the first virtual character and the second virtual character are in the same virtual scene. Optionally, the first virtual character and the second virtual character may belong to the same camp, the same team, the same organization, a friend relationship, or a temporary communication right. Alternatively, the first virtual character and the second virtual character may belong to different camps, different teams, different organizations, or have a hostile relationship.
Optionally, the clients installed on the first terminal 110 and the second terminal 130 are the same, or the clients installed on the two terminals are the same type of client on different operating system platforms (Android or iOS). The first terminal 110 may generally refer to one of a plurality of terminals, and the second terminal 130 may generally refer to another of the plurality of terminals; this embodiment is only illustrated by the first terminal 110 and the second terminal 130. The device types of the first terminal 110 and the second terminal 130 are the same or different, and include at least one of a smartphone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, a laptop portable computer, and a desktop computer.
Only two terminals are shown in fig. 1, but there are a plurality of other terminals 140 that may access the server cluster 120 in different embodiments. Optionally, one or more terminals 140 correspond to the developer: a development and editing platform for the client of the virtual scene is installed on the terminal 140, the developer can edit and update the client on the terminal 140 and transmit the updated client installation package to the server cluster 120 through a wired or wireless network, and the first terminal 110 and the second terminal 130 can download the client installation package from the server cluster 120 to update the client.
The first terminal 110, the second terminal 130, and the other terminals 140 are connected to the server cluster 120 through a wireless network or a wired network.
The server cluster 120 includes at least one of a server, a plurality of servers, a cloud computing platform, and a virtualization center. The server cluster 120 is used for providing background services for clients supporting three-dimensional virtual scenes. Optionally, the server cluster 120 undertakes primary computing work and the terminals undertake secondary computing work; or, the server cluster 120 undertakes the secondary computing work, and the terminal undertakes the primary computing work; alternatively, the server cluster 120 and the terminal perform cooperative computing by using a distributed computing architecture.
In one illustrative example, the server cluster 120 includes a server 121 and a server 126, where the server 121 includes a processor 122, a user account database 123, a combat service module 124, and a user-oriented Input/Output Interface (I/O Interface) 125. The processor 122 is configured to load instructions stored in the server 121 and process data in the user account database 123 and the combat service module 124; the user account database 123 is configured to store data of the user accounts used by the first terminal 110, the second terminal 130, and the other terminals 140, such as the head portrait of the user account, the nickname of the user account, the fighting capacity index of the user account, and the service area where the user account is located; the combat service module 124 is used for providing a plurality of combat rooms, such as 1V1, 3V3, and 5V5 battles, for users to fight in; and the user-facing I/O interface 125 is used to establish communication with the first terminal 110 and/or the second terminal 130 through a wireless or wired network to exchange data. Optionally, an intelligent signal module 127 is disposed in the server 126, and the intelligent signal module 127 is used for implementing the avatar display method of the virtual object provided in the following embodiments.
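As an illustration only, a record in the user account database 123 described above might be organized as follows; the field names and types are hypothetical, since the description only lists the kinds of data stored:

```python
from dataclasses import dataclass

# Hypothetical record layout for the user account database 123.
# The description names the stored items (head portrait, nickname,
# fighting capacity index, service area) but not their field names or types.
@dataclass
class UserAccount:
    account_id: str      # unique identifier of the user account
    head_portrait: str   # URL or resource id of the account's avatar image
    nickname: str        # nickname shown to other players
    combat_index: int    # fighting capacity index of the account
    service_area: str    # service area (region/server) of the account
```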
Fig. 2 is a diagram illustrating a map provided by a virtual scene of a MOBA game according to an exemplary embodiment of the present application. The map 200 is square and is diagonally divided into a lower left triangular region 220 and an upper right triangular region 240. There are three routes from the lower left corner of the lower left triangular region 220 to the upper right corner of the upper right triangular region 240: an upper lane 21, a middle lane 22, and a lower lane 23. In a typical round, 10 virtual characters are divided into two teams for competition: the 5 virtual characters of the first camp occupy the lower left triangular region 220, and the 5 virtual characters of the second camp occupy the upper right triangular region 240. The first camp wins by destroying or occupying all the base points of the second camp, and the second camp wins by destroying or occupying all the base points of the first camp.
Illustratively, the base points of the first camp include: 9 defensive towers 24 and a first base 25. Of the 9 defensive towers 24, 3 are located on each of the upper lane 21, the middle lane 22, and the lower lane 23; the first base 25 is located at the lower left corner of the lower left triangular region 220.
Illustratively, the base points of the second camp include: 9 defensive towers 24 and a second base 26. Of the 9 defensive towers 24, 3 are located on each of the upper lane 21, the middle lane 22, and the lower lane 23; the second base 26 is located in the upper right corner of the upper right triangular region 240.
The area along the dotted line in fig. 2 may be referred to as the river channel area. The river channel area is a common area shared by the first camp and the second camp, and is also the bordering area between the lower left triangular region 220 and the upper right triangular region 240.
The MOBA game requires each virtual character to acquire resources in the map 200, thereby improving the combat ability of the virtual character. The resources include:
1. Soldiers periodically appear on the upper lane 21, the middle lane 22, and the lower lane 23; killing them yields experience and gold coins.
2. Using the middle lane (the diagonal from bottom left to top right) and the river channel area (the diagonal from top left to bottom right) as dividing lines, the map can be divided into 4 triangular areas A, B, C, and D (also called the four jungle areas). Jungle monsters are periodically refreshed in these 4 areas, and when a jungle monster is killed, nearby virtual characters obtain experience, gold coins, and gain (BUFF) effects.
3. A major dragon 27 and a minor dragon 28 are periodically refreshed at two symmetrical positions in the river channel area. When the major dragon 27 or the minor dragon 28 is killed, all virtual characters in the killing side's camp obtain experience, gold coins, and BUFF effects. The major dragon 27 may be referred to by other names such as "dominator" or "kaiser", and the minor dragon 28 may be referred to by other names such as "tyrant" or "magic dragon".
In one example, there is a gold-coin monster at each of the upper and lower river channels, appearing 30 seconds after the game opens. Killing it yields gold coins, and it refreshes 70 seconds after being killed.
Area A: contains one red BUFF, two common monsters (a pig and a bird), and the tyrant (minor dragon). The red BUFF and the common monsters appear 30 seconds after the game opens; a common monster refreshes 70 seconds after being killed, and the red BUFF refreshes 90 seconds after being killed.
The tyrant appears 2 minutes after the game opens and refreshes three minutes after being killed; killing it grants gold coins and experience rewards to the whole team. The tyrant disappears at 9 minutes 55 seconds, the dark tyrant appears at 10 minutes, and killing the dark tyrant grants the dark tyrant BUFF and the revenge BUFF.
Area B: contains a blue BUFF and two common monsters (a wolf and a bird), which also appear at 30 seconds and refresh every 90 seconds after being killed.
Area C: is identical to area B: two common monsters (a wolf and a bird) and a blue BUFF, which also appear at 30 seconds and refresh every 90 seconds.
Area D: is similar to area A: a red BUFF and two common monsters (a pig and a bird), the red BUFF likewise adding damage and deceleration to attacks. Area D also contains the dominator (major dragon). The dominator appears 8 minutes after the game opens and refreshes five minutes after being killed; killing the dominator grants the dominator BUFF, the revenge BUFF, and online dominator pioneers (or a manually summoned sky dragon, also called a bone dragon).
In one illustrative example, the BUFF details are as follows:
Red BUFF: lasts 70 seconds; attacks are accompanied by continuous burning damage and deceleration.
Blue BUFF: lasts 70 seconds; shortens cooldown time and additionally restores a certain amount of mana per second.
Killing the dark tyrant grants the dark tyrant BUFF and the revenge BUFF:
Dark tyrant BUFF: increases the physical attack of the whole team (80 plus 5% of current physical attack) and the magic attack of the whole team (120 plus 5% of current magic attack), lasting 90 seconds.
Revenge BUFF: damage dealt by the dominator is reduced by 50%; this BUFF does not disappear on death and lasts 90 seconds.
Killing the dominator grants the dominator BUFF and the revenge BUFF:
Dominator BUFF: improves the whole team's health regeneration and mana regeneration by 1.5% per second, lasting 90 seconds. The dominator BUFF is lost on death.
Revenge BUFF: damage dealt by the dark tyrant is reduced by 50%; this BUFF does not disappear on death and lasts 90 seconds.
Killing the dominator also brings the following benefits:
1. All team members receive 100 gold coins and the BUFF benefits, regardless of whether they participated in killing the dominator, including master virtual characters waiting to respawn (on revival CD).
2. From the moment the dominator is killed, the next three waves of soldiers on all three lanes of the killing side become dominator pioneers (flying dragons). The dominator pioneer is very powerful and pushes all three lanes simultaneously, putting huge lane pressure on the opponent, who must split up to defend. A dominator pioneer alarm is shown on the map, indicating the waves (typically three) in which the pioneers will arrive.
The combat capability of the 10 virtual characters includes two parts: level and equipment. Levels are obtained from accumulated experience values, and equipment is purchased with accumulated gold coins. The 10 virtual characters can be obtained by the server matching 10 user accounts online. Illustratively, the server matches 2, 6, or 10 user accounts for competition in the same virtual world online. The 2, 6, or 10 virtual characters belong to two enemy camps respectively, with the same number of virtual characters in each camp. For example, each camp has 5 virtual characters, whose division of labor may be: warrior-type characters, assassin-type characters, mage-type characters, support-type characters, and shooter-type characters.
The battle can be carried out in units of rounds, and the map of each round of battle can be the same or different. Each camp includes one or more virtual characters, such as 1, 3, or 5.
In a virtual scene requiring team cooperation, victory must be obtained through cooperation between teammates. Therefore, in the process of selecting a virtual object, a user needs to determine his or her own virtual object according to the virtual objects selected by teammates, so as to distribute the lineup reasonably. In the related art, the virtual objects selected by other users on the same team are generally presented as thumbnails through a two-dimensional plan view. Referring to fig. 3, which shows an interface diagram of the avatar presentation of virtual objects in the related art according to an exemplary embodiment of the present application: the figure shows a team interface in which the head portraits of all the virtual objects selected by users on the same team are presented in an area 310 of the terminal display interface 300. In this area, only the head-portrait thumbnail of each virtual object is displayed, together with the user name of the user who selected it. As a result, users who are unfamiliar with the two-dimensional head portraits cannot intuitively learn specific information about the virtual objects selected by other users on the team, such as the complete avatar and the name of each virtual object. Users therefore have to spend time judging the virtual objects selected by others during team formation, which increases team forming time and makes team forming inefficient.
Referring to fig. 4, which illustrates an interface diagram of the avatar display of virtual objects in the related art according to an exemplary embodiment of the present application: the figure shows a settlement interface in which the avatars of all users on the same team are displayed in an area 410 of the terminal display interface 400. In this area, each virtual object is displayed as a 2D head portrait, 2D bust, or 2D full-length portrait, together with the name of the user who selected it. However, the display effect of a two-dimensional model is thin and cannot fully reflect the avatar, so the interface display effect is poor.
To solve the above technical problem, the present application provides an avatar display method for a virtual object. Referring to fig. 5, which shows a flowchart of an avatar display method for a virtual object according to an exemplary embodiment of the present application: the method may be performed by a terminal, by a server, or by a terminal and a server interactively, where the terminal and the server may be the terminal and the server in the system shown in fig. 3. As shown in fig. 5, the avatar display method may include the following steps:
Step 510, in response to a team image display instruction, acquiring posture information, where the posture information is used to indicate the respective posture of each virtual object in the target team.
Each virtual object in the target team may be animated in a virtual scene as shown in fig. 2.
In a possible implementation manner, the team image display instruction may be an instruction sent by a user by clicking a team image display control in the terminal display interface, where the team image display control is used to instruct the terminal to switch the current display interface to the team image display interface after receiving the user's touch operation;
or, the team image display instruction may be issued by an application program preset based on a specific virtual scene, and the instruction is used for instructing the application program to switch the current terminal display interface to the team image display interface.
The team image display interface may include display interfaces in at least two scenes. For example, the at least two scenes may include the scene, displayed during team formation, of the virtual objects selected by each user; the scene when the team has just entered the virtual scene; or the settlement scene after the team exits the virtual scene; and the like.
In the embodiment of the present application, the posture of a virtual object may be visually expressed as a model display animation of the virtual object. Each virtual object may have at least one posture; the initial model display animation may be referred to as the default posture of the virtual object, and postures other than the default posture may be obtained through other channels, for example, in a game, through card drawing, purchase, exchange, gifting, and the like.
The posture of a virtual object may include, but is not limited to, factors such as the action, background, and special effects of the virtual object. Different postures of the same virtual object differ in at least one of these factors, different postures correspond to different posture information, and the corresponding posture of the virtual object can be generated according to the posture information.
Step 520, generating a three-dimensional avatar of each virtual object based on the pose information.
In the embodiment of the present application, the avatar of each virtual object is a three-dimensional avatar, and the posture information includes data information required for generating the three-dimensional avatar of each virtual object.
In a possible implementation manner, each of the at least one posture of each virtual object corresponds to a posture number, and the corresponding posture information can be queried by obtaining the posture number.
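As a minimal sketch of this lookup (the table layout and names below are assumptions for illustration, not structures defined by the patent), the posture number can index a table of posture descriptions:

```python
# Hypothetical posture table: maps (virtual_object_id, posture_number) to the
# posture data (action, background, special effect) needed to build the
# three-dimensional avatar. Posture number 0 stands for the default posture.
POSTURE_TABLE = {
    ("hero_A", 0): {"action": "idle_default", "background": "plain", "effect": None},
    ("hero_A", 1): {"action": "victory_wave", "background": "arena", "effect": "sparkle"},
}

def query_posture_info(object_id: str, posture_number: int) -> dict:
    """Return the posture information for a posture number, falling back to
    the object's default posture (number 0) if the number is unknown."""
    return POSTURE_TABLE.get((object_id, posture_number),
                             POSTURE_TABLE[(object_id, 0)])
```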
Step 530, displaying a team image display interface in the first terminal corresponding to the first virtual object, wherein the team image display interface comprises three-dimensional virtual images of all the virtual objects; the first virtual object is any virtual object in the target team.
The first terminal logs in a first user account, and the first user account corresponds to the first virtual object, that is, the terminal (i.e., the first terminal) that logs in the first user account has a right to control the first virtual object in the virtual scene, that is, the user can control the first virtual object in the virtual scene through the first terminal.
An application program supporting the virtual scene is installed in the first terminal, and the user can log in to the application through the first user account in the first terminal, so that the team image display interface is displayed in the first terminal.
Each virtual object refers to a virtual object selected by each user account among candidate virtual objects provided in an application program, and the virtual objects are usually divided into different professions, are set with different attribute values, are configured with different skills, and the like.
The first virtual object is any virtual object in the target team; that is, from the perspective of any virtual object in the target team, that virtual object itself is the first virtual object. For example, in a target team consisting of 5 virtual objects A, B, C, D, and E: for the virtual object A, the virtual object A is the first virtual object; for the virtual object B, the virtual object B is the first virtual object; and so on. A team image display interface is displayed in the terminal display interface corresponding to each virtual object.
In one possible implementation manner, the name of each virtual object may be displayed in the team image display interface, below the avatar corresponding to the virtual object, or above it or in another area.
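Taken together, steps 510 to 530 might look like the following client-side sketch; all function and object names here (renderer, build_avatar, and so on) are hypothetical placeholders, not APIs defined by the patent, and query_posture_info is the lookup sketched above:

```python
def on_team_display_instruction(target_team, renderer):
    """Sketch of steps 510-530 for the terminal of the first virtual object."""
    # Step 510: acquire posture information for every virtual object in the team.
    posture_infos = {member.object_id: query_posture_info(member.object_id,
                                                          member.posture_number)
                     for member in target_team.members}
    # Step 520: generate a three-dimensional avatar from each posture description.
    avatars = {object_id: renderer.build_avatar(info)
               for object_id, info in posture_infos.items()}
    # Step 530: display the team image display interface containing all avatars.
    renderer.show_team_interface(avatars)
```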
In summary, according to the avatar display method for a virtual object provided by the embodiment of the present application, the three-dimensional avatar of each virtual object is displayed in the team image display interface of the terminal, so that a user can simply and directly learn which virtual objects the other users on the same team have selected. This reduces the user's judgment time during team formation, improves team forming efficiency, and also improves the interface display effect.
Referring to fig. 6, a flowchart of an avatar display method for a virtual object according to an exemplary embodiment of the present application is shown. The method may be executed by a terminal, by a server, or by a terminal and a server interactively, where the terminal and the server may be the terminal and the server in the system shown in fig. 3. As shown in fig. 6, the avatar display method may include the following steps:
Step 610, in response to a team image display instruction, acquiring posture information, where the posture information is used to indicate the respective posture of each virtual object in the target team.
In a possible implementation manner, each virtual object may have at least one posture. When a team image display instruction is received, the acquired posture information may be the default posture information of the virtual objects, used to indicate the respective default posture of each virtual object in the target team. The default posture of a virtual object may be one of its at least one posture: it may be the initial posture when the user obtained the virtual object, it may be set by the user, or the application program may set it according to how frequently the user uses each posture of the virtual object.
In a possible implementation manner, the team image display instruction may be an instruction issued by a user by clicking a team image display control in the terminal display interface; in the first terminal, the team image display control may be the control used by the first terminal user to lock a virtual object. Referring to fig. 7, which shows a schematic diagram of a virtual object selection interface according to an exemplary embodiment of the present application: candidate virtual objects are displayed in an area 710 of the virtual object selection interface. There are at least two candidate virtual objects, each with a corresponding name, and based on the user's selection of a candidate virtual object, elements such as its skills and attributes may be displayed in an area 720. When the user confirms one of the candidate virtual objects as his or her virtual object, the user can lock it by clicking a control 730 in the virtual object selection interface, i.e., the candidate virtual object is determined as the first virtual object. After receiving the user's touch operation on the control 730, the terminal or the server switches the virtual object selection interface to the team image display interface; that is, the control 730 is the team image display control, and the team image display instruction may be the instruction by which the user locks the first virtual object.
Step 620, generating a three-dimensional avatar of each virtual object based on the posture information.
In one possible implementation, the posture information includes posture information corresponding to all virtual objects in the target team. If some virtual objects in the target team have not yet been determined, the posture information of those objects may be acquired as base posture information, which indicates the undetermined virtual objects in the target team. For example, suppose that among 5 users on the same team, the first user of the first terminal has locked the first virtual object and issued the team image display instruction, but some of the other 4 users have not yet determined their virtual objects; that is, there are undetermined virtual objects in the target team. The posture information of the undetermined virtual objects may then be acquired as the base posture information.
In a possible implementation manner, a preset mark may be generated based on the base posture information. The preset mark may be a two-dimensional image or a three-dimensional image; for example, it may be a preset picture and/or text, or a preset basic model.
Step 630, obtaining the arrangement sequence of each virtual object in the target team.
In one possible implementation, the ranking order refers to the order in which the virtual objects join the target team.
In the team formation process, the confirmation time of each virtual object is determined by the end user controlling that virtual object, and different users need different amounts of time to select and confirm the virtual object to be used, so the order in which the virtual objects join the target team differs.
In one possible implementation manner, each virtual object has a different arrangement number corresponding to its order in the team. For example, the virtual object that joins the target team first has arrangement number 1; the virtual object that joins second has arrangement number 2; and the virtual object that joins nth has arrangement number n. The order of the virtual objects in the target team can be confirmed based on the arrangement numbers of the different virtual objects.
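A minimal sketch of this numbering, assuming the team simply records its members in join order (the names here are illustrative):

```python
def arrangement_numbers(join_order):
    """Assign arrangement numbers 1..n by the order in which virtual objects
    joined the target team; join_order lists object ids, earliest first."""
    return {object_id: index + 1 for index, object_id in enumerate(join_order)}

# Example: A joined the team first, then B, then C.
assert arrangement_numbers(["A", "B", "C"]) == {"A": 1, "B": 2, "C": 3}
```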
Step 640, determining the positions of the three-dimensional avatars of the virtual objects in the target scene based on the arrangement sequence of the virtual objects in the target team.
In one possible implementation manner, determining the position of the three-dimensional avatar of each virtual object in the target scene may be implemented as:
setting a three-dimensional virtual image of a first virtual object to be positioned at the middle position in a target scene;
and determining the position of the three-dimensional avatar of a second virtual object in the target scene according to the relation between the arrangement sequence of the first virtual object in the target team and the arrangement sequence of the second virtual object in the target team, where the second virtual object is a virtual object other than the first virtual object in the target team.
That is, in the team image display interface, the first virtual object is always kept at the middle position in the target scene, for example, in the target team composed of 5 virtual objects a, B, C, D, E, in the team image display interface of the terminal corresponding to the virtual object a, the three-dimensional virtual image of the virtual object a is at the middle position in the target scene; and in a team display interface of the terminal corresponding to the virtual object B, the three-dimensional virtual image of the virtual object B is in the middle position in the target scene, and the like.
After the position of the three-dimensional avatar of the first virtual object in the target scene is determined, the positions of the three-dimensional avatars of the other virtual objects in the same team need to be determined according to the position of the three-dimensional avatar of the first virtual object in the target scene and the relationship between the arrangement sequence of the other virtual objects and the first virtual object.
In a possible implementation manner, the position of the three-dimensional avatar of the second virtual object in the target scene may be determined according to the arrangement order of the second virtual object relative to the first virtual object. Referring to fig. 8, which shows a schematic diagram of the virtual object station positions according to an exemplary embodiment of the present application: as shown in fig. 8, 5 station points, namely station points 0, 1, 2, 3, and 4, may be preset in the target scene. Assuming that the station point number of the second virtual object is X, the total number of members the target team can accommodate is C (0 < C ≤ 5), the arrangement number of the second virtual object in the target team is N (0 < N ≤ 5), and the arrangement number of the first virtual object in the target team is M (0 < M ≤ 5), the method of determining the position of the second virtual object in the target scene may be implemented as follows:
respectively acquiring the arrangement number M of a first virtual object in a target team and the arrangement number N of a second virtual object in the target team;
calculating the relative arrangement number Y of the second virtual object with respect to the first virtual object based on the arrangement numbers M and N;
and determining the position of the three-dimensional virtual image of the second virtual object in the target scene according to the mapping relation between the relative arrangement number Y and the station position point X.
The calculation process of the relative arrangement number Y may be implemented as:
when M < N, Y = N-M;
when M > N, Y = (C-M) + N;
when M = N, Y =0, i.e., the second virtual object is the first virtual object.
The preset mapping relationship between the relative arrangement number Y and the station point X may be:
when C ≤ 3, Y = {0,1,2} and X = {0,1,4}; that is, when the target team can accommodate at most 3 virtual objects: when Y = 0, X = 0, that is, the second virtual object is the first virtual object and its station point is station point 0 shown in fig. 8; when Y = 1, X = 1, that is, the station point of the second virtual object is station point 1 shown in fig. 8; when Y = 2, X = 4, that is, the station point of the second virtual object is station point 4 shown in fig. 8.
When C = 4, Y = {0,1,2,3} and X = {0,1,2,4} or X = {0,1,3,4}; that is, when the target team can accommodate at most 4 virtual objects: when Y = 0, X = 0; when Y = 1, X = 1; when Y = 2, X = 2 or 3, that is, the station point of the second virtual object is station point 2 or station point 3 shown in fig. 8; when Y = 3, X = 4.
When C = 5, Y = {0,1,2,3,4} and X = {0,1,2,3,4}; that is, when the target team can accommodate at most 5 virtual objects, the relative arrangement number maps directly onto the station points: when Y = 0, X = 0 (the second virtual object is the first virtual object); when Y = 1, X = 1; when Y = 2, X = 2; when Y = 3, X = 3; and when Y = 4, X = 4, the station points being those shown in fig. 8.
Alternatively, the mapping relationship between the relative arrangement number Y and the station point X may be:
when C is less than or equal to 3, Y = {0,1,2}, X = {0,4,1};
when C =4, Y = {0,1,2,3}, X = {0,4,3,1} or X = {0,4,2,1};
when C =5, Y = {0,1,2,3,4}, and X = {0,4,3,2,1}.
The two mapping rules above can be understood as determining the position of the three-dimensional avatar of the second virtual object in the target scene clockwise or counterclockwise according to the arrangement order of the second virtual object relative to the first virtual object, taking the position of the three-dimensional avatar of the first virtual object in the target scene as the starting point.
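For illustration, the clockwise variant of this assignment can be sketched as follows in Python; the function names, the dictionary layout, and the choice of X = {0,1,2,4} for C = 4 are assumptions of the sketch, not part of the embodiment:

```python
# Sketch of the clockwise station-point assignment described above.

# Mapping from relative arrangement number Y to station point X, keyed by
# team capacity C (the first of the two example rules above).
CLOCKWISE_MAP = {
    3: [0, 1, 4],
    4: [0, 1, 2, 4],      # {0, 1, 3, 4} is the equally valid variant
    5: [0, 1, 2, 3, 4],
}

def relative_number(m: int, n: int, c: int) -> int:
    """Relative arrangement number Y of the object numbered N with respect
    to the object numbered M, in a team of capacity C."""
    if m == n:
        return 0              # the second virtual object is the first one
    if m < n:
        return n - m          # Y = N - M
    return (c - m) + n        # Y = (C - M) + N

def station_point(m: int, n: int, c: int) -> int:
    """Station point X of the teammate numbered N, as seen on the terminal
    whose own virtual object is numbered M."""
    y = relative_number(m, n, c)
    return CLOCKWISE_MAP[3 if c <= 3 else c][y]

# Example: in a 5-person team, seen from the player numbered 2, the player
# numbered 4 stands at station point 2 and the player numbered 1 at point 4.
assert station_point(m=2, n=4, c=5) == 2
assert station_point(m=2, n=1, c=5) == 4
```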
In one possible implementation, the positions of the three-dimensional avatars of the virtual objects in the target scene may instead be determined directly according to the order in which the virtual objects join the target team; that is, the station points in the target scene are fixed: the virtual object that joins the target team first is located at station point 0, the second at station point 1, the third at station point 2, the fourth at station point 3, and the fifth at station point 4.
It should be noted that the above description of the station point numbers and the arrangement order of the virtual objects is only illustrative and does not limit the method for determining the position of the three-dimensional avatar of a virtual object in the target scene provided in the present application.
Step 650, displaying, in the first terminal, a team image display interface including a target scene picture, where the target scene picture is a picture obtained by observing the target scene at a specified viewing angle, and the target scene is a three-dimensional scene provided with the three-dimensional avatars of the respective virtual objects.
In one possible implementation manner, the three-dimensional avatar of each virtual object in the target scene picture is rotatable; that is, the three-dimensional virtual model of a virtual object can be observed from any direction in the target scene picture. The rotation of the three-dimensional virtual model of a virtual object may be implemented as follows:
and S6501, receiving touch operation of a user based on the target scene picture.
In one possible implementation, the touch operation may be a long press sliding operation.
S6502, acquiring action coordinates of the touch operation on the target scene picture.
In one possible implementation, a reference coordinate system may be selected in the target scene to describe the positions of the camera and the objects in the target scene. The reference coordinate system may be a camera coordinate system, which is a three-dimensional coordinate system: the optical center of the camera is taken as the origin, the x-axis and y-axis are parallel to the x-axis and y-axis of the image, and the z-axis is the camera optical axis, perpendicular to the image plane.
S6503, taking the action coordinate as a starting point, emitting a ray in a preset direction, and acquiring the collision area that receives the ray, where the collision area is one of the collision areas pre-placed on the station points in the target scene picture.
In a possible implementation, the preset direction may be along the optical axis of the camera, i.e., the z-axis direction of the camera coordinate system.
S6504, rotating, based on the touch operation, the station point where the collision area receiving the ray is located, so as to rotate the three-dimensional avatar of the virtual object standing on that station point.
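Since the ray is emitted parallel to the camera optical axis, the hit test against the collision areas reduces to a 2D containment test in the camera xy-plane. A minimal, engine-agnostic sketch follows; the CollisionBox type and its extents are assumptions for illustration:

```python
# Sketch of steps S6501-S6504: map a touch coordinate to the station point
# whose pre-placed collision area receives the ray.
from dataclasses import dataclass

@dataclass
class CollisionBox:
    station_point: int
    x_min: float
    x_max: float
    y_min: float
    y_max: float

def pick_station_point(touch_x, touch_y, boxes):
    """Return the station point hit by a ray cast from (touch_x, touch_y)
    along the camera z-axis, or None if no collision area receives it."""
    for box in boxes:
        if box.x_min <= touch_x <= box.x_max and box.y_min <= touch_y <= box.y_max:
            return box.station_point
    return None

# One collision box per preset station point (coordinates are made up):
boxes = [CollisionBox(i, i * 2.0, i * 2.0 + 1.5, 0.0, 3.0) for i in range(5)]
print(pick_station_point(2.5, 1.0, boxes))  # -> 1: rotate station point 1
```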
In one possible implementation manner, the rotation angle and the rotation speed of the station point may be determined according to the touch operation. The rotation angle of the station point may be obtained by mapping the sliding distance of the touch operation:

ΔX = L * α

where ΔX represents the rotation angle of the station point, L represents the sliding distance of the touch operation, and α represents a sliding coefficient that can be adjusted according to actual requirements.
When the touch operation ends, that is, when the user stops the rotation operation on the virtual object, the above mapping is no longer applied and a slow-stop behaviour is triggered: the rotation speed S of the station point is recalculated on each frame of the target scene picture, so that the rotation angle applied in each frame changes with the rotation speed updated in real time. This produces the effect of the virtual object's rotation slowly coming to a stop, improving the user's visual experience. The updated rotation speed S of the station point may be calculated as:
S = S1 - t * β

where S represents the rotation speed of the station point in the current frame, S1 represents the rotation speed of the station point in the previous frame, t represents the refresh interval of each frame of the target scene picture, and β represents a wind resistance coefficient that can be adjusted according to actual requirements.
The current rotation angle of the station point is then:

X = X1 + S

where X represents the current rotation angle of the station point and X1 represents the rotation angle of the station point in the previous frame.
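Putting the three formulas together, the slow-stop phase can be sketched as follows; the values of α and β are arbitrary examples, not prescribed values:

```python
# Sketch of the rotation model above: ΔX = L * α while dragging, then
# S = S1 - t * β and X = X1 + S per frame until the rotation stops.
ALPHA = 0.5    # sliding coefficient (angle per unit of slide distance)
BETA = 60.0    # wind resistance coefficient

def angle_while_dragging(slide_distance: float) -> float:
    """Rotation increment while the touch operation is in progress."""
    return slide_distance * ALPHA

def slow_stop_frame(angle: float, speed: float, frame_dt: float):
    """One frame after the touch ends: apply X = X1 + S, then decay the
    speed with S = S1 - t * β, clamped at zero."""
    angle += speed
    speed = max(0.0, speed - frame_dt * BETA)
    return angle, speed

# After the finger lifts with a residual speed, the avatar coasts to a stop:
angle, speed = 90.0, 12.0
while speed > 0.0:
    angle, speed = slow_stop_frame(angle, speed, frame_dt=1 / 60)
print(angle)  # -> 168.0, the resting angle
```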
In a possible implementation manner, the team image display interface may be implemented as a team image display interface before the virtual scene starts, or as a settlement interface after the virtual scene ends.
In response to the team image display interface being the team image display interface before the start of the virtual scene, the method further comprises:
displaying, in the team image display interface, a posture selection control corresponding to the three-dimensional avatar of the first virtual object;
in response to an operation of the gesture selection control, determining a new gesture of the first virtual object;
and updating the three-dimensional virtual image of the first virtual object in the team image display interface according to the new posture of the first virtual object.
Referring to fig. 9, which illustrates a schematic diagram of a team image display interface according to an exemplary embodiment of the present application, at least one virtual object is displayed in the team image display interface, and the first virtual object is located at the middle position of the target scene picture. A posture selection control 910 is present in the interface, and the user may switch the posture of the first virtual object by operating the posture selection control 910, so as to update the three-dimensional avatar of the first virtual object in the team image display interface.
For example, referring to fig. 10, which illustrates a schematic diagram of the team image display interface after the posture of the virtual object is updated according to an exemplary embodiment of the present application, after the user operates the posture selection control 1010, selection sub-controls 1011 corresponding to different postures are presented at the posture selection control 1010, and the user may change the three-dimensional avatar of the first virtual object through the selection sub-controls.
In one possible implementation, the change in the three-dimensional avatar of the first virtual object may also be:
in response to receiving continuous touch operations on the three-dimensional avatar of the first virtual object, acquiring at least two postures of the first virtual object;
and according to a specified posture sequence, sequentially updating, in the team image display interface, the three-dimensional avatars of the first virtual object corresponding to the at least two postures.
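A hypothetical sketch of this behaviour, where each touch on the avatar advances the displayed posture one step through the specified posture sequence:

```python
# Sketch: cycle the first virtual object's posture on repeated touches.
class PostureCycler:
    def __init__(self, posture_sequence: list[str]):
        self.posture_sequence = posture_sequence  # order set in the UI
        self.index = 0

    def on_avatar_touched(self) -> str:
        """Return the posture to display for this touch, then advance."""
        posture = self.posture_sequence[self.index]
        self.index = (self.index + 1) % len(self.posture_sequence)
        return posture

cycler = PostureCycler(["default", "taunt", "victory"])
print([cycler.on_avatar_touched() for _ in range(4)])
# -> ['default', 'taunt', 'victory', 'default']
```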
In a possible implementation manner, the presentation order of the three-dimensional avatars of the first virtual object may be set on a setting interface of the application program. Referring to fig. 11, which shows a schematic diagram of the setting interface of the application program according to an exemplary embodiment of the present application, the presentation order of the three-dimensional avatars of the first virtual object may be adjusted in a presentation order adjusting area 1110 of the setting interface. For example, in the illustration, a default posture is currently at the first position of the presentation order and a taunting posture is at the second position; the user may move the taunting posture from the second position to the first position by dragging or the like, move the default posture to any position of the presentation order, and likewise adjust the presentation order of the other postures as required.
In a possible implementation manner, the updated three-dimensional avatar of the first virtual object is synchronized to terminals corresponding to other virtual objects in the team for displaying.
That is to say, the update of the three-dimensional avatar of each virtual object is synchronized to the terminals corresponding to the other virtual objects in the same team. Referring to fig. 12, which shows a sequence diagram of a posture change of the first virtual object according to an exemplary embodiment of the present application, the posture change of the first virtual object may be implemented as follows:
s1201, the user selects the posture of the first virtual object based on the team image display interface.
S1202, in response to the user's selection of the posture of the first virtual object, the team image display interface transmits the corresponding posture number to the posture manager; correspondingly, the posture manager receives the posture number.
Each posture corresponds to a posture number, which may be generated as: posture number = virtual object number + 1000 + A, where A may be a random number selected according to actual conditions.
S1203, the posture manager acquires the posture according to the posture number and verifies the legality of the corresponding posture to obtain a verification result.
S1204, in response to the verification result indicating that the posture is legal, the posture manager sends a C/S (Client/Server) message to the server; correspondingly, the server receives the C/S message.
The C/S (Client/Server) architecture is the client-and-server architecture. This software system architecture can make full use of the advantages of the hardware environments at both ends, reasonably distributing tasks between the client and the server and thereby reducing the communication overhead of the system.
S1205, the server issues a synchronization instruction to the posture manager according to the received C/S message.
S1206, after receiving the synchronization instruction, the posture manager loads the posture stream and stores the posture number locally.
The unlocked postures of a virtual object are stored in the posture manager. When a certain posture of the virtual object needs to be generated, the posture stream of that posture, i.e., its data sequence, can be converted into a Json string by the posture manager, written to the disk of the local device, and persisted as a file.
S1207, the posture manager synchronizes the posture number to all terminals in the target team, so that the updated three-dimensional avatar of the first virtual object is displayed on the terminals corresponding to all the virtual objects in the target team.
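The flow S1201 to S1207 can be sketched end to end as follows; the message shapes, the legality check, and the Json file layout are assumptions of the sketch, since the embodiment only fixes the roles (posture manager, server) and their ordering:

```python
# Sketch of the posture synchronization flow S1201-S1207.
import json
import pathlib
import random

def make_posture_number(object_number: int) -> int:
    """Posture number = virtual object number + 1000 + A (A: random)."""
    return object_number + 1000 + random.randint(0, 99)

class PostureManager:
    def __init__(self, unlocked_streams: dict, team_terminals: list):
        self.unlocked_streams = unlocked_streams  # posture number -> stream
        self.team_terminals = team_terminals

    def on_posture_selected(self, posture_number: int, server) -> None:
        # S1203/S1204: verify the posture is legal (unlocked here), then
        # send a C/S message to the server.
        if posture_number in self.unlocked_streams:
            server.handle_cs_message(self, {"posture": posture_number})

    def on_sync_instruction(self, posture_number: int) -> None:
        # S1206: convert the posture stream to a Json string and persist
        # it to a file on the local device.
        stream = self.unlocked_streams[posture_number]
        pathlib.Path(f"posture_{posture_number}.json").write_text(json.dumps(stream))
        # S1207: synchronize the posture number to every terminal in the team.
        for terminal in self.team_terminals:
            terminal.show_updated_avatar(posture_number)

class Server:
    def handle_cs_message(self, manager: PostureManager, msg: dict) -> None:
        # S1205: issue a synchronization instruction to the posture manager.
        manager.on_sync_instruction(msg["posture"])
```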
In one possible implementation manner, an appearance selection control corresponding to the three-dimensional avatar of the first virtual object is displayed in the team image display interface;
in response to an operation of the appearance selection control, determining a new appearance of the first virtual object;
and updating the three-dimensional virtual image of the first virtual object in the team image display interface according to the new appearance of the first virtual object.
As shown in fig. 9, the appearance selection control can be the control 920 shown in fig. 9.
The appearance of a virtual object may be represented as a skin of the virtual object, i.e., its exterior, and each virtual object may correspond to at least two skins.
The appearance selection control can be represented as two-dimensional thumbnails of the skins corresponding to the virtual object; the user can preview each skin through a sliding operation on the control and select a skin through a touch operation on it.
In a possible implementation manner, when the user obtains a posture of a certain virtual object, the same-name postures of all skins of that virtual object may be unlocked, that is, the virtual object may present multiple appearances in the same posture; alternatively, obtaining a posture of a certain virtual object may unlock a specific skin, that is, one skin of the virtual object corresponds to one posture.
In a possible implementation manner, the updated three-dimensional avatar of the first virtual object is synchronized to terminals corresponding to other virtual objects in the target team for displaying.
That is, the three-dimensional avatar of the first virtual object with the updated appearance is synchronized to the terminals corresponding to the other virtual objects in the target team for display.
In one possible implementation manner, the team image display interface is switched to the virtual object selection interface in response to the operation of the user on the virtual object switching control in the team image display interface.
That is, as shown in fig. 9, in response to the user's operation of the virtual object toggle control 930 in the team character presentation interface, the team character presentation interface shown in fig. 9 is toggled to the virtual object selection interface shown in fig. 7.
In a possible implementation manner, if there is an undetermined virtual object in the team image display interface, a preset mark is displayed at the station point corresponding to the undetermined virtual object. The preset mark may be a two-dimensional or a three-dimensional avatar; for example, it may be a preset picture and/or text, or a preset basic model. Taking the preset mark as a two-dimensional avatar as an example, as shown in fig. 9, a "battle flag" picture is displayed in region 940 of fig. 9 to indicate that the virtual object in region 940 has not yet been determined. When the undetermined virtual object in region 940 is determined, the three-dimensional avatar of that virtual object is displayed at the corresponding station point in the team image display interface, as with the three-dimensional avatar 1020 shown in fig. 10.
In one possible implementation, the team image display interface is a settlement interface after the virtual scene is finished, and the method further includes:
displaying an interactive control corresponding to the three-dimensional virtual image of the second virtual object in the team image display interface; the second virtual object is a virtual object other than the first virtual object in the target team;
and responding to the triggering operation of the interactive control, and executing the interactive operation with the second virtual object.
In a possible implementation manner, the three-dimensional avatars of the virtual objects in the target team displayed on the settlement interface may be the three-dimensional avatars as last modified before the target team entered the virtual scene. That is, assuming the three-dimensional avatars shown in fig. 10 are the avatars after the last modification before the target team entered the virtual scene, the settlement interface entered after the virtual scene ends still displays the three-dimensional avatars shown in fig. 10. Referring to fig. 13, which is a schematic diagram of a settlement interface according to an exemplary embodiment of the present application, an interactive control 1310 is displayed at the three-dimensional avatar corresponding to the second virtual object in the team image display interface; the interactive control 1310 may include a like control, a gift-giving control, a friend-adding control, and the like.
For example, in a 5V5 MOBA game, two teams participate in each match, each team has 5 players, and each player controls one virtual object. Assume player A controls a virtual object on the first terminal; that virtual object is the first virtual object. For player A, the terminals operated by the other 4 players in the team are second terminals, each second terminal corresponds to one user account, i.e., a second user account, and the virtual objects controlled by those 4 players are second virtual objects.
In one possible implementation, the interactive controls appear in a specified order.
In the settlement interface, the like control is displayed first in the interactive-control region at the three-dimensional avatar corresponding to the second virtual object; in response to the like control receiving a touch operation from the user, the like control is hidden, and a gift-giving control and/or a friend-adding control are displayed in the interactive-control region.
After the gift-giving control and/or the friend-adding control are reached through the like control, in one possible implementation, the types of the interactive controls may change according to the relationship between the first user account and the second user account.
If the first user account and the second user account have already added each other as friends, that is, they are in a friend relationship, the friend-adding control is removed from the interactive controls at the three-dimensional avatar corresponding to the virtual object and the gift-giving control is retained, as with the interactive control 1311 in fig. 13;
if no friend relationship exists between the first user account and the second user account, both the friend-adding control and the gift-giving control are displayed in the interactive controls at the three-dimensional avatar corresponding to the virtual object, as with the interactive control 1312 in fig. 13.
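This ordering and the friend-relationship rule can be sketched as a small selection function; the function and parameter names are illustrative:

```python
# Sketch: which interactive controls to show beside a second virtual
# object's avatar on the settlement interface.
def interactive_controls(like_given: bool, are_friends: bool) -> list[str]:
    if not like_given:
        return ["like"]                  # the like control is shown first
    if are_friends:
        return ["give_gift"]             # friend-adding removed (fig. 13, 1311)
    return ["add_friend", "give_gift"]   # both controls shown (fig. 13, 1312)

print(interactive_controls(like_given=False, are_friends=False))  # ['like']
print(interactive_controls(like_given=True, are_friends=True))    # ['give_gift']
```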
In one possible implementation manner, the interaction result may be displayed at the three-dimensional avatar of the second virtual object in the team image display interface; for example, like icons, gift icons, and the like may be displayed at the three-dimensional avatar of the second virtual object, the number of like icons increasing with the number of likes received and the number of gift icons increasing with the number of gifts given.
In one possible implementation manner, in response to receiving the interactive operation of the second virtual object, the three-dimensional avatar corresponding to the second virtual object displays the interactive information corresponding to the interactive operation.
That is to say, when the first virtual object receives an interactive operation from a second virtual object, the interaction information corresponding to that operation may be displayed, in the display interface of the terminal corresponding to the first virtual object, at the three-dimensional avatar of the virtual object interacting with it. Referring to fig. 14, which shows a schematic diagram of a settlement interface according to an exemplary embodiment of the present application, when the first virtual object receives an interactive operation from the second virtual object (a like operation in the figure), interaction information 1410 is displayed at the three-dimensional avatar corresponding to the second virtual object. In the figure the interaction information is text information; optionally, it may also be expression information or picture information.
In summary, according to the avatar display method for virtual objects provided by this embodiment of the application, the three-dimensional avatars of the virtual objects are displayed in the team image display interface of the terminal, so that a user can simply and directly see which virtual objects the other users in the same team have selected. This reduces the time users spend judging virtual objects during team formation, improves team-forming efficiency, and also improves the interface display effect.
While the three-dimensional avatars of the virtual objects are displayed in the team image display interface, situations such as a change of a virtual object itself, a change of a virtual object's posture, or a change of a virtual object's appearance may occur. Taking a game scene as an example, a change of the virtual object may correspond to a player switching between hero characters. Any of these changes affects the three-dimensional avatars displayed in the team image display interface, so the three-dimensional avatars in the interface need to be updated in real time. Referring to fig. 15, which shows a flowchart of an avatar display method for virtual objects according to an exemplary embodiment of the present application, the method may be executed by a terminal, by a server, or by a terminal and a server interactively, where the terminal and the server may be those in the system shown in fig. 3. As shown in fig. 15, the method may include the following steps:
step 1501, receiving a first full message, where the first full message may be an avatar message of a target team acquired by a server in real time.
The first full message may be a message corresponding to the three-dimensional avatars of all the virtual objects in the target team at the current time, and the full message may be obtained and pushed in real time by the server and received by the terminal.
Step 1502, comparing the first full message with a second full message virtual object by virtual object, where the second full message is the full message last cached by the terminal.
Step 1503, in response to the comparison object being the first virtual object, determining whether the first virtual object has locked a virtual character; if so, executing step 1504, otherwise executing step 1505.
Step 1504, in response to the first virtual object having locked a virtual character, determining whether the posture and the skin of the first virtual object have changed; if so, executing step 1506, otherwise ending the determination.
Step 1505, in response to the virtual character of the virtual object not having been locked, destroying the created three-dimensional avatar of the virtual object.
If the virtual character of a virtual object is not locked, the virtual object is one of the candidate virtual objects previewed by the user, not a virtual object the user has confirmed for use.
Step 1506, updating the three-dimensional avatar of the virtual object in the team avatar display interface in response to a change in at least one of the pose and the skin of the virtual object.
The process may include: calculating the station point of the virtual object; generating the updated three-dimensional avatar of the virtual object; and displaying the new three-dimensional avatar in the team image display interface.
Step 1507, in response to the comparison object being a second virtual object, determining whether the second virtual object has locked a virtual character; if so, executing step 1508, otherwise executing step 1505.
Step 1508, in response to the second virtual object having locked a virtual character, determining whether the posture and the skin of the second virtual object have changed; if so, executing step 1509, otherwise ending the determination.
Step 1509, adding the three-dimensional avatar information of the second virtual object to a buffer delay queue and waiting for a delay; in response to the delay ending, executing step 1506.
In a possible implementation manner, the updated first full message corresponding to the target team may overwrite the second full message for use in the next comparison.
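The comparison loop of steps 1501 to 1509 can be sketched as follows; the layout of the full message and the UI and delay-queue interfaces are assumptions of the sketch:

```python
# Sketch of steps 1501-1509: diff the latest full message against the
# cached one, virtual object by virtual object.
def apply_full_message(first_msg: dict, second_msg: dict, ui, delay_queue) -> dict:
    """first_msg: full message pushed by the server (step 1501);
    second_msg: the full message last cached by the terminal."""
    for obj_id, state in first_msg.items():
        cached = second_msg.get(obj_id, {})
        if not state.get("locked"):
            ui.destroy_avatar(obj_id)               # step 1505
            continue
        if state.get("posture") == cached.get("posture") \
                and state.get("skin") == cached.get("skin"):
            continue                                # nothing changed
        if state.get("is_first_object"):
            ui.update_avatar(obj_id, state)         # step 1506, immediate
        else:
            delay_queue.append((obj_id, state))     # step 1509, delayed update
    return first_msg  # overwrite the cache for the next comparison
```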
In summary, according to the method provided by this embodiment of the application, the three-dimensional avatars of the virtual objects are displayed in the team image display interface of the terminal and updated in real time, so that a user can simply and directly see which virtual objects the other users in the same team have selected, which reduces the time users spend judging virtual objects during team formation, improves team-forming efficiency, and also improves the interface display effect.
Referring to fig. 16, a flowchart of an avatar rendering method for a virtual object according to an exemplary embodiment of the present application is shown, where the avatar rendering method for a virtual object may be performed by a terminal, which may be the terminal in the system shown in fig. 3, and the method may include:
step 1610, displaying a team image display interface, wherein the team image display interface comprises three-dimensional virtual images of all virtual objects in a target team;
and step 1620, in response to a change in the posture information corresponding to a first virtual object, updating the three-dimensional avatar of the first virtual object in the team image display interface according to the posture information, where the posture information is used to indicate the respective postures of the virtual objects in the target team, and the first virtual object is any virtual object in the target team.
In summary, according to the method provided by this embodiment of the application, the three-dimensional avatars of the virtual objects are displayed in the team image display interface of the terminal and updated in real time, so that a user can simply and directly see which virtual objects the other users in the same team have selected, which reduces the time users spend judging virtual objects during team formation, improves team-forming efficiency, and also improves the interface display effect.
Referring to fig. 17, a block diagram of a virtual image displaying apparatus of a virtual object according to an exemplary embodiment of the present application is shown, where the virtual image displaying apparatus of the virtual object may be implemented as a part of a terminal or a server in software or hardware or a combination of software and hardware, so as to perform all or part of the steps of the method shown in any one of fig. 5, 6, 12 or 15. The avatar display apparatus of the virtual object may include:
a posture information obtaining module 1710, configured to respond to a team image display instruction, and obtain posture information, where the posture information is used to indicate respective postures of each virtual object in a target team;
a generating module 1720 for generating a three-dimensional avatar of each virtual object based on the pose information;
the display module 1730 is configured to display a team image display interface in the first terminal corresponding to the first virtual object, where the team image display interface includes three-dimensional virtual images of the virtual objects; the first virtual object is any virtual object in the target team.
In a possible implementation manner, the display module 1730 is configured to display a team image display interface including a target scene picture in the first terminal, where the target scene picture is a picture obtained by observing a target scene at a specified viewing angle, and the target scene is a three-dimensional scene provided with a three-dimensional avatar of each virtual object.
In one possible implementation, the apparatus further includes:
the arrangement sequence acquisition module is used for acquiring the arrangement sequence of each virtual object in the target team;
and the position determining module is used for determining the positions of the three-dimensional virtual images of the virtual objects in the target scene respectively based on the arrangement sequence of the virtual objects in the target team.
In one possible implementation, the position determining module includes:
the setting submodule is used for setting the three-dimensional avatar of the first virtual object to be located at the middle position in the target scene;
and the position determining sub-module is used for determining the position of the three-dimensional virtual image of the second virtual object in the target scene according to the relation between the arrangement sequence of the first virtual object in the target team and the arrangement sequence of the second virtual object in the target team, wherein the second virtual object is other virtual objects except the first virtual object in the target team.
In one possible implementation, in response to the team avatar display interface being a team avatar display interface prior to the start of the virtual scene, the apparatus further comprises:
displaying, in the team image display interface, a posture selection control corresponding to the three-dimensional avatar of the first virtual object;
a pose determination module to determine a new pose of the first virtual object in response to an operation of the pose selection control;
and the posture updating module is used for updating the three-dimensional virtual image of the first virtual object in the team image display interface according to the new posture of the first virtual object.
In one possible implementation, the apparatus further includes:
and the first display module is used for synchronizing the three-dimensional virtual image of the first virtual object with the updated posture to the terminals corresponding to other virtual objects in the target team for display.
In one possible implementation, in response to the team avatar display interface being a team avatar display interface prior to the start of the virtual scene, the apparatus further comprises:
displaying an appearance selection control corresponding to the three-dimensional virtual image of the first virtual object in a team image display interface;
an appearance determination module to determine a new appearance of the first virtual object in response to an operation of the appearance selection control;
and the appearance updating module is used for updating the three-dimensional virtual image of the first virtual object in the team image display interface according to the new appearance of the first virtual object.
In one possible implementation, the apparatus further includes:
and the second display module is used for synchronizing the three-dimensional virtual image of the updated first virtual object to terminals corresponding to other virtual objects in the target team for display.
In one possible implementation, in response to the team avatar presentation interface being a settlement interface after the virtual scene ends, the apparatus further comprises:
displaying an interactive control corresponding to the three-dimensional virtual image of the second virtual object in the team image display interface; the second virtual object is a virtual object other than the first virtual object in the target team;
and the execution module is used for responding to the triggering operation of the interactive control and executing the interactive operation with the second virtual object.
In one possible implementation, the apparatus further includes:
and the information display module is used for responding to the received interactive operation of the second virtual object and displaying the interactive information corresponding to the interactive operation corresponding to the three-dimensional virtual image of the second virtual object.
In one possible implementation, the apparatus further includes:
the gesture obtaining module is used for responding to the received continuous touch operation of the three-dimensional virtual image based on the first virtual object and obtaining at least two gestures of the first virtual object;
and the continuous updating module is used for sequentially updating the three-dimensional virtual images of the first virtual object corresponding to the at least two postures in the team image display interface according to the appointed posture sequence.
To sum up, the avatar display apparatus for virtual objects provided by this embodiment of the application displays the three-dimensional avatars of the virtual objects in the team image display interface of the terminal, so that a user can simply and directly see which virtual objects the other users in the same team have selected, which reduces the time users spend judging virtual objects during team formation, improves team-forming efficiency, and also improves the interface display effect.
Referring to fig. 18, a block diagram of an avatar display apparatus for a virtual object according to an exemplary embodiment of the present application is shown, which can be implemented as a part of a terminal in software or hardware or a combination of software and hardware to perform all or part of the steps of the method shown in fig. 16. The avatar display apparatus of the virtual object may include:
the display module 1810 is used for displaying a team image display interface, wherein the team image display interface comprises three-dimensional virtual images of all virtual objects in a target team;
the updating module 1820 is configured to, in response to a change in the posture information corresponding to a first virtual object, update the three-dimensional avatar of the first virtual object in the team image display interface according to the posture information, where the posture information is used to indicate the respective postures of the virtual objects in the target team, and the first virtual object is any virtual object in the target team.
To sum up, the apparatus provided by this embodiment of the application displays the three-dimensional avatars of the virtual objects in the team image display interface of the terminal and updates them in real time, so that a user can simply and directly see which virtual objects the other users in the same team have selected, which reduces the time users spend judging virtual objects during team formation, improves team-forming efficiency, and also improves the interface display effect.
FIG. 19 is a block diagram illustrating the architecture of a computer device 1900 according to an example embodiment. The computer device 1900 may be a user terminal, such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. Computer device 1900 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and the like.
Generally, computer device 1900 includes: a processor 1901 and a memory 1902.
The processor 1901 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so forth. The processor 1901 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1901 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1901 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed by the display screen. In some embodiments, the processor 1901 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 1902 may include one or more computer-readable storage media, which may be non-transitory. The memory 1902 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 1902 is used to store at least one instruction for execution by the processor 1901 to implement the avatar display method of the virtual object provided by the method embodiments herein.
In some embodiments, computer device 1900 may also optionally include: a peripheral device interface 1903 and at least one peripheral device. The processor 1901, memory 1902, and peripheral interface 1903 may be connected by bus or signal lines. Various peripheral devices may be connected to peripheral interface 1903 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 1904, a touch display 1905, a camera 1906, an audio circuit 1907, and a power supply 1909.
The peripheral interface 1903 may be used to connect at least one peripheral associated with an I/O (Input/Output) to the processor 1901 and the memory 1902. In some embodiments, the processor 1901, memory 1902, and peripherals interface 1903 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1901, the memory 1902, and the peripheral interface 1903 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The Radio Frequency circuit 1904 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1904 communicates with a communication network and other communication devices via electromagnetic signals. The rf circuit 1904 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1904 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1904 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1904 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1905 is used to display a UI (user interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1905 is a touch display screen, the display screen 1905 also has the ability to capture touch signals on or above the surface of the display screen 1905. The touch signal may be input to the processor 1901 as a control signal for processing. At this point, the display 1905 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, display 1905 may be one, providing the front panel of computer device 1900; in other embodiments, display 1905 may be at least two, each disposed on a different surface of computer device 1900 or in a folded design; in still other embodiments, display 1905 may be a flexible display disposed on a curved surface or on a folding surface of computer device 1900. Even more, the display 1905 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The Display 1905 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 1906 is used to capture images or video. Optionally, camera assembly 1906 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of a terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, the camera head assembly 1906 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1907 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals into the processor 1901 for processing or inputting the electric signals into the radio frequency circuit 1904 to achieve voice communication. The microphones may be multiple and placed at different locations on the computer device 1900 for stereo sound capture or noise reduction purposes. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1901 or the radio frequency circuitry 1904 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 1907 may also include a headphone jack.
Power supply 1909 is used to supply power to the various components in computer device 1900. The power source 1909 can be alternating current, direct current, disposable batteries, or rechargeable batteries. When power supply 1909 includes a rechargeable battery, the rechargeable battery can be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery can also be used to support fast charge technology.
In some embodiments, computer device 1900 also includes one or more sensors 1910. The one or more sensors 1910 include, but are not limited to: an acceleration sensor 1911, a gyro sensor 1912, a pressure sensor 1913, an optical sensor 1915, and a proximity sensor 1916.
The acceleration sensor 1911 can detect the magnitude of acceleration in three coordinate axes of the coordinate system established with the computer apparatus 1900. For example, the acceleration sensor 1911 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1901 may control the touch screen 1905 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1911. The acceleration sensor 1911 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1912 may detect a body direction and a rotation angle of the computer device 1900, and the gyro sensor 1912 may acquire a 3D motion of the user on the computer device 1900 in cooperation with the acceleration sensor 1911. From the data collected by the gyro sensor 1912, the processor 1901 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization while shooting, game control, and inertial navigation.
Pressure sensors 1913 may be disposed on a side bezel of computer device 1900 and/or on a lower layer of touch display 1905. When the pressure sensor 1913 is disposed on the side frame of the computer device 1900, a holding signal of the user on the computer device 1900 can be detected, and the processor 1901 can perform left/right-hand recognition or quick operations based on the holding signal acquired by the pressure sensor 1913. When the pressure sensor 1913 is disposed at the lower layer of the touch display 1905, the processor 1901 controls the operability controls on the UI interface according to the user's pressure operation on the touch display 1905. The operability controls comprise at least one of a button control, a scroll bar control, an icon control, and a menu control.
The optical sensor 1915 is used to collect the ambient light intensity. In one embodiment, the processor 1901 may control the display brightness of the touch screen 1905 based on the ambient light intensity collected by the optical sensor 1915. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1905 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 1905 is turned down. In another embodiment, the processor 1901 may also dynamically adjust the shooting parameters of the camera head 1906 according to the intensity of the ambient light collected by the optical sensor 1915.
Proximity sensor 1916, also known as a distance sensor, is typically disposed on the front panel of computer device 1900. Proximity sensor 1916 is used to capture the distance between the user and the front of computer device 1900. In one embodiment, when the proximity sensor 1916 detects that the distance between the user and the front of the computer device 1900 gradually decreases, the touch display 1905 is controlled by the processor 1901 to switch from the screen-on state to the screen-off state; when the proximity sensor 1916 detects that the distance gradually increases, the touch display 1905 is controlled by the processor 1901 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the architecture illustrated in FIG. 19 does not constitute a limitation of computer device 1900, and may include more or fewer components than those illustrated, or may combine certain components, or may employ a different arrangement of components.
Fig. 20 is a block diagram illustrating the structure of a computer device 2000, according to an example embodiment. The computer device may be implemented as a server in the above-described aspects of the present disclosure. The computer device 2000 includes a Central Processing Unit (CPU) 2001, a system Memory 2004 including a Random Access Memory (RAM) 2002 and a Read-Only Memory (ROM) 2003, and a system bus 2005 connecting the system Memory 2004 and the central processing unit 2001. The computer device 2000 also includes a basic input/output system (I/O system) 2006 to facilitate transfer of information between devices within the computer, and a mass storage device 2007 for storing an operating system 2013, application programs 2014, and other program modules 2015.
The basic input/output system 2006 includes a display 2008 for displaying information and an input device 2009 such as a mouse, keyboard, etc. for a user to input information. Wherein the display 2008 and the input device 2009 are coupled to the central processing unit 2001 through an input-output controller 2010 coupled to the system bus 2005. The basic input/output system 2006 may also include an input/output controller 2010 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, the input-output controller 2010 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 2007 is connected to the central processing unit 2001 through a mass storage controller (not shown) connected to the system bus 2005. The mass storage device 2007 and its associated computer-readable media provide non-volatile storage for the computer device 2000. That is, the mass storage device 2007 may include a computer-readable medium (not shown) such as a hard disk or a Compact disk-Only Memory (CD-ROM) drive.
Without loss of generality, the computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, erasable Programmable Read-Only Memory (EPROM), electrically Erasable Programmable Read-Only Memory (EEPROM), flash Memory or other solid state Memory technology, CD-ROM, digital Versatile Disks (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that the computer storage media is not limited to the foregoing. The system memory 2004 and mass storage device 2007 described above may be collectively referred to as memory.
According to various embodiments of the present disclosure, the computer device 2000 may also operate by being connected, through a network such as the Internet, to a remote computer on the network. That is, the computer device 2000 may be connected to the network 2012 through a network interface unit 2011 coupled to the system bus 2005, or the network interface unit 2011 may be used to connect to other types of networks or remote computer systems (not shown).
The memory further includes one or more programs, the one or more programs are stored in the memory, and the central processing unit 2001 implements all or part of the steps of the method shown in the embodiment of fig. 5, 6, 12 or 15 by executing the one or more programs.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in embodiments of the disclosure may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The embodiment of the present disclosure further provides a computer-readable storage medium for storing computer software instructions for the computer device, which includes a program designed to execute the avatar display method for the virtual object. For example, the computer readable storage medium may be a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a non-transitory computer readable storage medium including instructions, such as a memory including at least one instruction, at least one program, set of codes, or set of instructions, executable by a processor to perform all or part of the steps of the method shown in any of the embodiments of fig. 5, 6, 12 or 15 described above is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations that follow the general principles of the application, including such departures from the present disclosure as come within known or customary practice in the art. It is intended that the specification and examples be considered exemplary only, with the true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (13)

1. A method for displaying an avatar of a virtual object, the method comprising:
in response to a team avatar display instruction, acquiring pose information, wherein the pose information indicates a respective pose of each virtual object in a target team, each virtual object corresponds to at least two poses, and the at least two poses comprise a default pose, indicating a model display animation of the initial virtual object, and other poses different from the default pose;
generating a three-dimensional avatar of each virtual object based on the pose information;
acquiring an arrangement order of the virtual objects in the target team;
determining, based on the arrangement order of the virtual objects in the target team, respective positions of the three-dimensional avatars of the virtual objects in a target scene, the target scene being a three-dimensional scene in which the three-dimensional avatars of the virtual objects are placed;
displaying a team avatar display interface on a first terminal corresponding to a first virtual object, wherein the target scene is displayed in the team avatar display interface, the team avatar display interface comprises the three-dimensional avatars of the virtual objects, and the team avatar display interface comprises a team avatar display interface before a virtual scene begins or a settlement interface after the virtual scene ends, the first virtual object being any virtual object in the target team;
updating the three-dimensional avatar of the first virtual object in the team avatar display interface in response to a change in the pose of the first virtual object; and
synchronizing the three-dimensional avatar of the first virtual object with the updated pose to terminals corresponding to the other virtual objects in the target team for display.
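To make the claimed flow concrete, the following is a minimal, purely illustrative Python sketch of the steps of claim 1. Every name in it (VirtualObject, make_avatar, layout_positions, broadcast, and so on) is an assumption invented for illustration, not an API or implementation from the patent.

```python
# Illustrative sketch only; all names are hypothetical, not the patent's implementation.
from dataclasses import dataclass

@dataclass
class VirtualObject:
    object_id: str
    pose: str = "default"  # a default pose plus at least one other pose, per claim 1

def make_avatar(object_id: str, pose: str) -> dict:
    # Stand-in for generating a posed three-dimensional avatar model.
    return {"object_id": object_id, "pose": pose}

def layout_positions(order: list) -> dict:
    # Stand-in for placing avatars in the target scene by arrangement order.
    return {oid: (i * 2.0, 0.0, 0.0) for i, oid in enumerate(order)}

def on_team_display_instruction(team: list) -> dict:
    pose_info = {obj.object_id: obj.pose for obj in team}                 # acquire pose information
    avatars = {oid: make_avatar(oid, p) for oid, p in pose_info.items()}  # generate 3D avatars
    order = [obj.object_id for obj in team]                               # acquire arrangement order
    positions = layout_positions(order)                                   # determine scene positions
    return {"avatars": avatars, "positions": positions}                   # data behind the interface

def on_pose_changed(interface: dict, first_id: str, new_pose: str, broadcast) -> None:
    interface["avatars"][first_id] = make_avatar(first_id, new_pose)      # update locally
    broadcast({"object_id": first_id, "pose": new_pose})                  # sync to teammates' terminals
```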
2. The method according to claim 1, wherein displaying the team avatar display interface on the first terminal corresponding to the first virtual object comprises:
displaying, on the first terminal, the team avatar display interface comprising a target scene picture, wherein the target scene picture is a picture of the target scene observed from a specified viewing angle.
3. The method according to claim 2, wherein determining, based on the arrangement order of the virtual objects in the target team, the respective positions of the three-dimensional avatars of the virtual objects in the target scene comprises:
setting the three-dimensional avatar of the first virtual object at a middle position in the target scene; and
determining a position of the three-dimensional avatar of a second virtual object in the target scene according to the relationship between the arrangement order of the first virtual object and the arrangement order of the second virtual object in the target team, the second virtual object being a virtual object in the target team other than the first virtual object.
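As a rough illustration of this layout rule, the sketch below centers the first virtual object and offsets each teammate by its order relative to it. The function name and the spacing constant are assumptions for illustration, not taken from the patent.

```python
# Hypothetical layout for claim 3: the first object's avatar takes the middle
# position; each teammate is offset by its order relative to the first object.
def layout_relative(order: list, first_id: str, spacing: float = 2.0) -> dict:
    center = order.index(first_id)
    return {oid: ((i - center) * spacing, 0.0, 0.0) for i, oid in enumerate(order)}

# Example: layout_relative(["a", "b", "c"], "b") puts "b" at x=0,
# with "a" at x=-2.0 and "c" at x=+2.0 on either side.
```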
4. The method of claim 1, wherein, when the team avatar display interface is a team avatar display interface before the start of the virtual scene, updating the three-dimensional avatar of the first virtual object in the team avatar display interface in response to the change in the pose of the first virtual object comprises:
displaying, in the team avatar display interface, a pose selection control corresponding to the three-dimensional avatar of the first virtual object;
determining a new pose of the first virtual object in response to an operation on the pose selection control; and
updating the three-dimensional avatar of the first virtual object in the team avatar display interface according to the new pose of the first virtual object.
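One way to picture the pose selection control of claim 4 is the short handler sketched below; the callback name and the interface dictionary are invented for illustration and carry no weight as the patent's implementation.

```python
# Hypothetical handler: operating the pose selection control determines the
# new pose and rebuilds the first object's avatar in the display interface.
def on_pose_control_operated(interface: dict, object_id: str, selected_pose: str) -> None:
    interface[object_id] = {"object_id": object_id, "pose": selected_pose}
```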
5. The method of claim 1, wherein, when the team avatar display interface is a team avatar display interface before the start of the virtual scene, the method further comprises:
displaying, in the team avatar display interface, an appearance selection control corresponding to the three-dimensional avatar of the first virtual object;
determining a new appearance of the first virtual object in response to an operation on the appearance selection control; and
updating the three-dimensional avatar of the first virtual object in the team avatar display interface according to the new appearance of the first virtual object.
6. The method of claim 5, further comprising:
synchronizing the three-dimensional avatar of the first virtual object with the updated appearance to terminals corresponding to the other virtual objects in the target team for display.
7. The method of claim 1, wherein, when the team avatar display interface is a settlement interface after the end of the virtual scene, the method further comprises:
displaying, in the team avatar display interface, an interaction control corresponding to the three-dimensional avatar of a second virtual object, the second virtual object being a virtual object in the target team other than the first virtual object; and
performing, in response to a trigger operation on the interaction control, an interactive operation with the second virtual object.
8. The method of claim 7, further comprising:
displaying, in response to receiving an interactive operation from the second virtual object, interaction information corresponding to the interactive operation on the three-dimensional avatar corresponding to the second virtual object.
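The interaction flow of claims 7 and 8 might be wired roughly as below, with a hypothetical send callback on the triggering side and a display hook on the receiving side; the message fields are illustrative assumptions.

```python
# Hypothetical interaction flow for claims 7-8; all names and fields are illustrative.
def on_interaction_control_triggered(send, second_id: str) -> None:
    # Triggering side: perform an interactive operation with the second object.
    send({"type": "interaction", "target": second_id, "kind": "wave"})

def on_interaction_received(interface: dict, event: dict) -> None:
    # Receiving side: show the interaction info on the second object's avatar.
    interface.setdefault(event["target"], {})["interaction_info"] = event["kind"]
```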
9. The method of claim 1, further comprising:
acquiring at least two poses of the first virtual object in response to receiving a continuous touch operation on the three-dimensional avatar of the first virtual object; and
sequentially updating, in the team avatar display interface and in a specified pose order, the three-dimensional avatars of the first virtual object corresponding to the at least two poses.
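The continuous-touch behavior of claim 9 could look roughly like this sketch, which steps through the poses in a specified order; the names and the fixed interval are assumptions, not the patent's method.

```python
import time

# Hypothetical sketch for claim 9: a sustained touch on the avatar cycles it
# through its at-least-two poses in the specified order.
def on_continuous_touch(interface: dict, object_id: str, poses: list, interval: float = 0.5) -> None:
    for pose in poses:                                             # specified pose order
        interface[object_id] = {"object_id": object_id, "pose": pose}
        time.sleep(interval)                                       # pacing is illustrative
```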
10. A method for displaying an avatar of a virtual object, the method comprising:
acquiring an arrangement order of each virtual object in a target team;
determining, based on the arrangement order of the virtual objects in the target team, respective positions of the three-dimensional avatars of the virtual objects in a target scene, the target scene being a three-dimensional scene in which the three-dimensional avatars of the virtual objects are placed;
displaying a team avatar display interface, wherein the team avatar display interface comprises the three-dimensional avatars of the virtual objects, the target scene is displayed in the team avatar display interface, and the team avatar display interface comprises a team avatar display interface before a virtual scene begins or a settlement interface after the virtual scene ends; and
in response to a change in the pose information corresponding to a first virtual object, updating the three-dimensional avatar of the first virtual object in the team avatar display interface according to the pose information, wherein the first virtual object is any virtual object in the target team, the pose information indicates the pose of each virtual object in the target team, and the three-dimensional avatar of the first virtual object with the updated pose is synchronized to terminals corresponding to the other virtual objects in the target team for display.
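The update-and-synchronize step of claim 10 can be pictured as below; send_to_teammates is a hypothetical callback standing in for whatever transport the terminals actually use.

```python
# Hypothetical sketch for claim 10: refresh the first object's avatar on a pose
# change, then push the new pose to the other team members' terminals.
def on_pose_info_changed(interface: dict, first_id: str, new_pose: str, send_to_teammates) -> None:
    interface[first_id] = {"object_id": first_id, "pose": new_pose}   # local update
    send_to_teammates({"object_id": first_id, "pose": new_pose})      # synchronize
```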
11. An avatar display apparatus for a virtual object, the apparatus comprising:
a pose information acquisition module, configured to acquire pose information in response to a team avatar display instruction, wherein the pose information indicates a respective pose of each virtual object in a target team, each virtual object corresponds to at least two poses, and the at least two poses comprise a default pose, indicating a model display animation of the initial virtual object, and other poses different from the default pose;
a generating module, configured to generate a three-dimensional avatar of each virtual object based on the pose information;
an arrangement order acquisition module, configured to acquire an arrangement order of the virtual objects in the target team;
a position determining module, configured to determine, based on the arrangement order of the virtual objects in the target team, respective positions of the three-dimensional avatars of the virtual objects in a target scene, the target scene being a three-dimensional scene in which the three-dimensional avatars of the virtual objects are placed;
a display module, configured to display a team avatar display interface on a first terminal corresponding to a first virtual object, wherein the target scene is displayed in the team avatar display interface, the team avatar display interface comprises the three-dimensional avatars of the virtual objects, and the team avatar display interface comprises a team avatar display interface before a virtual scene begins or a settlement interface after the virtual scene ends, the first virtual object being any virtual object in the target team;
the display module being further configured to update the three-dimensional avatar of the first virtual object in the team avatar display interface in response to a change in the pose of the first virtual object; and
a second display module, configured to synchronize the three-dimensional avatar of the first virtual object with the updated pose to terminals corresponding to the other virtual objects in the target team for display.
12. A computer device, comprising a processor and a memory, wherein at least one program is stored in the memory, and the at least one program is loaded and executed by the processor to implement the avatar display method for a virtual object according to any one of claims 1 to 10.
13. A computer-readable storage medium, wherein at least one program is stored in the storage medium, and the at least one program is loaded and executed by a processor to implement the avatar display method for a virtual object according to any one of claims 1 to 10.
CN202010241840.2A 2020-03-31 2020-03-31 Virtual image display method, device, equipment and storage medium of virtual object Active CN111462307B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010241840.2A CN111462307B (en) 2020-03-31 2020-03-31 Virtual image display method, device, equipment and storage medium of virtual object

Publications (2)

Publication Number Publication Date
CN111462307A (en) 2020-07-28
CN111462307B (en) 2022-11-29

Family

ID=71682982

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010241840.2A Active CN111462307B (en) 2020-03-31 2020-03-31 Virtual image display method, device, equipment and storage medium of virtual object

Country Status (1)

Country Link
CN (1) CN111462307B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112156464B (en) * 2020-10-22 2023-03-14 腾讯科技(深圳)有限公司 Two-dimensional image display method, device and equipment of virtual object and storage medium
CN112634416B (en) * 2020-12-23 2023-07-28 北京达佳互联信息技术有限公司 Method and device for generating virtual image model, electronic equipment and storage medium
CN112755529B (en) * 2021-01-22 2023-10-24 北京字跳网络技术有限公司 Scene data updating method and device and computer storage medium
CN112891939B (en) * 2021-03-12 2022-11-25 腾讯科技(深圳)有限公司 Contact information display method and device, computer equipment and storage medium
CN113096224A (en) * 2021-04-01 2021-07-09 游艺星际(北京)科技有限公司 Three-dimensional virtual image generation method and device
CN113181645A (en) * 2021-05-28 2021-07-30 腾讯科技(成都)有限公司 Special effect display method and device, electronic equipment and storage medium
CN113332717A (en) * 2021-06-11 2021-09-03 网易(杭州)网络有限公司 Game equipment display method and device, electronic equipment and storage medium
CN113599826A (en) * 2021-08-16 2021-11-05 北京字跳网络技术有限公司 Virtual character display method and device, computer equipment and storage medium
CN113641443B (en) * 2021-08-31 2023-10-24 腾讯科技(深圳)有限公司 Interface element display method, device, equipment and readable storage medium
CN114082189A (en) * 2021-11-18 2022-02-25 腾讯科技(深圳)有限公司 Virtual role control method, device, equipment, storage medium and product
CN114245099B (en) * 2021-12-13 2023-02-21 北京百度网讯科技有限公司 Video generation method and device, electronic equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4589938B2 (en) * 2007-03-30 2010-12-01 株式会社コナミデジタルエンタテインメント GAME PROGRAM, GAME DEVICE, AND GAME CONTROL METHOD
JP5486730B1 (en) * 2013-12-11 2014-05-07 株式会社 ディー・エヌ・エー Game management server device
CN104575150B (en) * 2015-01-15 2016-11-09 广东电网有限责任公司教育培训评价中心 The method and apparatus of many people online cooperation and system for electric analog training
CN108888959B (en) * 2018-06-27 2020-06-30 腾讯科技(深圳)有限公司 Team forming method and device in virtual scene, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111462307B (en) Virtual image display method, device, equipment and storage medium of virtual object
JP7395600B2 (en) Presentation information transmission method, presentation information display method, presentation information transmission device, presentation information display device, terminal, and computer program for multiplayer online battle program
US11413528B2 (en) Method, apparatus, and device for displaying skin of virtual character
CN111589133B (en) Virtual object control method, device, equipment and storage medium
CN112717421B (en) Team matching method, team matching device, team matching terminal, team matching server and storage medium
CN111672099B (en) Information display method, device, equipment and storage medium in virtual scene
CN111760278B (en) Skill control display method, device, equipment and medium
CN112569600B (en) Path information sending method in virtual scene, computer device and storage medium
CN112691370B (en) Method, device, equipment and storage medium for displaying voting result in virtual game
CN113289331B (en) Display method and device of virtual prop, electronic equipment and storage medium
CN110465083B (en) Map area control method, apparatus, device and medium in virtual environment
CN112870705B (en) Method, device, equipment and medium for displaying game settlement interface
CN111672104B (en) Virtual scene display method, device, terminal and storage medium
CN111672131B (en) Virtual article acquisition method, device, terminal and storage medium
CN113117331B (en) Message sending method, device, terminal and medium in multi-person online battle program
CN111596838B (en) Service processing method and device, computer equipment and computer readable storage medium
CN111744185B (en) Virtual object control method, device, computer equipment and storage medium
CN111589141B (en) Virtual environment picture display method, device, equipment and medium
CN112569607B (en) Display method, device, equipment and medium for pre-purchased prop
WO2023134272A1 (en) Field-of-view picture display method and apparatus, and device
WO2023016089A1 (en) Method and apparatus for displaying prompt information, device, and storage medium
CN111672102A (en) Virtual object control method, device, equipment and storage medium in virtual scene
CN113559495B (en) Method, device, equipment and storage medium for releasing skill of virtual object
KR20230042517A (en) Contact information display method, apparatus and electronic device, computer-readable storage medium, and computer program product
CN113101656B (en) Virtual object control method, device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (ref country code: HK; ref legal event code: DE; ref document number: 40025828)
GR01 Patent grant