
CN110102053B - Virtual image display method, device, terminal and storage medium

Info

Publication number
CN110102053B
Authority
CN
China
Prior art keywords
target
virtual
user
interaction
interactive
Legal status
Active
Application number
CN201910394200.2A
Other languages
Chinese (zh)
Other versions
CN110102053A (en
Inventor
魏嘉城
宋晓亮
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201910394200.2A
Publication of CN110102053A (application)
Application granted
Publication of CN110102053B (grant)

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/30 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A63F 13/35 Details of game servers
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 Controlling the output signals based on the game progress
    • A63F 13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/55 Controlling game characters or game objects based on the game progress
    • A63F 13/56 Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding

Abstract

The invention discloses an avatar display method, apparatus, terminal, and storage medium, belonging to the field of network technology. By displaying, in at least one first target area, the interaction process of at least one avatar based on a first target action, the avatar can perform the first target action in the virtual interactive scene and simulate the interaction process between the virtual resources. This represents the interaction process vividly, increases its interest, and makes its display more intuitive.

Description

Virtual image display method, device, terminal and storage medium
Technical Field
The present invention relates to the field of network technologies, and in particular, to a method, an apparatus, a terminal, and a storage medium for displaying an avatar.
Background
With the development of network technology, a game client can be installed on a terminal, allowing a user to participate in interactive games based on the game client; for example, the interactive game may be a board-type interactive game, a card-type interactive game, or the like.
At present, in interactive games such as board and card games, a user usually views the interaction state values corresponding to the user (such as health points, experience, and level) directly in the settings options, so the display of the interaction process of the interactive game is not intuitive enough, which reduces the interest of the game interaction.
Disclosure of Invention
The embodiments of the present invention provide an avatar display method, apparatus, terminal, and storage medium, which can solve the problems that game interaction lacks interest and that the display of the interaction process is not intuitive. The technical solution is as follows:
in one aspect, there is provided an avatar display method, the method including:
displaying at least one virtual resource controlled by at least one user in a virtual interactive scene;
acquiring at least one target position of the at least one virtual resource in the virtual interaction scene and a first target action of at least one virtual image, wherein the at least one virtual image corresponds to the at least one user, and the first target action is used for simulating the interaction between the at least one virtual resource;
displaying the at least one virtual resource at the at least one target position, and displaying, in at least one first target area, an interaction process of the at least one avatar based on the first target action.
In one aspect, there is provided an avatar display apparatus, the apparatus including:
the display module is used for displaying at least one virtual resource controlled by at least one user in a virtual interactive scene;
the acquisition module is used for acquiring at least one target position of the at least one virtual resource in the virtual interactive scene and a first target action of at least one virtual image, wherein the at least one virtual image corresponds to the at least one user, and the first target action is used for simulating the interaction between the at least one virtual resource;
the display module is further configured to display the at least one virtual resource on the at least one target location, and display an interaction process of the at least one avatar based on the first target action on the at least one first target area.
In one possible implementation, the obtaining module is configured to:
receiving frame synchronization data at intervals of a target time length, wherein the frame synchronization data comprise at least one piece of interaction data, and each piece of interaction data corresponds to the interaction operation of a user;
determining the at least one target position and a first target action of the at least one avatar based on the frame synchronization data.
In one possible embodiment, the apparatus further comprises:
the sending module is used for sending interactive data corresponding to the interactive operation when the interactive operation of a target user is detected, wherein the target user is a user corresponding to the terminal.
In one possible implementation, the sending module is configured to:
when a control operation by the target user on a virtual resource controlled by the target user is detected, sending control data corresponding to the control operation; or
when an experience improvement operation by the target user on the avatar corresponding to the target user is detected, sending experience improvement data corresponding to the experience improvement operation; or
when a level promotion operation by the target user on the avatar corresponding to the target user is detected, sending level promotion data corresponding to the level promotion operation.
In one possible implementation, the display module is configured to:
when the experience of any avatar is improved, displaying a first target animation based on the avatar, the first target animation representing the experience improvement of the avatar; or
when the level of any avatar is raised, displaying a second target animation based on the avatar, the second target animation representing the level increase of the avatar.
In one possible embodiment, the first target animation is the avatar increasing in size; or the first target animation is the avatar swallowing virtual food;
the second target animation is the avatar rotating a target number of turns; or the second target animation is a color change of the avatar; or the second target animation is a blurring of the avatar's outline; or the second target animation is displaying a virtual object in a second target area corresponding to the avatar.
In one possible embodiment, the apparatus further comprises:
and the determining module is used for determining a second target action of the at least one virtual image according to the interaction result between the at least one virtual resource, and the second target action is used for simulating the interaction result between the at least one virtual resource.
In one possible embodiment, the determining module is configured to:
when the interaction result of any virtual resource is a win, determining the second target action of the avatar corresponding to the user to whom the virtual resource belongs as at least one of flying, jumping, or dancing;
when the interaction result of any virtual resource is a loss, determining the second target action of the avatar corresponding to the user to whom the virtual resource belongs as at least one of crying, falling down, or dying.
In one possible embodiment, the display module is further configured to:
display the interaction attribute value of the at least one user in a third target area corresponding to the at least one avatar.
In one possible embodiment, the display module is further configured to: when a click operation by a target user on the avatar corresponding to the target user is detected, display at least one session message in a fourth target area corresponding to the avatar, the target user being the user corresponding to the terminal;
and the sending module is further configured to: when a click operation by the target user on any one of the at least one session message is detected, send the session message.
In one aspect, a terminal is provided and includes one or more processors and one or more memories, where at least one instruction is stored in the one or more memories and loaded by the one or more processors and executed to implement the operations performed by the avatar display method according to any of the above possible implementations.
In one aspect, a storage medium is provided, in which at least one instruction is stored, and the at least one instruction is loaded and executed by a processor to implement the operations performed by the avatar display method according to any one of the above possible implementations.
The technical solutions provided by the embodiments of the present invention have at least the following beneficial effects:
At least one virtual resource controlled by at least one user is displayed in a virtual interactive scene, that is, a virtual multimedia environment that presents the interaction process and the virtual resources operable by the users. At least one target position of the at least one virtual resource in the virtual interactive scene is acquired, together with a first target action of at least one avatar corresponding to the at least one user, the first target action being used to simulate the interaction between the at least one virtual resource. In this way, not only are the position changes of the virtual resources during the interaction acquired, but also a first target action with which the at least one avatar simulates that interaction. The at least one virtual resource is then displayed at the at least one target position, and the interaction process of the at least one avatar based on the first target action is displayed in at least one first target area. The avatar can thus perform the first target action in the virtual interactive scene and simulate the interaction process between the virtual resources, which represents the interaction process vividly, increases its interest, and makes its display more intuitive.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an implementation environment of an avatar display method according to an embodiment of the present invention;
FIG. 2 is a flowchart of an avatar display method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a frame synchronization technique according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a frame synchronization technique according to an embodiment of the present invention;
FIG. 5 is a diagram of a first target animation according to an embodiment of the invention;
FIG. 6 is a diagram of a second target animation according to an embodiment of the invention;
FIG. 7 is a schematic interface diagram of a virtual interactive scene according to an embodiment of the present invention;
FIG. 8 is a schematic interface diagram of a virtual interactive scene according to an embodiment of the present invention;
FIG. 9 is a schematic interface diagram of a virtual interactive scene according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a frame synchronization technique according to an embodiment of the present invention;
FIG. 11 is a schematic interface diagram of a virtual interactive scene according to an embodiment of the present invention;
FIG. 12 is a diagram illustrating a second target action according to an embodiment of the present invention;
FIG. 13 is a diagram illustrating a second target action according to an embodiment of the present invention;
FIG. 14 is a schematic structural diagram of an avatar display apparatus according to an embodiment of the present invention;
FIG. 15 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an implementation environment of an avatar display method according to an embodiment of the present invention. Referring to fig. 1, at least one terminal 101 and a server 102 are included in the implementation environment, as described in more detail below:
the at least one terminal 101 is configured to display a virtual interactive scene, where the at least one terminal 101 may have an android operating system or an IOS operating system, and an application client may be installed on the at least one terminal 101, so that the at least one terminal 101 can display the virtual interactive scene based on the application client, for example, the application client may be a game client.
The server 102 is configured to provide a virtual interaction scenario for the at least one terminal 101, and display a virtual resource controlled by a user corresponding to each terminal and a virtual image corresponding to each user in the virtual interaction scenario, where the server 102 may be any computer device, for example, the server 102 may be a PC (personal computer) server configured with a 64-bit Linux operating system and using an 8-core 16G chip.
In the above process, the virtual interactive scene may be a virtual interactive environment generated by the server, the virtual interactive scene is used to provide a multimedia virtual environment, after the terminal loads the virtual interactive scene from the server, the user may control an operable virtual resource, an avatar, and the like in the virtual interactive scene through an operation interface of the terminal, optionally, the virtual resource is used to perform interaction between users, for example, the virtual resource may be a chess, a card, and the like, optionally, the avatar is used to simulate interaction between virtual resources through actions, for example, the avatar may be a virtual pet, a virtual cartoon character, and the like.
In some embodiments, when a plurality of users participate in the game interaction, multiple parties may be in a competitive relationship with each other, and certainly, one or more users may form a team, and each team has a competitive relationship with each other until a winning or losing result is found out.
Based on the above example, in the first interactive turn of the chess game, a plurality of users may be located at different initial positions of the chessboard, and in each subsequent interactive turn, each user may obtain virtual resources (i.e., chess pieces) from the chess piece pool, and further form different chess piece combinations by the obtained chess pieces, so that the different chess piece combinations may have different interactive attributes, for example, the chess piece combination a is an attack chess group, and the chess piece combination B is a defense chess group.
After a user has matched a pawn or a pawn combination, a match may be made with users other than the user (or server-simulated AI players, server-generated non-player characters, etc.) based on the pawn or the pawn combination, which may have different interaction properties in some embodiments when located at different positions on the board, and each may also enhance existing pawns or acquire new pawns, etc., as permitted by game rules during the match until a win or loss result is found.
However, in the current virtual interactive scene, the user usually directly checks the interactive attribute values (such as blood volume, experience, level, etc.) corresponding to the user in the setting options, so that the interactive process of the interactive game is not intuitive enough when displayed, and the interest of the game interaction is reduced.
Fig. 2 is a flowchart of an avatar display method according to an embodiment of the present invention. Referring to fig. 2, the method is applied to a terminal, and the embodiment includes:
201. The terminal displays at least one virtual resource controlled by at least one user in a virtual interactive scene.
The virtual interactive scene can be used for providing a virtual multimedia interactive environment, and the terminal can load the virtual interactive scene from the server and display the virtual interactive scene.
Optionally, the at least one user may be a participant in an interactive process, e.g. the at least one user may be a player of an interactive game, which may be a chess or card game, etc.
Optionally, the at least one virtual resource is used for interaction between the at least one user, the at least one virtual resource may be an interactive resource of an interactive process, and one user may correspond to one or more virtual resources, for example, the virtual resource may be a piece of a board game, a card of a card game, or the like.
In step 201, the terminal may obtain the virtual interactive scene and the at least one virtual resource from the server, render the virtual interactive scene based on a GPU (graphics processing unit), and render the at least one virtual resource within the virtual interactive scene, so that the at least one virtual resource is displayed in the virtual interactive scene.
In some embodiments, an application client may be installed on the terminal, and the SDK (software development kit) of the application client includes the configuration data of the virtual interactive scene. After a target user logs in to the application client and a start operation of the interactive game is detected, the terminal may render the configuration data of the virtual interactive scene based on the GPU. After the target user selects an initial virtual resource (or the server randomly issues one), the terminal renders the virtual resource of the target user in the virtual interactive scene based on the GPU. During the subsequent interaction, the terminal may update the displayed content of the virtual interactive scene based on the frame synchronization data issued by the server.
In some embodiments, after displaying the at least one virtual resource, the terminal may further provide the user with an avatar selection interface that includes at least one selectable avatar. When a selection operation on any avatar by the user is detected, the terminal transmits avatar selection data corresponding to the selection operation to the server and displays the avatar selected by the user in the virtual interactive scene based on frame synchronization data that includes the avatar selection data.
Optionally, the interactive game may include at least one interactive round, where an interactive round denotes one round of control operations by one participant of the interactive game (that is, any one of the at least one user, or an AI player simulated by the server). In each interactive round, the terminal may display, in the virtual scene, all virtual resources of the target user corresponding to the terminal as well as the virtual resources "played out" by users other than the target user, where "played out" means setting a virtual resource to a state visible to any user other than the user to whom it belongs. It should be noted that the embodiments of the present invention are described by taking any interactive round of the interaction process as an example, and this should not be construed as limiting the embodiments to the first interactive round of the interaction process.
202. When the terminal detects an interactive operation by a target user, it sends interactive data corresponding to the interactive operation, where the target user is the user corresponding to the terminal.
The terminal may be a terminal corresponding to any user of the at least one user, and thus the target user is also any user of the at least one user.
In this process, the terminal sends the interactive data to the server when it detects an interactive operation; if no interactive operation is detected, no interactive data needs to be sent. Meanwhile, the terminal may receive frame synchronization data from the server at intervals of a target duration, so that the messages of all terminals participating in the interaction process can be kept synchronized based on a frame synchronization technique.
In some embodiments, step 202 may take any one, or at least two, of the following manners. The embodiment of the present invention is described with only one terminal sending the interaction data as an example; each of the manners described in detail below may be performed by the terminal corresponding to any user participating in the interaction process.
Manner 1: when the terminal detects a control operation by the target user on a virtual resource controlled by the target user, it sends control data corresponding to the control operation.
In some embodiments, the number of virtual resources controlled by the target user may be any number greater than or equal to 0; for example, the target user may control 2 virtual resources. Optionally, when the target user does not currently control any virtual resource, the control operation performed by the target user may be acquiring a new virtual resource: the control operation may be clicking the piece pool and then clicking the new virtual resource; of course, the control operation may also be clicking a "random draw" button for pieces, or clicking the purchase button of any virtual resource to buy it with virtual currency, and so on.
In some embodiments, the control operation may be placing a virtual resource controlled by the target user at any position in the virtual interactive scene. For example, the control operation may be long-pressing a virtual resource and dragging it to any position in the virtual interactive scene; for instance, the target user may long-press a chess piece and drag it to the lower left corner of the chessboard in the virtual interactive scene.
In the above situation, the target user designates, in a given interactive round, the placement position of a virtual resource for the next interactive round. After the terminal displays the virtual resource at the designated position in the next interactive round, the virtual resource can be triggered to automatically launch an attack on, or a defense against, any virtual resource other than itself, which simplifies the operations required during the interaction.
Of course, in some embodiments, the attack or defense of a virtual resource may not be triggered automatically by its placement position but may instead be controlled manually by each user. That is, the control operation may also be the target user controlling a virtual resource to initiate at least one of an attack operation or a defense operation against another virtual resource. The other virtual resource may be a virtual resource of the target user, a virtual resource of any user other than the target user, a virtual resource of an AI player simulated by the server, or a non-player character (NPC) provided by the server for the terminal during the interaction; the embodiments of the present invention do not specifically limit the participants of the control operation. For example, the control operation may be first clicking the attack button of a virtual resource and then clicking the recipient of the attack operation. It should be noted that the control operation may be any control operation performed by the target user, and the embodiments of the present invention do not specifically limit the type of the control operation.
In the above process, when the terminal detects a control operation by the target user on a virtual resource controlled by the target user, it generates control data, which is the logical data of the control operation. For example, when the control operation acquires a new virtual resource, the control data may include the identifier of the virtual resource to be acquired; when the control operation places a virtual resource at a position, the control data may include the screen coordinates of the resource to be placed relative to the terminal screen and its virtual resource identifier; and when the control operation launches an attack on another virtual resource, the control data may include the initiator's and the recipient's virtual resource identifiers. After generating the control data corresponding to the control operation, the terminal may transmit the control data to the server.
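To make these payloads concrete, the following is a minimal sketch in TypeScript, assuming a JSON-over-WebSocket transport; all type and field names are illustrative, since the patent only specifies which identifiers each kind of control data must carry:

```typescript
// Hypothetical control-data payloads; the field names are illustrative.
type ControlData =
  | { kind: "acquire"; resourceId: string }                      // acquire a new virtual resource
  | { kind: "place"; resourceId: string; x: number; y: number }  // screen coordinates of the placement
  | { kind: "attack"; initiatorId: string; recipientId: string } // attack initiator and recipient
;

interface InteractionMessage {
  userId: string;       // identifies the target user on this terminal
  control: ControlData;
}

// Send the control data to the server as soon as the control operation is
// detected (WebSocket is an assumed transport; the patent names none).
function sendControlData(socket: WebSocket, msg: InteractionMessage): void {
  socket.send(JSON.stringify(msg));
}
```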
In this process, the control data is sent to the server only when the terminal detects a control operation, which reduces the amount of control data sent each time, improves the synchronization efficiency among terminals, and ensures consistent display across terminals during the interaction.
Manner 2: when the terminal detects an experience improvement operation by the target user on the avatar corresponding to the target user, it sends experience improvement data corresponding to the experience improvement operation.
The experience improvement operation may be that the target user clicks an experience improvement button of the avatar corresponding to the target user.
In the above process, when an experience improvement operation by the target user on the corresponding avatar is detected, experience improvement data may be generated; the experience improvement data may include a user identifier and an experience improvement value. The terminal sends the experience improvement data to the server, making it convenient for the user to improve experience based on the avatar and improving the operating convenience of the interactive game. When the experience value accumulates to a target threshold, the following Manner 3 may be performed.
Manner 3: when the terminal detects a level promotion operation by the target user on the avatar corresponding to the target user, it sends level promotion data corresponding to the level promotion operation.
The level-up operation may be that the target user clicks a level-up button of the avatar corresponding to the target user.
In this process, when a level promotion operation by the target user on the corresponding avatar is detected, level promotion data may be generated; the level promotion data may include a user identifier and a level promotion value. The terminal sends the level promotion data to the server, making it convenient for the user to raise the level based on the avatar and improving the operating convenience of the interactive game.
In some embodiments, the user need not perform the level promotion operation manually but only the experience improvement operation: when the experience improvement value included in a piece of experience improvement data is greater than or equal to the target threshold, the experience improvement data may further include the level promotion data, which further improves the operating convenience of the interactive game.
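A sketch of Manners 2 and 3 under these assumptions (the names and the threshold value are illustrative; the patent leaves the actual threshold unspecified):

```typescript
// Experience improvement data carries a user identifier and an experience
// improvement value; once the accumulated experience reaches the target
// threshold, the same message can also carry the level promotion data,
// so no manual level promotion operation is needed.
interface ExperienceImprovementData {
  userId: string;
  experienceGain: number;
  levelPromotion?: { levelGain: number }; // present once the threshold is reached
}

const TARGET_THRESHOLD = 100; // illustrative value

function buildExperienceData(
  userId: string,
  currentExperience: number,
  gain: number
): ExperienceImprovementData {
  const data: ExperienceImprovementData = { userId, experienceGain: gain };
  if (currentExperience + gain >= TARGET_THRESHOLD) {
    data.levelPromotion = { levelGain: 1 }; // automatic level promotion
  }
  return data;
}
```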
203. At intervals of the target duration, the server obtains the at least one piece of interactive data sent, within the target duration before the current time, by the terminals corresponding to the at least one user, generates frame synchronization data from the at least one piece of interactive data, and sends the frame synchronization data to the terminals corresponding to the at least one user.
The target duration may be any value greater than 0. Optionally, the target duration may be the frame length of one network frame, and different interactive games may have different network frame lengths.
Wherein the interactive data may comprise at least one of control data, experience improvement data or level improvement data, and optionally the interactive data may further comprise a session message.
In this process, the server collects, at intervals of the target duration, the at least one piece of interactive data sent by each terminal, aggregates it into an interactive data set, and broadcasts the set to each terminal. Compared with a state synchronization technique, an interactive game based on the frame synchronization technique can simplify the processing logic of the server and reduce the amount of frame synchronization data the server sends each time.
In some embodiments, if a user performs no interactive operation within the target duration, the terminal corresponding to that user sends no interactive data. If no terminal sends interactive data within the target duration, the server may generate null frame data having the same format as the frame synchronization data and send it to the terminals corresponding to the at least one user, where the null frame data indicates that the virtual interactive scene displayed by each terminal does not change.
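A minimal sketch of this server-side loop, assuming a per-frame buffer and a broadcast helper (none of which the patent names); an empty interaction list plays the role of the null frame data:

```typescript
// Step 203 as a loop: every target duration, aggregate whatever interactive
// data arrived into one frame and broadcast it. All names are illustrative.
interface FrameSyncData {
  frameNo: number;
  interactions: unknown[]; // control / experience / level data, session messages
}

const TARGET_DURATION_MS = 66; // frame length of one network frame; game-specific

let frameNo = 0;
let pending: unknown[] = [];

function onInteractiveData(data: unknown): void {
  pending.push(data); // buffer everything received within the current frame clock
}

function broadcastToAllTerminals(frame: FrameSyncData): void {
  // Stub: a real server would write the frame to every terminal's connection.
  console.log("broadcast", JSON.stringify(frame));
}

setInterval(() => {
  // interactions: [] has the same format as a normal frame and acts as the
  // null frame data, telling terminals the scene is unchanged.
  broadcastToAllTerminals({ frameNo: frameNo++, interactions: pending });
  pending = [];
}, TARGET_DURATION_MS);
```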
Fig. 3 is a schematic diagram of a frame synchronization technique according to an embodiment of the present invention. Referring to fig. 3, the client on each terminal reports the control data corresponding to the user's control operations to the server. On a fixed frame clock (the interval between adjacent frame ticks is the target duration), the server obtains the control data sent by each terminal, generates the frame synchronization data, and sends it to each terminal, so that each terminal executes the frame synchronization data, that is, displays the virtual interaction scene based on it, where N is any integer greater than or equal to 1.
Fig. 4 is a schematic diagram of a frame synchronization technique provided in an embodiment of the present invention. Referring to fig. 4, the client on each terminal reports session attribute information (in the embodiments of the present invention, for example, experience improvement data, level promotion data, and other information related to the interaction attribute values) to the server. On a fixed frame clock (the interval between adjacent frame ticks is the target duration), the server obtains the session attribute information sent by each terminal, generates frame synchronization data based on it, and sends the frame synchronization data to each terminal. Each terminal receives the frame synchronization data, updates the interaction attribute value of each avatar in the virtual interaction scene based on it, and displays the latest interaction attribute value of each avatar, where N is any integer greater than or equal to 1.
204. The terminal receives the frame synchronization data at intervals of the target duration, where the frame synchronization data includes at least one piece of interactive data and each piece of interactive data corresponds to one user's interactive operation.
In step 204, the terminal receives the frame synchronization data sent by the server. The process may be: when the terminal receives any data, it inspects the object field of the data, and when the object field includes a frame synchronization code, the data is determined to be frame synchronization data.
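As a sketch of that check (the field name and the frame synchronization code value are assumptions; the patent only says an object field carries such a code):

```typescript
// Treat incoming data as frame synchronization data only when its object
// field carries the frame synchronization code.
const FRAME_SYNC_CODE = 0x4653; // hypothetical marker value

interface IncomingData {
  objectField: number;
  payload: unknown;
}

function isFrameSyncData(data: IncomingData): boolean {
  return data.objectField === FRAME_SYNC_CODE;
}
```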
205. The terminal determines at least one target position of the at least one virtual resource in the virtual interactive scene and a first target action of at least one virtual character based on the frame synchronization data.
Wherein the at least one target location corresponds to the at least one virtual resource, and the at least one target location can be expressed as screen coordinates of the at least one virtual resource relative to a terminal screen.
Wherein the at least one avatar corresponds to the at least one user, each avatar being for simulating interaction between each virtual resource by action, e.g., the avatar may be a virtual pet, a virtual cartoon character, etc., wherein the first target action may be any simulated interaction action, e.g., the first target action may be an attacking action, an attacked action, a defending action, etc.
In the foregoing process, when the frame synchronization data includes control data, the terminal may parse the frame synchronization data to obtain the at least one piece of control data corresponding to each terminal within the target duration, input the at least one piece of control data into an objective function, determine from the objective function the target positions of the virtual resources whose positions need to be modified, and determine the first target action of the at least one avatar from the objective function. Different rendering engines on the terminal may have different objective functions; for example, the objective function may be an UpdateByNet() function.
In the above process, the terminal obtains at least one target position of the at least one virtual resource in the virtual interactive scene and a first target action of at least one avatar, where the at least one avatar corresponds to the at least one user and the first target action is used to simulate the interaction between the at least one virtual resource, so as to display each virtual resource and each avatar. Furthermore, since the objective function yields only the target positions of the virtual resources whose positions need modifying, re-rendering all virtual resources can be avoided in the subsequent rendering process, saving the terminal's processing resources and improving its processing efficiency.
In some embodiments, after the target user controls a virtual resource to be placed at a target position in the virtual interaction scene, the virtual resource is automatically triggered to launch an attack on the one or more virtual resources closest to the target position. In this case, when the terminal parses the control data placing the virtual resource at the target position, the first target action of the avatar corresponding to the target user may be determined to be an attacking action, and the first target actions of the one or more avatars corresponding to the one or more users to whom the attacked virtual resources belong may be determined to be an attacked action.
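The patent names UpdateByNet() only as an example of a rendering engine's objective function; the sketch below assumes everything else (the types, and squared screen distance as the nearest-target rule) to show how placement control data could yield both the changed target positions and the first target actions:

```typescript
// Sketch of step 205 for placement control data: record target positions only
// for resources whose position changed, and derive the first target actions
// (placing auto-triggers an attack on the nearest enemy resource).
type FirstTargetAction = "attack" | "attacked" | "defend";

interface ResourceState {
  resourceId: string;
  ownerId: string;
  x: number; // screen coordinates relative to the terminal screen
  y: number;
}

function updateByNet(
  placements: ResourceState[], // parsed from the frame's control data
  board: ResourceState[]       // all currently displayed virtual resources
): { dirty: Map<string, { x: number; y: number }>; actions: Map<string, FirstTargetAction> } {
  const dirty = new Map<string, { x: number; y: number }>();
  const actions = new Map<string, FirstTargetAction>();
  for (const p of placements) {
    dirty.set(p.resourceId, { x: p.x, y: p.y }); // only this resource is re-rendered
    let nearest: ResourceState | undefined;
    let best = Infinity;
    for (const r of board) {
      if (r.ownerId === p.ownerId) continue; // skip the placer's own resources
      const d = (r.x - p.x) ** 2 + (r.y - p.y) ** 2;
      if (d < best) { best = d; nearest = r; }
    }
    actions.set(p.ownerId, "attack");                       // placer's avatar attacks
    if (nearest) actions.set(nearest.ownerId, "attacked");  // victim's avatar is attacked
  }
  return { dirty, actions };
}
```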
In some embodiments, when the frame synchronization data includes experience improvement data, the terminal may parse the frame synchronization data to obtain the at least one piece of experience improvement data corresponding to each terminal within the target duration, obtain from it the experience improvement value corresponding to the at least one user, update the experience attribute value of each user in the virtual interaction scene, display the updated experience attribute value, and display the first target animation based on the at least one avatar.
The first target animation represents the experience improvement of the avatar. Optionally, the first target animation is the avatar increasing in size; of course, the first target animation may also be the avatar swallowing virtual food. Fig. 5 is a schematic diagram of a first target animation provided by an embodiment of the present invention; referring to fig. 5, the first target animation is the avatar increasing in size.
In this process, when the experience of any avatar is improved, the terminal displays the first target animation based on the avatar according to the frame synchronization technique, so that the session attribute information during the interaction can be expressed more vividly in the form of animation, further increasing the interest of the interaction process.
In some embodiments, when the frame synchronization data includes level promotion data, the terminal may parse the frame synchronization data to obtain the at least one piece of level promotion data corresponding to each terminal within the target duration, obtain from it the level promotion value corresponding to the at least one user, update the level attribute value of each user in the virtual interaction scene, display the updated level attribute value, and display the second target animation based on the at least one avatar.
The second target animation represents the level increase of the avatar. Optionally, the second target animation may be the avatar rotating a target number of turns, a color change of the avatar, a blurring of the avatar's outline, or the display of a virtual object in a second target area corresponding to the avatar; for example, the virtual object may be a flower, a ribbon, a horn, and the like. The second target area may be any area corresponding to the avatar; for example, the second target area may be above the avatar.
Fig. 6 is a schematic diagram of a second target animation according to an embodiment of the present invention. Referring to fig. 6, the second target animation is the avatar rotating a target number of turns; since the avatar is a dragon pet, the effect of the dragon pet circling its body can be simulated, which increases the visual appeal of the second target animation.
In this process, when the level of any avatar is raised, the terminal displays the second target animation based on the avatar according to the frame synchronization technique, so that the session attribute information during the interaction can be expressed more vividly in the form of animation, further increasing the interest of the interaction process.
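A sketch tying the parsed attribute data to the animations described above (the animation identifiers and the playAnimation helper are illustrative; the patent lists several variants for each animation and does not fix one):

```typescript
// Map experience and level changes in a frame to the first and second
// target animations. The variant lists come from the description above.
type FirstTargetAnimation = "grow-larger" | "swallow-food";
type SecondTargetAnimation = "spin-turns" | "change-color" | "blur-outline" | "show-virtual-object";

function onAttributeUpdate(avatarId: string, expGain: number, levelGain: number): void {
  if (expGain > 0) {
    const anim: FirstTargetAnimation = "grow-larger"; // e.g. fig. 5
    playAnimation(avatarId, anim);
  }
  if (levelGain > 0) {
    const anim: SecondTargetAnimation = "spin-turns"; // e.g. fig. 6
    playAnimation(avatarId, anim);
  }
}

function playAnimation(avatarId: string, name: string): void {
  // Stub: a real client would hand this to its rendering engine.
  console.log(`avatar ${avatarId} plays animation "${name}"`);
}
```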
206. The terminal displays the at least one virtual resource on the at least one target position, and displays the interaction process of the at least one avatar based on the first target action on at least one first target area.
In the above process, the first target area may be any area in the virtual interactive scene, and different avatars may be located in different first target areas, for example, when only two users participate in the interaction, the avatar of the target user may be located in a lower left corner of the virtual interactive scene, and the avatar of another user may be located in an upper right corner of the virtual interactive scene.
In the above process, the terminal may call the objective function through a rendering engine, render the virtual resources whose positions are to be modified based on the objective function, and display them at the corresponding target positions. Optionally, a movement animation of a virtual resource moving from its original position to the target position may be displayed in the virtual interaction scene; for example, when the virtual resource is a virtual soldier, the movement animation may be an animation of the virtual soldier walking along the movement trajectory.
In this process, the terminal can call the objective function through the rendering engine and render, based on the objective function, the at least one avatar performing the first target action, so that while the user controls the virtual resources to interact, the interaction process between the virtual resources is simulated by the avatars' first target actions in the virtual interaction scene.
Fig. 7 is an interface schematic diagram of a virtual interaction scene according to an embodiment of the present invention. Referring to fig. 7, in step 206 the terminal not only displays the at least one virtual resource in the virtual interaction scene but also displays the at least one avatar, and the interaction between the at least one virtual resource is simulated by the at least one avatar, which increases the interest of the interaction process and makes the interaction process of the interactive game more intuitive. Moreover, the user's own intent can be reflected through the avatar: the user's control operations on the virtual resources are mapped to the avatar's first target action, which vividly displays those control operations in the virtual interaction scene, and the avatar gives the user a figure in which to invest emotion during the interaction.
As another example, fig. 8 is an interface schematic diagram of a virtual interaction scene provided by an embodiment of the present invention. Referring to fig. 8, the avatar of the target user may be the dragon-shaped pet in the lower left corner, and the avatar of another user interacting with the target user is the dragon-shaped pet in the upper right corner. When the other user directly launches an attack on the target user through virtual resources in an interaction round, the "attacked action" is determined as the first target action of the target user's avatar, so the dragon-shaped pet in the lower left corner performs the attacked action.
In some embodiments, after displaying the at least one avatar in the virtual interactive scene, the terminal may further display the interaction attribute value of the at least one user in a third target area corresponding to the at least one avatar. Compared with viewing the interaction attribute values by switching from the virtual interactive scene to an interface of setting options, displaying them based on the avatar makes the interaction attribute values more intuitive to view and improves the operability of the interactive game.
The third target area may be any area associated with the avatar; for example, the third target area may be the lower right corner of the avatar.
Optionally, the interaction attribute value represents the state of each of the user's attributes during the interaction; for example, the interaction attribute value may be health points, an experience value, a level value, and the like. In some embodiments, the interaction attribute value may also be the probability of the user acquiring different rare virtual resources.
For example, fig. 9 is an interface schematic diagram of a virtual interaction scene provided by an embodiment of the present invention. Referring to fig. 9, the avatar of the target user may be the dragon-shaped pet in the lower left corner; the experience at the target user's current level and the current health are each displayed as a bar in the lower right corner of the dragon-shaped pet, and the probabilities of the user acquiring different rare virtual resources are displayed as a floating layer to the right of the dragon-shaped pet.
In some embodiments, when the terminal detects a click operation by the target user on the at least one avatar, at least one session message may be displayed in a fourth target area corresponding to the at least one avatar. Displaying session messages based on the avatar facilitates the sending of session messages, improves the efficiency of communication during the interaction, and improves the operability of the interactive game.
The fourth target area may be any area associated with the avatar, and it may or may not overlap the third target area.
Alternatively, the conversation message may be used to represent the mood of the user during the interaction process, and the conversation message may be displayed in the form of an abbreviation or emoticon.
In the above process, fig. 10 is a schematic diagram of a frame synchronization technique according to an embodiment of the present invention. Referring to fig. 10, when the terminal detects a click operation by the target user on any session message, the session message may be sent to the server through the client on the terminal. Based on the frame synchronization technique, the server may generate frame synchronization data from the control data, experience improvement data, level promotion data, or session messages sent by each terminal within the target duration and send it to each terminal. After receiving the frame synchronization data, a terminal parses it; when the frame synchronization data includes a session message, the terminal may display the session message in the virtual interactive scene based on the avatar of the message's sender. Optionally, the terminal may also display the emoticon corresponding to the session message, and of course the terminal may also play the audio corresponding to the session message, so that synchronized display of session messages on all terminals can be achieved based on the frame synchronization technique.
For example, fig. 11 is an interface schematic diagram of a virtual interactive scene provided by an embodiment of the present invention. Referring to fig. 11, the avatar of the target user may be the dragon-shaped pet in the lower left corner, and session messages such as "greeting", "mad", "cheer", "jealousy", "happy", and "surprise" are displayed around the dragon-shaped pet in the form of dialog boxes. When the terminal detects the user's click operation on the "cheer" dialog box, the terminal may play an interactive sound effect such as "Not bad!" and may also display a thumbs-up emoticon above the dragon-shaped pet, which increases the interest of the interaction process.
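A sketch of this session-message flow (the candidate texts, field names, and transport are assumptions; the patent only requires that clicking the avatar shows the candidates and that clicking a candidate sends it into the frame synchronization stream):

```typescript
// Session-message flow around figs. 10-11.
const CANDIDATE_MESSAGES = ["greeting", "mad", "cheer", "jealousy", "happy", "surprise"];

function onAvatarClicked(avatarId: string): void {
  // Show the candidates in the fourth target area corresponding to the avatar.
  showMessageMenu(avatarId, CANDIDATE_MESSAGES);
}

function onMessageClicked(socket: WebSocket, userId: string, text: string): void {
  // The session message travels in the same frame synchronization stream as
  // the other interactive data and is broadcast to every terminal.
  socket.send(JSON.stringify({ userId, sessionMessage: text }));
}

function showMessageMenu(avatarId: string, messages: string[]): void {
  console.log(`show near avatar ${avatarId}:`, messages.join(", ")); // stub
}
```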
207. The terminal determines a second target action of the at least one avatar according to the interaction result between the at least one virtual resource, where the second target action is used to simulate the interaction result between the at least one virtual resource.
The interaction result may be a win or a loss, and the terminal may still obtain the interaction result based on the frame synchronization data; the specific process is similar to obtaining the interaction data and session messages and is not repeated here.
In some embodiments, when the interaction result of any virtual resource is a win, the terminal may determine the second target action of the avatar corresponding to the user to whom the virtual resource belongs as at least one of flying, jumping, or dancing, so as to display the avatar's interaction process based on the second target action in the virtual interaction scene; the display method is similar to that for the first target action in step 206 and is not repeated here. For example, fig. 12 is a schematic diagram of a second target action provided by an embodiment of the present invention; referring to fig. 12, the second target action may be flying.
Of course, in some embodiments, when the interaction result of any virtual resource is a loss, the terminal may determine the second target action of the avatar corresponding to the user to whom the virtual resource belongs as at least one of crying, falling down, or dying, so as to display the avatar's interaction process based on the second target action in the virtual interaction scene; the display method is similar to that for the first target action in step 206 and is not repeated here. For example, fig. 13 is a schematic diagram of a second target action provided by an embodiment of the present invention; referring to fig. 13, the second target action may be falling down.
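A small sketch of step 207's mapping (picking one action from each list is an arbitrary illustrative choice; the patent allows any one, or a combination, of the listed actions):

```typescript
// Map the interaction result carried in the frame data to a second target
// action. The two action lists come directly from the description above.
type SecondTargetAction = "fly" | "jump" | "dance" | "cry" | "fall" | "die";

function secondTargetAction(result: "win" | "lose"): SecondTargetAction {
  return result === "win" ? "fly" : "cry"; // e.g. figs. 12 and 13
}
```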
In this process, the terminal can vividly simulate the interaction result in the virtual interaction scene based on the avatar's second target action, which increases the interest of the interaction process and enriches the visual information of the virtual interaction scene.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
With the method provided by the embodiments of the present invention, at least one virtual resource controlled by at least one user is displayed in a virtual interactive scene, that is, a virtual multimedia environment that presents the interaction process and the virtual resources operated by the users. At least one target position of the at least one virtual resource in the virtual interactive scene and a first target action of at least one avatar are acquired, where the at least one avatar corresponds to the at least one user and the first target action is used to simulate the interaction between the at least one virtual resource; thus not only are the position changes of the virtual resources during the interaction acquired, but also a first target action with which the at least one avatar simulates that interaction. The at least one virtual resource is displayed at the at least one target position, and the interaction process of the at least one avatar based on the first target action is displayed in at least one first target area. The avatar can therefore perform the first target action in the virtual interactive scene and simulate the interaction process between the virtual resources, representing the interaction process vividly, increasing its interest, and making its display more intuitive.
Further, when the terminal detects an interactive operation by the target user, it sends the interactive data corresponding to the interactive operation, so that the messages of all terminals participating in the interaction process can be kept synchronized based on the frame synchronization technique.
Optionally, when the terminal detects a control operation by the target user on a virtual resource controlled by the target user, it generates control data, the logical data of the control operation; sending the control data only when a control operation is detected reduces the amount of control data sent each time, improves the synchronization efficiency among terminals, and ensures consistent display across terminals during the interaction.
Optionally, when an experience improvement operation by the target user on the corresponding avatar is detected, experience improvement data, which may include a user identifier and an experience improvement value, may be generated and sent by the terminal to the server, so that the user's experience can be improved based on the avatar and the operating convenience of the interactive game is improved.
Optionally, when a level promotion operation by the target user on the corresponding avatar is detected, level promotion data, which may include a user identifier and a level promotion value, may be generated and sent by the terminal to the server, so that the user can conveniently raise the level based on the avatar and the operating convenience of the interactive game is improved.
Further, the at least one piece of interactive data sent by each terminal is collected at intervals of the target duration, aggregated into an interactive data set, and broadcast to each terminal; compared with a state synchronization technique, an interactive game based on the frame synchronization technique can simplify the server's processing logic and reduce the amount of frame synchronization data the server sends each time.
Furthermore, the terminal determines at least one target position of the at least one virtual resource in the virtual interactive scene and a first target action of the at least one virtual image based on the frame synchronization data, so as to display each virtual resource and each virtual image.
Further, when the experience of any avatar is improved, the terminal displays the first target animation based on the avatar according to the frame synchronization technology, so that the attribute information in the interaction process can be expressed more vividly in the form of an animation, further increasing the interest of the interaction process.
Further, when the level of any avatar is raised, the terminal displays the second target animation based on the avatar according to the frame synchronization technology, which likewise expresses the attribute information in the interaction process more vividly in the form of an animation and further increases the interest of the interaction process.
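As an illustrative sketch only, the mapping from attribute promotions to the two target animations could be organized as follows; the animation variants are the ones listed later in this document, while the API names are assumed.

```kotlin
// Hypothetical mapping from attribute promotions (replayed from frame data) to the
// target animations described in this document; the animation variants come from the
// text (size growth, swallowing food, rotating, color change, ...), the names do not.
enum class FirstTargetAnimation { GROW_IN_SIZE, SWALLOW_VIRTUAL_FOOD }
enum class SecondTargetAnimation { ROTATE_TARGET_TURNS, CHANGE_COLOR, BLUR_OUTLINE, SHOW_VIRTUAL_OBJECT }

class AvatarAnimator(private val play: (String, String) -> Unit) {
    fun onExperienceImproved(userId: String) =
        play(userId, FirstTargetAnimation.GROW_IN_SIZE.name) // or SWALLOW_VIRTUAL_FOOD

    fun onLevelPromoted(userId: String) =
        play(userId, SecondTargetAnimation.ROTATE_TARGET_TURNS.name) // any of the four variants
}

fun main() {
    val animator = AvatarAnimator { user, anim -> println("avatar of $user plays $anim") }
    animator.onExperienceImproved("user-1")
    animator.onLevelPromoted("user-1")
}
```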
Furthermore, the terminal determines a second target action of the at least one avatar according to the interaction result between the at least one virtual resource, so that the interaction result can be vividly simulated in the virtual interactive scene based on the second target action of the avatar, which increases the interest of the interaction process and enriches the visual information of the virtual interactive scene.
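A one-function sketch of that determination, using the win and loss action sets given below in this document (the function name and the random selection rule are assumptions):

```kotlin
// Hypothetical mapping from an interaction result to the second target action; the
// win/loss action sets are taken from the text, the function itself is illustrative.
enum class InteractionResult { WIN, LOSE }

fun secondTargetActions(result: InteractionResult): List<String> = when (result) {
    InteractionResult.WIN -> listOf("fly", "jump", "dance")        // at least one is chosen
    InteractionResult.LOSE -> listOf("cry", "fall-down", "death")  // at least one is chosen
}

fun main() {
    println(secondTargetActions(InteractionResult.WIN).random())
}
```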
Furthermore, the terminal can also display the interaction attribute value of the at least one user on the third target area corresponding to the at least one avatar. Compared with a method in which the user switches from the virtual interactive scene to a settings interface to view the interaction attribute value, displaying the value based on the avatar allows it to be viewed more intuitively and improves the operability of the interactive game.
Further, when the terminal detects that the target user clicks the at least one avatar, at least one session message can be displayed in a fourth target area corresponding to the at least one avatar. Displaying session messages based on the avatar facilitates the operation of sending a session message, improves the efficiency of communication during the interaction process, and improves the operability of the interactive game. On this basis, when the terminal detects a click operation of the target user on any session message, the session message can be sent to the server, so that synchronous display of the session message on each terminal can be achieved based on the frame synchronization technology.
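A hedged sketch of that session-message flow, with an assumed candidate-message list and wire format (neither is specified by the patent):

```kotlin
// Hypothetical session-message flow: clicking the avatar shows candidate messages in
// the fourth target area; clicking one sends it to the server, which rebroadcasts it
// in a later frame so every terminal displays it synchronously.
class SessionMessagePanel(
    private val candidates: List<String>,
    private val sendToServer: (String) -> Unit
) {
    fun onAvatarClicked(): List<String> = candidates // messages displayed near the avatar

    fun onMessageClicked(message: String) {
        require(message in candidates)
        sendToServer("MSG|$message") // rebroadcast via frame synchronization
    }
}

fun main() {
    val panel = SessionMessagePanel(listOf("Nice!", "Help!", "Attack now")) { println("-> server: $it") }
    println("shown: ${panel.onAvatarClicked()}")
    panel.onMessageClicked("Attack now")
}
```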
Fig. 14 is a schematic structural diagram of an avatar display apparatus according to an embodiment of the present invention, and referring to fig. 14, the apparatus includes a display module 1401 and an obtaining module 1402, which are described in detail below:
a display module 1401, configured to display at least one virtual resource controlled by at least one user in a virtual interactive scene;
an obtaining module 1402, configured to obtain at least one target position of the at least one virtual resource in the virtual interactive scene, and a first target action of at least one avatar, the at least one avatar corresponding to the at least one user, the first target action being used to simulate an interaction between the at least one virtual resource;
the display module 1401 is further configured to display the at least one virtual resource on the at least one target location, and display an interactive process of the at least one avatar based on the first target action on the at least one first target area.
The device provided by the embodiment of the invention displays, in a virtual interactive scene, at least one virtual resource controlled by at least one user, the virtual interactive scene being a virtual multimedia environment that displays the interactive process and the virtual resources operable by the users. The device acquires at least one target position of the at least one virtual resource in the virtual interactive scene and a first target action of at least one avatar, wherein the at least one avatar corresponds to the at least one user and the first target action is used for simulating the interaction between the at least one virtual resource, so that not only the position change of the virtual resources during the interactive process is acquired, but also the first target action of the at least one avatar that simulates that interaction. The device displays the at least one virtual resource on the at least one target position and displays the interactive process of the at least one avatar based on the first target action on at least one first target area. The first target action can therefore be performed by the avatar in the virtual interactive scene to simulate the interactive process among the virtual resources, representing that process vividly and intuitively, which increases the interest of the interactive process and the intuitiveness with which it is displayed.
In one possible implementation, the obtaining module 1402 is configured to:
receiving frame synchronization data at intervals of a target duration, where the frame synchronization data comprises at least one piece of interactive data, and each piece of interactive data corresponds to an interactive operation of one user;
determining, based on the frame synchronization data, the at least one target position and the first target action of the at least one avatar.
In a possible embodiment, based on the apparatus composition of fig. 14, the apparatus further comprises:
a sending module, configured to send interactive data corresponding to an interactive operation when the interactive operation of a target user is detected, where the target user is the user corresponding to the terminal.
In one possible embodiment, the sending module is configured to:
when a control operation of the target user on the virtual resource controlled by the target user is detected, sending control data corresponding to the control operation; or,
when an experience improvement operation of the target user on the avatar corresponding to the target user is detected, sending experience improvement data corresponding to the experience improvement operation; or,
when a level promotion operation of the target user on the avatar corresponding to the target user is detected, sending level promotion data corresponding to the level promotion operation.
In one possible implementation, the display module 1401 is configured to:
when the experience of any avatar is improved, displaying a first target animation based on the avatar, where the first target animation is used for representing the experience improvement of the avatar; or,
when the level of any avatar is raised, displaying a second target animation based on the avatar, where the second target animation is used for representing the level promotion of the avatar.
In one possible embodiment, the first target animation is the avatar becoming larger in size; or, the first target animation is the avatar swallowing virtual food;
the second target animation is the avatar rotating a target number of turns; or, the second target animation is a color change of the avatar; or, the second target animation is a blurring of the avatar's outline; or, the second target animation is a virtual object being displayed on a second target area corresponding to the avatar.
In a possible embodiment, based on the apparatus composition of fig. 14, the apparatus further comprises:
and the determining module is used for determining a second target action of the at least one virtual image according to the interaction result between the at least one virtual resource, and the second target action is used for simulating the interaction result between the at least one virtual resource.
In one possible embodiment, the determining module is configured to:
when the interaction result of any virtual resource is a win, determining the second target action of the avatar corresponding to the user to which the virtual resource belongs as at least one of flying, jumping, or dancing;
and when the interaction result of any virtual resource is a loss, determining the second target action of the avatar corresponding to the user to which the virtual resource belongs as at least one of crying, falling down, or dying.
In a possible embodiment, the display module 1401 is further configured to display the interaction attribute value of the at least one user on a third target area corresponding to the at least one avatar.
In a possible embodiment, the display module 1401 is further configured to display, when a click operation of the target user on the avatar corresponding to the target user is detected, at least one session message in a fourth target area corresponding to the avatar, where the target user is the user corresponding to the terminal;
and the sending module is further configured to send the session message when a click operation of the target user on any session message among the at least one session message is detected.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
It should be noted that: when the avatar display apparatus provided in the above embodiment displays an avatar, the division into the functional modules described above is merely used as an example for illustration; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the terminal may be divided into different functional modules to complete all or part of the functions described above. In addition, the avatar display apparatus provided in the above embodiment belongs to the same concept as the avatar display method embodiments; its specific implementation process is detailed in the method embodiments and is not described here again.
Fig. 15 is a schematic structural diagram of a terminal according to an embodiment of the present invention. The terminal 1500 may be: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. Terminal 1500 may also be referred to as user equipment, a portable terminal, a laptop terminal, a desktop terminal, or by other names.
In general, terminal 1500 includes: a processor 1501 and memory 1502.
Processor 1501 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1501 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array). Processor 1501 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, also referred to as a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1501 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, processor 1501 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 1502 may include one or more computer-readable storage media, which may be non-transitory. The memory 1502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1502 is used to store at least one instruction for execution by processor 1501 to implement the avatar display method provided by the method embodiments of the present application.
In some embodiments, the terminal 1500 may further include: a peripheral interface 1503 and at least one peripheral. The processor 1501, memory 1502, and peripheral interface 1503 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1503 via buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1504, touch screen display 1505, camera 1506, audio circuitry 1507, positioning assembly 1508, and power supply 1509.
The peripheral interface 1503 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 1501 and the memory 1502. In some embodiments, the processor 1501, memory 1502, and peripheral interface 1503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1501, the memory 1502, and the peripheral interface 1503 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The radio frequency circuit 1504 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 1504 communicates with communication networks and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1504 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like. The radio frequency circuit 1504 can communicate with other terminals via at least one wireless communication protocol, including but not limited to: metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1504 may also include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display screen 1505 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display screen 1505 is a touch display screen, it also has the ability to capture touch signals on or over its surface; such a touch signal may be input to the processor 1501 as a control signal for processing. In this case, the display screen 1505 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1505, disposed on the front panel of terminal 1500; in other embodiments, there may be at least two display screens 1505, each disposed on a different surface of terminal 1500 or in a folded design; in still other embodiments, display screen 1505 may be a flexible display disposed on a curved or folded surface of terminal 1500. Furthermore, the display screen 1505 may even be configured in a non-rectangular irregular pattern, that is, an irregularly-shaped screen. The display screen 1505 may be manufactured as an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode) display, or the like.
The camera assembly 1506 is used to capture images or video. Optionally, the camera assembly 1506 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal, and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, or a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fused shooting functions. In some embodiments, camera assembly 1506 may also include a flash, which can be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuitry 1507 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert the sound waves into electrical signals, and input them to the processor 1501 for processing or to the radio frequency circuit 1504 for voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of the terminal 1500; the microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 1501 or the radio frequency circuit 1504 into sound waves. The speaker can be a traditional membrane speaker or a piezoelectric ceramic speaker; a piezoelectric ceramic speaker can not only convert an electrical signal into sound waves audible to humans, but also convert an electrical signal into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuitry 1507 may also include a headphone jack.
The positioning component 1508 is used to determine the current geographic position of the terminal 1500 for navigation or LBS (Location Based Service). The positioning component 1508 may be based on the United States' GPS (Global Positioning System), China's BeiDou system, Russia's GLONASS system, or the European Union's Galileo system.
Power supply 1509 is used to supply power to the various components in terminal 1500. The power supply 1509 may use alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 1509 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging, and may also support fast-charging technology.
In some embodiments, the terminal 1500 also includes one or more sensors 1510. The one or more sensors 1510 include, but are not limited to: acceleration sensor 1511, gyro sensor 1512, pressure sensor 1513, fingerprint sensor 1514, optical sensor 1515, and proximity sensor 1516.
The acceleration sensor 1511 may detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 1500. For example, the acceleration sensor 1511 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1501 may control the touch screen display 1505 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1511. The acceleration sensor 1511 may also be used for acquisition of motion data of a game or a user.
The gyroscope sensor 1512 can detect the body direction and the rotation angle of the terminal 1500, and the gyroscope sensor 1512 and the acceleration sensor 1511 cooperate to collect the 3D motion of the user on the terminal 1500. The processor 1501 may implement the following functions according to the data collected by the gyro sensor 1512: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensor 1513 may be disposed on a side bezel of terminal 1500 and/or underneath the touch display 1505. When the pressure sensor 1513 is disposed on the side bezel of the terminal 1500, a holding signal of the user on the terminal 1500 can be detected, and the processor 1501 performs left/right hand recognition or a shortcut operation according to the holding signal collected by the pressure sensor 1513. When the pressure sensor 1513 is disposed at the lower layer of the touch display 1505, the processor 1501 controls an operable control on the UI according to a pressure operation of the user on the touch display 1505. The operable control includes at least one of a button control, a scroll-bar control, an icon control, or a menu control.
The fingerprint sensor 1514 is configured to capture a fingerprint of the user, and the processor 1501 identifies the user based on the fingerprint captured by the fingerprint sensor 1514, or the fingerprint sensor 1514 identifies the user based on the captured fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 1501 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 1514 may be disposed on the front, back, or side of the terminal 1500. When a physical key or vendor Logo is provided on the terminal 1500, the fingerprint sensor 1514 may be integrated with the physical key or vendor Logo.
The optical sensor 1515 is used to collect ambient light intensity. In one embodiment, processor 1501 may control the brightness of the display on touch screen 1505 based on the intensity of ambient light collected by optical sensor 1515. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1505 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 1505 is turned down. In another embodiment, the processor 1501 may also dynamically adjust the shooting parameters of the camera assembly 1506 based on the ambient light intensity collected by the optical sensor 1515.
A proximity sensor 1516, also known as a distance sensor, is typically disposed on the front panel of the terminal 1500. The proximity sensor 1516 is used to collect the distance between the user and the front surface of the terminal 1500. In one embodiment, when the proximity sensor 1516 detects that the distance between the user and the front surface of the terminal 1500 gradually decreases, the processor 1501 controls the touch display 1505 to switch from the screen-on state to the screen-off state; when the proximity sensor 1516 detects that the distance between the user and the front surface of the terminal 1500 gradually increases, the processor 1501 controls the touch display 1505 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 15 does not constitute a limitation of terminal 1500, and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components may be employed.
In an exemplary embodiment, there is also provided a storage medium, such as a memory, including at least one instruction executable by a processor in a terminal to perform the avatar display method of the above embodiments. For example, the storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (13)

1. A method for displaying an avatar, the method comprising:
displaying at least one virtual resource controlled by at least one user in a virtual interactive scene;
receiving frame synchronization data at intervals of a target duration, wherein the frame synchronization data comprises at least one piece of interaction data, each piece of interaction data corresponds to an interaction operation of one user, the interaction data comprises control data, the control data is generated based on a control operation performed on a virtual resource, the control operation is an operation by which the corresponding user controls one virtual resource to initiate an attack operation or a defense operation against another virtual resource, and the target duration is the frame length of one network frame;
determining, based on the control data in the frame synchronization data, at least one target position of the at least one virtual resource in the virtual interactive scene and a first target action of at least one avatar, wherein the at least one avatar corresponds to the at least one user, and the first target action is used for simulating that a virtual resource controlled by the user corresponding to the avatar initiates an attack or a defense against another virtual resource;
and displaying the at least one virtual resource on the at least one target position, and displaying the interaction process of the at least one avatar based on the first target action on at least one first target area.
2. The method of claim 1, further comprising:
and when the interactive operation of a target user is detected, sending interactive data corresponding to the interactive operation, wherein the target user is a user corresponding to the terminal.
3. The method of claim 2, wherein when the interactive operation of the target user is detected, sending interactive data corresponding to the interactive operation comprises:
when a control operation of the target user on the virtual resource controlled by the target user is detected, sending control data corresponding to the control operation; or,
when an experience improvement operation of the target user on the avatar corresponding to the target user is detected, sending experience improvement data corresponding to the experience improvement operation; or,
when a level promotion operation of the target user on the avatar corresponding to the target user is detected, sending level promotion data corresponding to the level promotion operation.
4. The method of claim 1, wherein the displaying an interaction process of the at least one avatar based on the first target action on at least one first target area comprises:
when the experience of any avatar is improved, displaying a first target animation based on the avatar, wherein the first target animation is used for representing the experience improvement of the avatar; or,
when the level of any avatar is raised, displaying a second target animation based on the avatar, wherein the second target animation is used for representing the level promotion of the avatar.
5. The method of claim 4, wherein the first target animation is the avatar becoming larger in size; or, the first target animation is the avatar swallowing virtual food;
the second target animation is the avatar rotating a target number of turns; or, the second target animation is a color change of the avatar; or, the second target animation is a blurring of the avatar's outline; or, the second target animation is a virtual object being displayed on a second target area corresponding to the avatar.
6. The method of claim 1, further comprising:
and determining a second target action of the at least one virtual image according to the interaction result between the at least one virtual resource, wherein the second target action is used for simulating the interaction result between the at least one virtual resource.
7. The method of claim 6, wherein determining the second target action of the at least one avatar based on the result of the interaction between the at least one virtual resource comprises:
when the interaction result of any virtual resource is a win, determining the second target action of the avatar corresponding to the user to which the virtual resource belongs as at least one of flying, jumping, or dancing;
and when the interaction result of any virtual resource is a loss, determining the second target action of the avatar corresponding to the user to which the virtual resource belongs as at least one of crying, falling down, or dying.
8. The method of claim 1, further comprising:
and displaying the interaction attribute value of the at least one user on a third target area corresponding to the at least one virtual character.
9. The method of claim 1, further comprising:
when a click operation of a target user on the avatar corresponding to the target user is detected, displaying at least one session message in a fourth target area corresponding to the avatar, wherein the target user is a user corresponding to a terminal;
and when the click operation of the target user on any conversation message in the at least one conversation message is detected, sending the conversation message.
10. An avatar display apparatus, said apparatus comprising:
the display module is used for displaying at least one virtual resource controlled by at least one user in a virtual interactive scene;
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for receiving frame synchronization data at intervals of target time length, the frame synchronization data comprise at least one piece of interaction data, each piece of interaction data corresponds to interaction operation of one user, the interaction data comprise control data, the control data are generated based on control operation executed on virtual resources, the control operation is that the corresponding user controls a certain virtual resource to initiate attack operation or defense operation to another virtual resource, and the target time length is the frame length of one network frame; determining at least one target position of the at least one virtual resource in the virtual interactive scene and a first target action of at least one virtual image based on control data in the frame synchronization data, wherein the at least one virtual image corresponds to the at least one user, and the first target action is used for simulating a certain virtual resource controlled by the user corresponding to the virtual image to launch attack or defense to another virtual resource;
the display module is further configured to display the at least one virtual resource on the at least one target location, and display an interaction process of the at least one avatar based on the first target action on the at least one first target area.
11. The apparatus of claim 10, further comprising:
a sending module, configured to send interactive data corresponding to an interactive operation when the interactive operation of a target user is detected, wherein the target user is a user corresponding to the terminal.
12. A terminal, characterized in that the terminal comprises one or more processors and one or more memories, in which at least one instruction is stored, the at least one instruction being loaded and executed by the one or more processors to implement the operations performed by the avatar display method of any of claims 1-9.
13. A storage medium having stored therein at least one instruction, the at least one instruction being loaded and executed by a processor to perform operations performed by the avatar display method of any of claims 1-9.
CN201910394200.2A 2019-05-13 2019-05-13 Virtual image display method, device, terminal and storage medium Active CN110102053B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910394200.2A CN110102053B (en) 2019-05-13 2019-05-13 Virtual image display method, device, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN110102053A CN110102053A (en) 2019-08-09
CN110102053B true CN110102053B (en) 2021-12-21

Family

ID=67489675

Country Status (1)

Country Link
CN (1) CN110102053B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111083509B (en) * 2019-12-16 2021-02-09 腾讯科技(深圳)有限公司 Interactive task execution method and device, storage medium and computer equipment
CN111135579A (en) * 2019-12-25 2020-05-12 米哈游科技(上海)有限公司 Game software interaction method and device, terminal equipment and storage medium
CN111589139B (en) * 2020-05-11 2023-03-28 深圳市腾讯网域计算机网络有限公司 Virtual object display method and device, computer equipment and storage medium
CN112601098A (en) * 2020-11-09 2021-04-02 北京达佳互联信息技术有限公司 Live broadcast interaction method and content recommendation method and device
CN114895970B (en) * 2021-01-26 2024-02-27 博泰车联网科技(上海)股份有限公司 Virtual character growth method and related device
CN113746931B (en) * 2021-09-10 2022-11-22 联想(北京)有限公司 Data synchronization method and device
CN114187429B (en) * 2021-11-09 2023-03-24 北京百度网讯科技有限公司 Virtual image switching method and device, electronic equipment and storage medium
CN114415907B (en) * 2022-01-21 2023-08-18 腾讯科技(深圳)有限公司 Media resource display method, device, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101692681A (en) * 2009-09-17 2010-04-07 杭州聚贝软件科技有限公司 Method and system for realizing virtual image interactive interface on phone set terminal
CN101931621A (en) * 2010-06-07 2010-12-29 上海那里网络科技有限公司 Device and method for carrying out emotional communication in virtue of fictional character
CN108234276A (en) * 2016-12-15 2018-06-29 腾讯科技(深圳)有限公司 Interactive method, terminal and system between a kind of virtual image
CN108230436A (en) * 2017-12-11 2018-06-29 网易(杭州)网络有限公司 The rendering intent of virtual resource object in three-dimensional scenic
CN108959595A (en) * 2018-07-12 2018-12-07 腾讯科技(深圳)有限公司 Based on virtual and real Website construction and experiential method and its device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7409647B2 (en) * 2000-09-19 2008-08-05 Technion Research & Development Foundation Ltd. Control of interactions within virtual environments
CN108984087B (en) * 2017-06-02 2021-09-14 腾讯科技(深圳)有限公司 Social interaction method and device based on three-dimensional virtual image
CN108066988A (en) * 2018-01-04 2018-05-25 腾讯科技(深圳)有限公司 target setting method, device and storage medium
CN109091869B (en) * 2018-08-10 2022-07-26 腾讯科技(深圳)有限公司 Method and device for controlling action of virtual object, computer equipment and storage medium

Also Published As

Publication number Publication date
CN110102053A (en) 2019-08-09

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant