CN113426110A - Virtual character interaction method and device, computer equipment and storage medium


Info

Publication number
CN113426110A
CN113426110A
Authority
CN
China
Prior art keywords
virtual character
role
virtual
character
state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110704882.XA
Other languages
Chinese (zh)
Other versions
CN113426110B (en)
Inventor
郭畅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shanghai Co Ltd
Original Assignee
Tencent Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shanghai Co Ltd
Priority to CN202110704882.XA
Publication of CN113426110A
Application granted
Publication of CN113426110B
Legal status: Active
Anticipated expiration

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/533 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game, for prompting the player, e.g. by displaying a game menu
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/825 Fostering virtual characters
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8058 Virtual breeding, e.g. tamagotchi

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application relates to a virtual character interaction method and apparatus, a computer device, and a storage medium. The method comprises the following steps: displaying a virtual character, the eyes of the virtual character presenting a focus gaze state such that the gaze line indicated by the eyes of the virtual character points to a lens focus position; changing the displayed state of the virtual character in response to an interactive operation triggered for the virtual character; and, when the state of the virtual character is within an effective state range for maintaining the focus gaze state, keeping the gaze line indicated by the eyes of the virtual character pointing to the lens focus position while the state of the virtual character changes. With this method, the dynamic changes of the virtual character can be fully displayed, the amount of information displayed is increased, and the user can get to know the virtual character thoroughly.

Description

Virtual character interaction method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a virtual character interaction method, apparatus, computer device, and storage medium.
Background
With the development of computer technology, human-computer interaction applications of the computer-game type, such as multiplayer online tactical competitive games and battle-chess games, have become a form of entertainment for more and more people. A player can control a selected virtual character to perform game operations in the provided virtual scene. In such applications, the player can preview a selected virtual character in a preview interface to learn the character's appearance.
At present, when a virtual character is displayed in a preview interface, the character model is often simply posed for the player to observe. Dynamic interaction between the virtual character and the player is limited, the dynamic changes of the virtual character cannot be effectively displayed, the amount of information shown is limited, and it is inconvenient for the player to get to know the virtual character.
Disclosure of Invention
In view of the above technical problems, it is necessary to provide a virtual character interaction method, apparatus, computer device, and storage medium that can fully display the dynamic changes of a virtual character, increase the amount of information displayed, and help users get to know the virtual character thoroughly.
A virtual character interaction method, the method comprising:
displaying a virtual character, the eyes of the virtual character presenting a focus gaze state such that a gaze line indicated by the eyes of the virtual character points to a lens focus position;
changing the displayed state of the virtual character in response to an interactive operation triggered for the virtual character; and
when the state of the virtual character is within an effective state range for maintaining the focus gaze state, keeping the gaze line indicated by the eyes of the virtual character pointing to the lens focus position while the state of the virtual character changes.
An apparatus for virtual character interaction, the apparatus comprising:
a character display module, configured to display a virtual character, the eyes of the virtual character presenting a focus gaze state such that a gaze line indicated by the eyes of the virtual character points to a lens focus position; and
an interaction response module, configured to change the displayed state of the virtual character in response to an interactive operation triggered for the virtual character, and, when the state of the virtual character is within an effective state range for maintaining the focus gaze state, to keep the gaze line indicated by the eyes of the virtual character pointing to the lens focus position while the state of the virtual character changes.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
displaying a virtual character, the eyes of the virtual character presenting a focus gaze state such that a gaze line indicated by the eyes of the virtual character points to a lens focus position;
changing the displayed state of the virtual character in response to an interactive operation triggered for the virtual character; and
when the state of the virtual character is within an effective state range for maintaining the focus gaze state, keeping the gaze line indicated by the eyes of the virtual character pointing to the lens focus position while the state of the virtual character changes.
A computer-readable storage medium having a computer program stored thereon, the computer program, when executed by a processor, implementing the following steps:
displaying a virtual character, the eyes of the virtual character presenting a focus gaze state such that a gaze line indicated by the eyes of the virtual character points to a lens focus position;
changing the displayed state of the virtual character in response to an interactive operation triggered for the virtual character; and
when the state of the virtual character is within an effective state range for maintaining the focus gaze state, keeping the gaze line indicated by the eyes of the virtual character pointing to the lens focus position while the state of the virtual character changes.
In the virtual character interaction method and apparatus, the computer device, and the storage medium, the eyes of the displayed virtual character present a focus gaze state, and the gaze line indicated by the eyes of the virtual character points to the lens focus position. While the virtual character changes its displayed state in response to a triggered interactive operation, if the state of the virtual character is within the effective state range for maintaining the focus gaze state, the gaze line indicated by the eyes of the virtual character keeps pointing to the lens focus position. Thus, during human-computer interaction between the user and the displayed virtual character, the eyes of the virtual character and the lens focus position remain in a direct-view state within the effective state range. When the virtual character passes through different display states during the interaction, it is shown still keeping direct view with the user, so the rich dynamic changes of the virtual character during the interaction are effectively displayed, the amount of information displayed is increased, and the user can perceive the dynamic-change effects of the virtual character during the interaction and thereby get to know the virtual character thoroughly.
Drawings
FIG. 1 is a diagram of an application environment of a method for virtual character interaction in one embodiment;
FIG. 2 is a flowchart illustrating a method for virtual character interaction according to one embodiment;
FIG. 3 is a schematic diagram of an embodiment of an interface change for a left turn of a virtual character;
FIG. 4 is a schematic diagram of an embodiment of an interface change for a right turn of a virtual character;
FIG. 5 is a diagram illustrating interface changes for a virtual character scaling down in one embodiment;
FIG. 6 is a schematic diagram illustrating interface changes for the entrance of a virtual character in one embodiment;
FIG. 7 is a diagram of an interface for gifting a gift to a virtual character, under an embodiment;
FIG. 8 is a diagram illustrating an interface change for viewing character data of a virtual character in one embodiment;
FIG. 9 is a diagram of an interface displaying a preview background in one embodiment;
FIG. 10 is a flow diagram illustrating the construction of character resources in one embodiment;
FIG. 11 is a schematic diagram illustrating an interface of a preview state of a stereoscopic virtual character in one embodiment;
FIG. 12 is a diagram of an interface for a right turn of a stereoscopic virtual character in one embodiment;
FIG. 13 is a diagram of an interface for a left turn of a stereoscopic virtual character in one embodiment;
FIG. 14 is a schematic diagram of the process of constructing character resources in another embodiment;
FIG. 15 is a schematic diagram of an interface for previewing a stereoscopic virtual character in one embodiment;
FIG. 16 is a block diagram showing the construction of a virtual character interaction apparatus according to an embodiment;
FIG. 17 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The virtual character interaction method provided by the application can be applied to the application environment shown in FIG. 1, in which the terminal 102 communicates with the server 104 via a network. A game application may run on the terminal 102 and obtain game application resources, such as the character resources of a virtual character, from the server 104, so that the terminal 102 can render a presentation based on those character resources. A preview interface of the game application is displayed on the terminal 102, and the virtual character selected by the user is displayed in the preview interface. The eyes of the displayed virtual character present a focus gaze state, and the gaze line indicated by the eyes of the virtual character points to the lens focus position. The user can trigger an interactive operation on the displayed virtual character, and the terminal 102 changes the displayed state of the virtual character in response. While the displayed state of the virtual character is changing, if the state of the virtual character is within the effective state range for maintaining the focus gaze state, the gaze line indicated by the eyes of the virtual character keeps pointing to the lens focus position. The terminal 102 may be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer, a vehicle-mounted device, or a portable wearable device; the server 104 may be implemented as an independent server or as a server cluster formed by a plurality of servers.
In one embodiment, as shown in FIG. 2, a virtual character interaction method is provided. The method is described by taking its application to the terminal 102 in FIG. 1 as an example, and includes the following steps:
Step 202: display the virtual character, the eyes of the virtual character presenting a focus gaze state such that the gaze line indicated by the eyes of the virtual character points to the lens focus position.
A virtual character is an object that a user can interact with in a virtual environment; the object may be a virtual person, a virtual animal, an animation character, and the like. For example, the virtual character may be a person or an animal displayed in the virtual environment. A virtual environment is a virtual scene provided by an application running on the terminal, and may be a three-dimensional virtual environment or a two-dimensional virtual environment. A three-dimensional virtual environment may be a simulation of the real world, a semi-simulated and semi-fictional scene, or a purely fictional scene. Correspondingly, the virtual character may be a two-dimensional virtual character or a stereoscopic (three-dimensional) virtual character in different scenes. A stereoscopic virtual character may specifically be a three-dimensional model created based on animation skeleton technology; each stereoscopic virtual character has a shape and a volume corresponding to the character in the three-dimensional virtual environment and thereby presents a corresponding character image. A gaze state is a state in which the virtual character looks in a certain direction or at a certain position, and the focus gaze state is the state in which the virtual character looks at the lens focus position. The lens focus position is the position from which the terminal 102 captures the displayed picture; that is, the picture displayed by the terminal 102 is captured from the lens focus position, which can simulate the position of the user's eyes when the user looks at the terminal 102. Specifically, as the user operates the terminal 102, the picture of the virtual character is captured from the lens focus position, presenting the viewing angle at which the user observes the virtual character and forming a picture from the user's perspective; when the eyes of the virtual character look at the lens focus position, the interaction effect of the virtual character looking directly at the user is presented.
Specifically, the terminal 102 displays the virtual character, for example in a character preview interface of the terminal 102. The eyes of the displayed virtual character present a focus gaze state, i.e., the gaze line indicated by the eyes of the virtual character points to the lens focus position, so that the virtual character and the user form a direct-view interaction effect. The gaze line is the line of sight formed by the orientation of the virtual character's eyes; the gaze line indicated by the eyes of the virtual character pointing to the lens focus position means that the eyes of the virtual character are directed at the lens focus position. In a specific application, the virtual character may also be displayed in different postures in the character preview interface of the terminal 102. For example, the virtual character may repeatedly display different actions, such as breathing, greeting, standing, walking, or running, and the posture may correspond to the specific virtual character, that is, different virtual characters can be given corresponding postures for display in the terminal 102, improving the character's sense of realism.
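To make the gaze geometry concrete, the following is a minimal sketch of computing the gaze line as the direction from the character's eyes to the lens focus position. It assumes the lens focus position is available as a point in scene space; the Vec3 shape and the function names are illustrative, not from the patent.

```typescript
// Minimal vector helpers; Vec3 and the function names are illustrative.
interface Vec3 { x: number; y: number; z: number; }

const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const len = (v: Vec3): number => Math.hypot(v.x, v.y, v.z);
const normalize = (v: Vec3): Vec3 => {
  const l = len(v) || 1; // guard against a zero-length vector
  return { x: v.x / l, y: v.y / l, z: v.z / l };
};

// The gaze line is the direction from the character's eyes to the lens focus
// position, i.e. the point from which the displayed picture is captured.
function gazeDirection(eyePosition: Vec3, lensFocusPosition: Vec3): Vec3 {
  return normalize(sub(lensFocusPosition, eyePosition));
}
```

An engine would typically re-evaluate this direction each frame and feed it into the eye (and, as described later, head) rotation.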
Step 204: change the displayed state of the virtual character in response to an interactive operation triggered for the virtual character.
An interactive operation is an operation triggered by the user to interact with the virtual character, such as a rotation operation, a translation operation, a zoom operation, or a limb contact operation. The displayed state of the virtual character refers to the effect the virtual character presents, and may specifically include the displayed angle, position, action, and other states; changing the displayed state of the virtual character allows different effects of the virtual character to be presented in the terminal 102. For example, changing the angle at which the virtual character is displayed can show the display effect of the virtual character at different viewing angles.
Specifically, for the virtual character displayed in the terminal 102, the user may trigger an interactive operation, and when the terminal 102 detects the interactive operation triggered for the virtual character, the terminal 102 changes the displayed state of the virtual character accordingly in response. For example, when the virtual character is a stereoscopic virtual character and the user rotates it, the terminal 102 may display the rotated virtual character according to the rotation angle applied by the user, thereby changing the display state, and specifically the display angle, of the virtual character. By changing the displayed state of the virtual character in response to the interactive operation triggered by the user, interactive processing between the user and the virtual character is realized, so that the user can get to know the virtual character thoroughly.
Step 206: when the state of the virtual character is within the effective state range for maintaining the focus gaze state, keep the gaze line indicated by the eyes of the virtual character pointing to the lens focus position while the state of the virtual character changes.
The effective state range is the state range within which the focus gaze state is maintained. When the state of the virtual character is within the effective state range, the virtual character keeps the focus gaze state, that is, the eyes of the virtual character keep looking at the lens focus position.
Specifically, when the user triggers an interactive operation on the virtual character, the virtual character gradually changes its corresponding display state. During this state change, if the state of the virtual character is within the effective state range for maintaining the focus gaze state, the gaze state of the virtual character does not change: the virtual character remains in the focus gaze state, that is, the gaze line indicated by the eyes of the virtual character keeps pointing to the lens focus position. The virtual character can thus keep direct view with the user while its display state changes, which improves the user's sense of immersion and enhances the interaction experience between the virtual character and the user. At the same time, the virtual character is shown still keeping direct view with the user across the different display states it passes through during the interaction, so the rich dynamic changes of the virtual character are effectively displayed, the amount of information displayed by the virtual character is increased, and the user can perceive the dynamic-change effects during the interaction and thereby get to know the virtual character thoroughly.
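As a rough sketch of the gate described in step 206: the state change proceeds regardless, and only the gaze behavior is conditioned on the effective range. The state fields and callbacks below are assumptions for illustration; the patent leaves the concrete representation of the effective state range open.

```typescript
// A sketch of the step 206 gate. The DisplayState fields and the range
// predicate are illustrative assumptions; the patent only requires that the
// focus gaze state be kept while the state stays inside an effective range.
interface DisplayState {
  angleDeg: number;           // displayed rotation angle
  distanceFromStart: number;  // displacement from the initial preview position
  actionAmplitude: number;    // amplitude of the currently displayed action
}

function updateGaze(
  state: DisplayState,
  inEffectiveRange: (s: DisplayState) => boolean,
  keepGazeAtLensFocus: () => void,
  releaseGaze: () => void,
): void {
  if (inEffectiveRange(state)) {
    keepGazeAtLensFocus(); // gaze line keeps pointing at the lens focus position
  } else {
    releaseGaze(); // outside the effective range the focus gaze is not maintained
  }
}
```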
In this virtual character interaction method, the eyes of the displayed virtual character present a focus gaze state, and the gaze line indicated by the eyes points to the lens focus position. While the virtual character changes its displayed state in response to a triggered interactive operation, if the state of the virtual character is within the effective state range for maintaining the focus gaze state, the gaze line keeps pointing to the lens focus position. Therefore, during human-computer interaction between the user and the displayed virtual character, the eyes of the virtual character and the lens focus position remain in a direct-view state within the effective state range, which improves the user's sense of immersion and enhances the interaction experience between the virtual character and the user. At the same time, when the virtual character is in different display states during the interaction, its dynamic changes are shown while it still keeps direct view with the user, so the rich dynamic changes of the virtual character during the interaction are effectively displayed, the amount of information displayed is increased, and the user can perceive the dynamic-change effects of the virtual character and get to know it thoroughly.
In one embodiment, the virtual character is a stereoscopic virtual character, and changing the displayed state of the virtual character in response to an interactive operation triggered for the virtual character comprises: adjusting the displayed angle of the stereoscopic virtual character in response to a rotation operation triggered for the stereoscopic virtual character.
Here the displayed virtual character is a stereoscopic virtual character, which may be a three-dimensional model created based on animation skeleton technology; each stereoscopic virtual character has a shape and a volume corresponding to the character in the three-dimensional virtual environment and presents a corresponding character image. The interactive operations the user can trigger for the stereoscopic virtual character include a rotation operation, i.e., the user can rotate the stereoscopic virtual character to view its image from different angles. The rotation operation may be triggered by the user through a rotation control, or directly on the stereoscopic virtual character through a rotation gesture; how the rotation operation is triggered can be configured according to the needs of the actual scene. For example, for a stereoscopic virtual character displayed in the interface of the terminal 102, the user may trigger the interaction through a rotation control in the interface; for a stereoscopic virtual character displayed in augmented reality or virtual reality, the user may trigger the rotation operation through a preset rotation gesture. The rotation direction can also be set according to actual needs, and may include left rotation, right rotation, up-and-down flipping, and other kinds of rotation. Furthermore, the rotation angle range of the virtual character can be set as needed; for example, it may be limited to rotation within a certain range, or allow 360-degree omnidirectional rotation.
Specifically, when the user triggers a rotation operation for the stereoscopic virtual character, the terminal 102 determines the rotation angle corresponding to the rotation operation in response, and adjusts the displayed angle of the stereoscopic virtual character based on that rotation angle. The stereoscopic virtual character is thus displayed rotating, its image can be shown from different angles, the amount of information it displays is increased, and the user can get to know the stereoscopic virtual character thoroughly.
Further, keeping the gaze line indicated by the eyes of the virtual character pointing to the lens focus position while the state of the virtual character changes, when that state is within the effective state range for maintaining the focus gaze state, comprises: when the displayed angle of the stereoscopic virtual character is within the effective angle range for maintaining the focus gaze state, keeping the gaze line indicated by the eyes of the stereoscopic virtual character pointing to the lens focus position while the angle of the stereoscopic virtual character is adjusted.
The effective angle range is the angle range within which the focus gaze state is maintained; when the angle of the stereoscopic virtual character is within the effective angle range, the stereoscopic virtual character keeps the focus gaze state, that is, its eyes keep looking at the lens focus position. The effective angle range can be set flexibly according to actual needs. For example, it can be set to 30 degrees to the left and 50 degrees to the right, meaning the stereoscopic virtual character keeps the focus gaze state as long as it is rotated no more than 30 degrees to the left or 50 degrees to the right. Different stereoscopic virtual characters have different images and may be displayed in different states, so their effective angle ranges may differ. For example, for stereoscopic virtual character A, the effective angle range may be [-30, +50], i.e., from 30 degrees to the left to 50 degrees to the right; for stereoscopic virtual character B, the effective angle range may be [-15, +40], i.e., from 15 degrees to the left to 40 degrees to the right.
Specifically, when the user triggers a rotation operation for the stereoscopic virtual character, the angle at which the stereoscopic virtual character is displayed in the terminal 102 is adjusted by rotation. While the angle is being adjusted, the terminal 102 compares the displayed angle of the stereoscopic virtual character against the effective angle range for maintaining the focus gaze state. If the displayed angle is within the effective angle range, the gaze line indicated by the eyes of the stereoscopic virtual character keeps pointing to the lens focus position throughout the adjustment, that is, the eyes of the stereoscopic virtual character keep looking at the lens focus position, achieving the effect of keeping direct view with the user.
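A small sketch of the per-character angle check, using the example ranges given above ([-30, +50] for character A, [-15, +40] for character B). The table-driven layout is an assumption; negative values denote rotation to the left.

```typescript
// Per-character effective angle ranges, following the examples in the text.
const effectiveAngleRange: Record<string, [number, number]> = {
  characterA: [-30, 50],
  characterB: [-15, 40],
};

function keepsFocusGaze(characterId: string, displayedAngleDeg: number): boolean {
  const range = effectiveAngleRange[characterId];
  if (!range) return false; // unknown character: no focus gaze configured
  const [minDeg, maxDeg] = range;
  return displayedAngleDeg >= minDeg && displayedAngleDeg <= maxDeg;
}
```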
In a specific application, as shown in FIG. 3, the stereoscopic virtual character displayed in the terminal is shown in a posture of directly viewing the lens focus position from the front. The user can trigger a rotation operation by clicking the left-rotation control, and the stereoscopic virtual character rotates to the left; when the rotation angle is within the effective angle range for maintaining the focus gaze state, the gaze line indicated by the eyes of the stereoscopic virtual character points to the lens focus position throughout the rotation, that is, the stereoscopic virtual character keeps directly viewing the lens focus position. In another specific application, as shown in FIG. 4, the user can trigger a rotation operation by clicking the right-rotation control, and the stereoscopic virtual character rotates to the right; when the rotation angle is within the effective angle range for maintaining the focus gaze state, the gaze line indicated by the eyes of the stereoscopic virtual character likewise points to the lens focus position throughout the rotation.
In this embodiment, for the rotation operation triggered by the user on the stereoscopic virtual character, the terminal 102 adjusts the displayed angle of the stereoscopic virtual character and shows its image from different angles. While the angle is being adjusted, if the displayed angle is within the effective angle range for maintaining the focus gaze state, the gaze line indicated by the eyes of the stereoscopic virtual character keeps pointing to the lens focus position, achieving the effect of maintaining a direct-view state with the user. This enhances the interaction experience between the stereoscopic virtual character and the user; at the same time, the stereoscopic virtual character is shown still keeping direct view with the user at different angles, its dynamic changes are effectively displayed, the amount of information it displays is increased, and the user can get to know the virtual character thoroughly.
In one embodiment, keeping the gaze line indicated by the eyes of the stereoscopic virtual character pointing to the lens focus position while the angle of the stereoscopic virtual character is adjusted, when the displayed angle is within the effective angle range for maintaining the focus gaze state, comprises: when the displayed angle of the stereoscopic virtual character is within the effective angle range for maintaining the focus gaze state, adjusting the head orientation of the stereoscopic virtual character while its angle is adjusted, so that the gaze line indicated by the eyes of the stereoscopic virtual character keeps pointing to the lens focus position.
Specifically, while the angle of the stereoscopic virtual character is adjusted, the gaze line indicated by its eyes is kept pointing to the lens focus position by adjusting the orientation of its head. When the terminal 102 determines that the displayed angle of the stereoscopic virtual character is within the effective angle range for maintaining the focus gaze state, it adjusts the head orientation of the stereoscopic virtual character during the angle adjustment so that the gaze line indicated by the eyes keeps pointing to the lens focus position. That is, when the terminal 102 adjusts the displayed angle of the stereoscopic virtual character in response to the rotation operation triggered by the user, the whole stereoscopic virtual character is rotated in the direction specified by the rotation operation; during the rotation, if the displayed angle is within the effective angle range for maintaining the focus gaze state, i.e., the stereoscopic virtual character needs to look at the lens focus position, the head orientation of the stereoscopic virtual character is further adjusted so that the gaze line indicated by its eyes keeps pointing to the lens focus position. Further, when the head orientation is adjusted, the eyes of the stereoscopic virtual character can be adjusted at the same time, for example by rotating them, so that the gaze line keeps pointing to the lens focus position and the focus gaze state is maintained.
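The head-orientation adjustment could be sketched as a look-at computation with clamping, so the head turns toward the lens focus position while the eyes cover any residual angle. The coordinate convention and clamp limits below are assumptions for illustration, not values from the patent.

```typescript
// A sketch of pointing the head toward the lens focus position.
interface Vec3 { x: number; y: number; z: number; }

const clamp = (v: number, lo: number, hi: number): number =>
  Math.min(hi, Math.max(lo, v));

// Yaw/pitch (degrees) that would orient the head from headPos toward the
// lens focus position, assuming a z-forward, y-up coordinate system.
function headAnglesToward(headPos: Vec3, lensFocus: Vec3): { yawDeg: number; pitchDeg: number } {
  const dx = lensFocus.x - headPos.x;
  const dy = lensFocus.y - headPos.y;
  const dz = lensFocus.z - headPos.z;
  const yawDeg = (Math.atan2(dx, dz) * 180) / Math.PI;      // turn left/right
  const horiz = Math.hypot(dx, dz) || 1;                     // guard divide-by-zero
  const pitchDeg = (Math.atan2(dy, horiz) * 180) / Math.PI;  // tilt up/down
  return {
    // Clamp so the head turn stays natural; eye rotation can cover the rest.
    yawDeg: clamp(yawDeg, -60, 60),
    pitchDeg: clamp(pitchDeg, -30, 30),
  };
}
```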
In this embodiment, the terminal 102 adjusts the head orientation of the stereoscopic virtual character so that the gaze line indicated by its eyes points to the lens focus position, allowing the character to look at the lens focus position more naturally. This enhances the interaction experience between the stereoscopic virtual character and the user, and further strengthens the dynamic effect of the character still keeping direct view with the user when displayed at different angles, effectively displaying the character's dynamic changes, increasing the amount of information it displays, and helping the user get to know the virtual character thoroughly.
In one embodiment, changing the displayed state of the virtual character in response to an interactive operation triggered for the virtual character comprises: adjusting the displayed position of the virtual character in response to a display position adjustment operation triggered for the virtual character.
The interactive operations the user can trigger for the virtual character include a display position adjustment operation, i.e., the user can adjust the display position of the virtual character to move it to different positions and view its appearance there, for example against different backgrounds. The display position adjustment operation may include a translation operation, a zoom operation, and other operations that can change the position of the virtual character. A translation operation may be the user dragging the virtual character within the interface of the terminal 102 to display it at different positions, freeing space for other operations in the interface, such as viewing the character data of the virtual character. A zoom operation may be the user zooming the virtual character within the interface of the terminal 102 to change its displayed size. Through display position adjustment operations, the user can control where and how the virtual character is displayed, so that the user can fully view the character's image while performing other kinds of operations in the corresponding space of the interface. In a specific application, several kinds of display position adjustment operations, such as translation and zooming, can take effect together; for example, a zoom operation can be triggered on the virtual character first, followed by a translation operation on the zoomed character, so that the display position can be adjusted more flexibly and operating efficiency is improved.
Further, the display position adjustment operation may be triggered by the user through a position adjustment control, or directly on the virtual character through a position adjustment gesture; how the operation is triggered can be configured according to the needs of the actual scene. For example, for a virtual character displayed in the interface of the terminal 102, the user may trigger the interaction through a position adjustment control in the interface; for a virtual character displayed in augmented reality or virtual reality, the user may trigger the display position adjustment operation through a preset position adjustment gesture.
Specifically, when the user triggers a display position adjustment operation for the virtual character, the terminal 102 determines the position adjustment parameter corresponding to the operation in response, and adjusts the displayed position of the virtual character based on that parameter. The virtual character is thus displayed at different positions, its image can be shown at each of them, the amount of information it displays is increased, and the user can conveniently get to know the virtual character thoroughly.
Further, keeping the gaze line indicated by the eyes of the virtual character pointing to the lens focus position while the state of the virtual character changes, when that state is within the effective state range for maintaining the focus gaze state, comprises: when the displayed position of the virtual character is within the effective position range for maintaining the focus gaze state, keeping the gaze line indicated by the eyes of the virtual character pointing to the lens focus position while the position of the virtual character is adjusted.
The effective position range is the position range within which the focus gaze state is maintained; when the position of the virtual character is within the effective position range, the virtual character keeps the focus gaze state, that is, its eyes keep looking at the lens focus position. The effective position range can be set flexibly according to actual needs. For example, it can be set as a circular region of a certain radius around the virtual character's initial position, i.e., the virtual character keeps the focus gaze state as long as it is adjusted within that circular region. Different virtual characters have different images, sizes, and volumes, may be displayed in different states, and so may have different effective position ranges. For example, for virtual character C, the effective position range may be within a circle of radius r1; for virtual character D, the effective position range may be within a circle of radius r2.
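Following the circular-region example above, a sketch of the position check might look like this; the concrete radii and the 2D screen coordinates are illustrative assumptions standing in for r1 and r2.

```typescript
// Effective position range as a circle around the initial preview position.
interface Point { x: number; y: number; }

const effectiveRadius: Record<string, number> = {
  characterC: 120, // illustrative pixel radius standing in for r1
  characterD: 200, // illustrative pixel radius standing in for r2
};

function inEffectivePositionRange(characterId: string, initial: Point, current: Point): boolean {
  const r = effectiveRadius[characterId];
  if (r === undefined) return false; // no range configured for this character
  return Math.hypot(current.x - initial.x, current.y - initial.y) <= r;
}
```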
Specifically, when the user triggers a display position adjustment operation for the virtual character, the position at which the virtual character is displayed in the terminal 102 is adjusted. While the position is being adjusted, the terminal 102 compares the displayed position of the virtual character against the effective position range for maintaining the focus gaze state. If the displayed position is within the effective position range, the gaze line indicated by the eyes of the virtual character keeps pointing to the lens focus position throughout the adjustment, that is, the eyes of the virtual character keep looking at the lens focus position, achieving the effect of keeping direct view with the user.
In a specific application, as shown in FIG. 5, the stereoscopic virtual character displayed in the terminal is shown in a posture of directly viewing the lens focus position from the front. The user can zoom the stereoscopic virtual character with a zoom gesture, thereby triggering a display position adjustment operation and changing the character's displayed position. When the displayed position of the stereoscopic virtual character is within the effective position range, the gaze line indicated by its eyes points to the lens focus position throughout the position adjustment, that is, the stereoscopic virtual character keeps directly viewing the lens focus position.
In this embodiment, for the display position adjustment operation triggered by the user on the virtual character, the terminal 102 adjusts the displayed position of the virtual character and shows its image at different positions. While the position is being adjusted, if the displayed position is within the effective position range for the focus gaze state, the gaze line indicated by the eyes of the virtual character keeps pointing to the lens focus position, achieving the effect of keeping a direct-view state with the user. This enhances the interaction experience between the virtual character and the user; at the same time, the virtual character is shown still keeping direct view with the user at different display positions, its dynamic changes are effectively displayed, the amount of information it displays is increased, and the user can get to know the virtual character thoroughly.
In one embodiment, changing the displayed state of the virtual character in response to an interactive operation triggered for the virtual character comprises: in response to a limb contact operation triggered for the virtual character, controlling the virtual character to display an action matching the limb contact operation.
The interactive operations the user can trigger for the virtual character include a limb contact operation, i.e., the user can trigger limb contact on the virtual character to realize limb-contact interaction. For example, the user can click the hand, face, head, or other parts of the virtual character, and the virtual character can respond differently depending on the part touched, for example with different feedback actions, so that the rich, dynamically changing image of the virtual character is displayed through limb-contact interaction. The types of limb contact operation can be preset according to actual needs; for example, a feedback action may be triggered by clicking a body part. Different virtual characters can support different types of limb contact operation, and their responses may also differ. For example, for virtual character A, the supported limb contact operations may include clicking the hand and the head, with corresponding feedback actions action1 and action2; for virtual character B, the supported limb contact operations may include clicking the hand, the head, and the face, with corresponding feedback actions action3, action4, and action5.
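The part-to-action mapping described here could be represented as per-character lookup tables, as in the following sketch; it reuses the action1..action5 naming from the examples above, and the data shapes are assumptions.

```typescript
// Per-character mapping from touched body part to feedback action.
type BodyPart = 'hand' | 'head' | 'face';

const feedbackActions: Record<string, Partial<Record<BodyPart, string>>> = {
  characterA: { hand: 'action1', head: 'action2' },
  characterB: { hand: 'action3', head: 'action4', face: 'action5' },
};

function feedbackFor(characterId: string, part: BodyPart): string | undefined {
  // Unsupported parts return undefined: the character simply does not react.
  return feedbackActions[characterId]?.[part];
}
```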
Further, the limb contact operation may be triggered by the user through a limb contact control, by directly touching the limb of the virtual character, or through a limb contact gesture; how the operation is triggered can be configured according to the needs of the actual scene. For example, for a virtual character displayed in the interface of the terminal 102, the user may touch the limb of the displayed character to trigger limb-contact interaction; for a virtual character displayed in augmented reality or virtual reality, the user may trigger the limb contact operation through a preset limb contact gesture.
Specifically, when the user triggers a limb contact operation for the virtual character, the terminal 102 determines the action matching the limb contact operation in response, and has the virtual character display that action. The virtual character can thus display corresponding feedback actions for different limb contact operations, the amount of information it displays is increased, and the user can conveniently get to know the virtual character thoroughly.
Further, keeping the gaze line indicated by the eyes of the virtual character pointing to the lens focus position while the state of the virtual character changes, when that state is within the effective state range for maintaining the focus gaze state, comprises: when the action displayed by the virtual character is within the effective action range for maintaining the focus gaze state, keeping the gaze line indicated by the eyes of the virtual character pointing to the lens focus position while the virtual character displays the action.
The effective action range is the action range within which the focus gaze state is maintained; when the action displayed by the virtual character is within the effective action range, the virtual character keeps the focus gaze state, that is, its eyes keep looking at the lens focus position. The effective action range can be set flexibly according to actual needs. For example, it can be determined from the amplitude of the action displayed by the virtual character: when the amplitude is small, the action can be considered within the effective action range, and when the amplitude is large, it is not. Different virtual characters have different images and sizes, their displayed action changes may differ, and so may their effective action ranges. For example, for virtual character C, the effective action range may cover hand feedback actions and head feedback actions, i.e., for triggered limb contact operations on the hand and the head, both the hand and head feedback actions of virtual character C are within the effective action range; for virtual character D, the effective action range may cover only head feedback actions, i.e., for a triggered limb contact operation on the head, the head feedback action of virtual character D is within the effective action range.
Specifically, when the user triggers a limb contact operation for the virtual character, the terminal 102 controls the virtual character to display the action matching the operation, and compares the displayed action against the effective action range for maintaining the focus gaze state. If the displayed action is within the effective action range, the gaze line indicated by the eyes of the virtual character keeps pointing to the lens focus position while the action is displayed, that is, the eyes of the virtual character keep looking at the lens focus position, achieving the effect of keeping a direct-view state with the user.
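Since the text suggests the effective action range can be judged by action amplitude, a sketch of that test might be as simple as a threshold comparison; the amplitude metric and cutoff below are assumptions for illustration.

```typescript
// Amplitude-based effective action range test, per the example above:
// small-amplitude feedback actions keep the focus gaze, large ones do not.
interface FeedbackAction {
  name: string;
  amplitude: number; // e.g. normalized joint displacement in [0, 1]
}

const AMPLITUDE_THRESHOLD = 0.4; // hypothetical cutoff

function actionKeepsFocusGaze(action: FeedbackAction): boolean {
  return action.amplitude <= AMPLITUDE_THRESHOLD;
}
```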
In this embodiment, for the limb contact operation triggered by the user on the virtual character, the terminal 102 controls the action displayed by the virtual character, specifically controlling it to display the action matching the limb contact operation. While the virtual character displays the action, if the displayed action is within the effective action range for maintaining the focus gaze state, the gaze line indicated by the eyes of the virtual character keeps pointing to the lens focus position, achieving the effect of keeping a direct-view state with the user. This enhances the interaction experience between the virtual character and the user; at the same time, the virtual character is shown still keeping direct view with the user while displaying different actions, its dynamic changes are effectively displayed, the amount of information it displays is increased, and the user can get to know the virtual character thoroughly.
In one embodiment, displaying the virtual character comprises: displaying a character preview interface in response to a character preview trigger operation; and displaying, in a character preview area of the character preview interface, the virtual character selected by the character preview trigger operation.
The character preview trigger operation may be an operation triggered by the user to preview a character, for example an operation triggered by clicking a character preview control, or the selection of a virtual character during character selection, which triggers the display of the virtual character. The character preview interface may be an interface that lets the user preview various virtual characters; it may be an independent interface or a sub-interface embedded in another interface. For example, the character preview interface may be an interface dedicated to previewing characters, displaying preview information related to the virtual character without involving the running of a game; it may also be embedded in the running of a game, for example displayed during character selection in the game, with the virtual character selected by the user displayed in it in real time. The character preview area is the area of the interface in which the virtual character is displayed; the virtual character the user wants to preview is shown in this area.
Specifically, the user may trigger a character preview trigger operation on the terminal 102 to preview a virtual character in the application, including various preview information related to the virtual character, such as its overall image, its interactive actions, and its character data. In response to the character preview trigger operation, for example the user clicking a character preview control in the application, the terminal 102 displays the character preview interface and displays the virtual character selected by the operation in the character preview area of that interface. The displayed virtual character may be presented in a predetermined pose, for example a breathing pose, in the character preview area.
In this embodiment, the terminal responds to the character preview trigger operation triggered by the user by displaying the selected virtual character in the character preview area of the displayed character preview interface. The virtual character can thus be shown in the character preview area, so that the user can view it fully in the character preview interface and learn its character information, such as its character data and character image.
In one embodiment, displaying the virtual character selected by the character preview trigger operation in the character preview area of the character preview interface comprises: displaying, in the character preview area, the entrance action of the virtual character selected by the character preview trigger operation; and, after the entrance action ends, displaying the virtual character in the character preview area in a preview posture.
The entrance action can be set flexibly according to actual needs; different entrance actions can be set for different virtual characters, and one or more entrance actions can be set for a virtual character so that different entrance actions are displayed in different scenes. The entrance action may be an animation of the virtual character, such as running in from a distance, landing from above, or performing a dance in place. The preview posture is the state of the virtual character during preview, and may include standing, breathing, greeting, and other postures. A preview posture can be configured for each virtual character according to actual needs.
Specifically, when displaying the selected virtual character in the character preview area, the terminal 102 first displays the entrance action of the virtual character selected by the character preview trigger operation. Specifically, the terminal 102 may determine the character identifier of the selected virtual character and, based on that identifier, query the corresponding preset entrance action, so as to display the entrance action in the character preview area of the character preview interface. After the entrance action ends, the virtual character is displayed in the character preview area in a preview posture, for example breathing with one hand on the hip.
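The entrance flow described here (query the entrance action by character identifier, play it, then settle into the preview posture) could be sketched as follows. The playAnimation/setPose hooks and the clip names are assumed engine-side helpers, not APIs from the patent.

```typescript
// A sketch of the entrance flow in the character preview area.
interface CharacterView {
  playAnimation(name: string): Promise<void>; // resolves when the clip ends
  setPose(name: string): void;
}

const entranceActions: Record<string, string> = {
  characterA: 'run_in_from_distance', // illustrative clip names
  characterB: 'land_from_above',
};

async function showInPreviewArea(characterId: string, view: CharacterView): Promise<void> {
  const entrance = entranceActions[characterId] ?? 'default_entrance';
  await view.playAnimation(entrance); // entrance action first
  view.setPose('preview_breathing');  // then hold the preview posture
}
```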
In a specific application, as shown in fig. 6, when displaying the three-dimensional virtual character, the terminal first displays the entrance action of the selected three-dimensional virtual character, specifically an entrance action with both hands raised to the sides; after the entrance action ends, the three-dimensional virtual character is displayed in a preview posture with one hand on the hip, with the gaze line indicated by the eyes of the three-dimensional virtual character pointing to the lens focus position.
In this embodiment, when the terminal displays the virtual character, it first displays the entrance action of the selected virtual character and then displays the virtual character in the preview posture after the entrance action ends. This enriches the action changes of the virtual character displayed in the character preview interface, shows richer character information, and helps the user understand the virtual character.
In one embodiment, the virtual character interaction method further comprises: displaying an item gifting entry associated with the virtual character in a preview operation area of the character preview interface; in response to an item gifting operation triggered through the item gifting entry, controlling the virtual character to display a feedback action for the gifted item; and when the feedback action displayed by the virtual character is within the effective action range for keeping the focus gaze state, keeping the gaze line indicated by the eyes of the virtual character pointing to the lens focus position while the virtual character displays the action.
The preview operation area is an area, provided in the character preview interface, in which the user operates on the previewed virtual character; various preview operation options, such as gift giving and outfit changing, can be displayed in it. The item gifting entry is an entry for gifting items to the previewed virtual character; through it, the user can gift items to the virtual character, realizing interaction based on the gifted items. The item gifting operation is an operation, triggered by the user through the item gifting entry, of gifting items to the virtual character; various virtual items, such as game props and dress-up outfits, can be gifted. The feedback action is the virtual character's reaction to the item gifting operation; for example, it may be any of various interactive actions, such as an action expressing fondness for the gifted item or an action thanking the user. The feedback action can be flexibly set according to actual needs: different types of feedback actions can be preset for different virtual characters, and the feedback action can also be related to the type of item gifted, so that the virtual character displays different feedback actions for different types of gifted items, enriching the feedback expression of the virtual character.
Specifically, the character preview interface of the terminal further includes a preview operation area in which the user can perform preview operations on the virtual character. An item gifting entry is displayed in the preview operation area, so that the user can perform interactive item-gifting processing on the virtual character. The user can open an item list by clicking the item gifting entry and select a target item in the list to trigger the item gifting operation, thereby gifting the target item to the virtual character. The terminal responds to the item gifting operation triggered by the user, determines the feedback action of the virtual character for the gifted item, and controls the virtual character to display the corresponding feedback action in the character preview area. In a specific application, the terminal may further determine the gifted item, query a feedback action set corresponding to the virtual character based on the gifted item, and determine from the set a feedback action matching the gifted item, so as to control the virtual character to display that feedback action.
Further, while the virtual character displays the feedback action, the terminal determines whether the feedback action is within the effective action range for keeping the focus gaze state; if so, the gaze line indicated by the eyes of the virtual character is kept pointing to the lens focus position, achieving the effect of keeping a direct-view state with the user. The effective action range refers to the action range within which the focus gaze state is kept: when the action displayed by the virtual character is within it, the virtual character keeps the focus gaze state, that is, the eyes of the virtual character keep looking at the lens focus position. The effective action range can be flexibly set according to actual needs; for example, it can be determined according to the amplitude of the displayed action: a small-amplitude action can be considered within the effective action range, and a large-amplitude action not.
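The amplitude test described above could look roughly as follows; the threshold value, the tuple-based focus positions, and the function name are assumptions made for the sketch, not part of the method as claimed.

```python
AMPLITUDE_THRESHOLD = 0.3  # assumed normalized amplitude limit for the effective range

def gaze_target_for_action(action_amplitude: float,
                           lens_focus: tuple,
                           state_focus: tuple) -> tuple:
    """Keep the lens focus while small-amplitude feedback actions play."""
    in_effective_range = action_amplitude <= AMPLITUDE_THRESHOLD
    return lens_focus if in_effective_range else state_focus

# e.g. a slight bow (amplitude 0.2) keeps the eyes on the lens focus,
# while a large jump (amplitude 0.8) releases the focus-gaze state.
print(gaze_target_for_action(0.2, (0, 0, 1), (0, 0, 0)))  # (0, 0, 1)
print(gaze_target_for_action(0.8, (0, 0, 1), (0, 0, 0)))  # (0, 0, 0)
```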
In a specific application, as shown in fig. 7, the stereoscopic virtual character displayed in the terminal is shown in a posture of directly viewing the lens focus position from the front, and a gift-giving control is displayed on its right side. The user can gift items to the character by clicking the control, and the character displays a feedback action for the gifted item, giving corresponding feedback for the gift. While the character displays the action, if the feedback action is within the effective action range for keeping the focus gaze state, the gaze line indicated by the character's eyes is kept pointing to the lens focus position, achieving the effect of keeping direct view with the user.
In this embodiment, an item gifting entry associated with the virtual character is displayed in the preview operation area of the character preview interface, and the user can trigger an item gifting operation through the entry to gift an item to the virtual character. The terminal, in response to the operation, controls the virtual character to display a feedback action for the gifted item, giving corresponding feedback for the item gifting operation. While the virtual character displays the action, if the feedback action is within the effective action range for keeping the focus gaze state, the gaze line indicated by the eyes of the virtual character is kept pointing to the lens focus position, achieving the effect of keeping direct view with the user. This enhances the interactive experience between the virtual character and the user, shows the dynamic change of the virtual character keeping direct view with the user while displaying the feedback action for the gifted item, effectively displays the dynamic change of the virtual character, increases the amount of information displayed by the virtual character, and helps the user fully understand the virtual character.
In one embodiment, the virtual character interaction method further comprises: displaying a character information entry associated with the virtual character in the preview operation area of the character preview interface; in response to a character information triggering operation triggered through the character information entry, displaying the character information corresponding to the virtual character; and adjusting the distribution position of the character preview area in the character preview interface according to the character information, and displaying the virtual character in the adjusted character preview area.
The preview operation area is an area, provided in the character preview interface, in which the user operates on the previewed virtual character; various preview operation options, such as gift giving, outfit changing, and character information display, can be shown in it. The character information entry is an entry for displaying the character information corresponding to the previewed virtual character; the character information includes various information related to the virtual character, such as the character background, voice, character skills, and character talents. Through the character information entry, the user can view the various types of information of the virtual character. The character information triggering operation is an operation, triggered by the user, of displaying the character information of the virtual character, and may specifically be a triggering operation on the character information entry. The distribution position refers to the region occupied by the character preview area within the character preview interface; it is adjusted so as to free enough space in the character preview interface to display the character information corresponding to the virtual character, facilitating the user's preview of that information.
Specifically, the character preview interface of the terminal further includes a preview operation area in which the user can perform preview operations on the virtual character. A character information entry associated with the virtual character is displayed in the preview operation area so that the user can obtain the character information of the virtual character. The user can trigger the character information triggering operation by clicking the character information entry to view the character information corresponding to the virtual character; the terminal responds to the operation, queries the character information corresponding to the virtual character, and displays the queried information. In a specific application, the terminal may further determine the character information type corresponding to the user's operation and query information of the corresponding type for display, so as to show the type of character information the user actually needs.
Further, the terminal adjusts the distribution position of the character preview area on the character preview interface according to the displayed character information, so that the character preview area remains displayed while the character information is shown, and displays the virtual character in the adjusted character preview area. For example, in response to the character information triggering operation, the terminal may determine the area that the character information will occupy when displayed, and adjust the distribution position of the character preview area based on that occupied area, for example by reducing the character preview area proportionally and moving it to an edge area of the character preview interface, and then display the virtual character in the adjusted area. Further, the virtual character displayed in the adjusted character preview area can be kept in the preset preview posture, while the gaze line indicated by the eyes of the virtual character can point to a preset position, such as straight ahead of the virtual character, the lens focus position, or the displayed character information; the specific gaze line can be flexibly set according to actual needs.
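As a rough sketch of this layout adjustment, assuming rectangles with x/y/w/h fields and an arbitrary 0.75 scale factor (neither appears in the text):

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: float   # left edge
    y: float   # top edge
    w: float   # width
    h: float   # height

def adjust_preview_area(preview: Rect, info_panel: Rect,
                        scale: float = 0.75) -> Rect:
    """Shrink the preview area and shift it clear of the info panel."""
    new_w, new_h = preview.w * scale, preview.h * scale
    # translate left so the info panel, occupying the right side, stays visible
    new_x = min(preview.x, info_panel.x - new_w)
    return Rect(max(0.0, new_x), preview.y, new_w, new_h)

adjusted = adjust_preview_area(Rect(200, 0, 600, 800), Rect(500, 0, 300, 800))
print(adjusted)  # Rect(x=50, y=0, w=450.0, h=600.0)
```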
In a specific application, as shown in fig. 8, the virtual character displayed in the terminal is shown in a posture of directly viewing the lens focus position from the front, and a character data entry control is displayed on its right side. The user can view the character's data by clicking the control: the terminal displays the character information corresponding to the virtual character, the distribution position of the character preview area on the character preview interface is translated to the left, and the virtual character is displayed in the adjusted character preview area.
In this embodiment, a character information entry associated with the virtual character is displayed in the preview operation area of the character preview interface, and the user can trigger a character information triggering operation through the entry to display the character information corresponding to the virtual character. The terminal responds to the operation by displaying the character information and adjusting the distribution position of the character preview area on the character preview interface according to the character information, for example by reducing the character preview area and then translating it to an edge position of the interface, and displays the virtual character in the adjusted character preview area. The character information corresponding to the virtual character is thus displayed while the virtual character remains displayed in the character preview area, which enriches the amount of information related to the virtual character displayed in the character preview interface and helps the user fully understand the virtual character.
In one embodiment, the virtual character interaction method further comprises: in response to a skill preview operation triggered for a character skill in the character information, displaying, through the virtual character in the adjusted character preview area, a skill release action matching the target skill selected by the skill preview operation; and when the skill release action displayed by the virtual character is within the effective action range for keeping the focus gaze state, keeping the gaze line indicated by the eyes of the virtual character pointing to the lens focus position while the virtual character displays the action.
The character information corresponding to the virtual character includes character skills. A character skill refers to an ability that the virtual character can exercise during the running of the application, such as releasing a certain attack or defense skill. Character skills are related to the game logic of the specific application as well as to the virtual character, and different virtual characters may have different character skills. The skill release action refers to the posture action of the virtual character when releasing the corresponding character skill, such as a blocking posture when a defense skill is released.
Specifically, the terminal displays the virtual character in the adjusted character preview area, and the user can trigger a skill preview operation on a character skill in the character information to preview the character skills of the virtual character and view the action image of the virtual character when releasing each skill. The terminal responds to the skill preview operation triggered by the user, determines the skill release action matching the target skill selected by the operation, and displays that action through the virtual character in the adjusted character preview area, so that the user can view the image of the virtual character releasing the corresponding target skill. Further, while the virtual character displays the action, the terminal determines whether the skill release action is within the effective action range for keeping the focus gaze state; if so, the gaze line indicated by the eyes of the virtual character is kept pointing to the lens focus position, that is, the eyes keep looking at the lens focus position, achieving the effect of keeping a direct-view state with the user. The effective action range refers to the action range within which the focus gaze state is kept: when the action displayed by the virtual character is within it, the virtual character keeps the focus gaze state. The effective action range can be flexibly set according to actual needs, for example according to the amplitude of the displayed action: a small-amplitude action can be considered within the effective action range, and a large-amplitude action not.
In this embodiment, the character information displayed on the character preview interface includes the character skills corresponding to the virtual character, and the user can trigger a skill preview operation for a character skill to view the dynamic changes when the virtual character releases each skill. The terminal responds to the skill preview operation by displaying, through the virtual character, the skill release action matching the selected target skill and, when the displayed skill release action is within the effective action range for keeping the focus gaze state, keeps the gaze line indicated by the eyes of the virtual character pointing to the lens focus position while the action is displayed, achieving the effect of keeping a direct-view state with the user. This enhances the interactive experience between the virtual character and the user, shows the dynamic change of the virtual character keeping direct view with the user while displaying the skill release action, effectively displays the dynamic change of the virtual character, increases the amount of information displayed by the virtual character, and helps the user fully understand the virtual character.
In one embodiment, the virtual character interaction method further comprises: displaying a preview background on the character preview interface.
The preview background refers to a background picture in a character preview interface, and can be flexibly set according to actual needs, and specifically, preview backgrounds of different scenes can be set. The preview background can also correspond to the virtual character, that is, different virtual characters can be correspondingly provided with different preview backgrounds, so that the character image of the virtual character can be fully displayed in different preview backgrounds. Specifically, the terminal also displays the preview background on the character preview interface, and other elements in the character preview interface, such as a preview operation area, a character preview area, and the like, can be taken as the foreground of the preview background and are displayed in an overlapping manner above the preview background.
Further, displaying the virtual character selected by the character preview trigger operation in the character preview area of the character preview interface includes: displaying the preview posture of the virtual character in the character preview area in the character preview interface, the preview posture being matched with the preview background.
The preview posture is the state of the virtual character during preview, and specifically includes various postures such as standing, breathing, and waving. The preview posture can be configured for each virtual character according to actual needs. In this embodiment, the preview posture of the virtual character is matched with the preview background, so that the virtual character blends more realistically into the preview background, improving the display effect of the virtual character.
Specifically, when a virtual character is displayed in the character preview area of the character preview interface, the terminal determines the background identifier of the preview background displayed on the character preview interface, queries the preview posture matched with the preview background based on the background identifier, and, after the preview posture is determined, displays the virtual character in the character preview area in the preview posture matched with the preview background. For example, for preview background A, the virtual character can be displayed in the character preview area in a breathing posture with one hand on the hip; for preview background B, the virtual character may be displayed in a running posture.
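A minimal sketch of this background-to-posture lookup, with invented background identifiers mirroring the A/B example above:

```python
# Assumed mapping from background identifiers to matched preview postures.
POSE_BY_BACKGROUND = {
    "background_A": "breathing_hand_on_hip",
    "background_B": "running",
}

def preview_pose_for(background_id: str) -> str:
    """Query the preview posture matched with the displayed preview background."""
    return POSE_BY_BACKGROUND.get(background_id, "standing")  # assumed default

print(preview_pose_for("background_B"))  # running
```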
In a specific application, as shown in fig. 9, the terminal further displays a preview background on the character preview interface and displays the virtual character above the preview background in the character preview area, where the virtual character is displayed in a preview posture matched with the preview background.
In this embodiment, the terminal further displays a preview background on the character preview interface, and when the virtual character is displayed in the character preview area above the preview background, the virtual character is displayed in a preview posture matched with the preview background, so that the image of the virtual character displayed under different preview backgrounds in the character preview interface is enriched, richer character information of the virtual character is displayed, and the user can know the virtual character conveniently.
In one embodiment, the virtual character interaction method further comprises: displaying, in the character preview interface, the interaction affinity between the current account and the virtual character.
The interaction affinity is a quantitative value of the interaction data between the current account and the virtual character, used to represent the degree of interaction between them; generally, the higher the interaction affinity, the more extensive or frequent the interaction. The current account can be the account with which the user logs in to the application client, so the interaction affinity reflects the degree of interaction between the user and the virtual character.
Specifically, the terminal also displays the interaction affinity of the current account and the virtual character in the character preview interface, for example, the interaction affinity of the current account and the virtual character can be displayed in a numerical form in an affinity area associated with the virtual character, so that the interaction degree between the user and the virtual character is visually represented.
Further, displaying the virtual character selected by the character preview trigger operation in the character preview area of the character preview interface includes: displaying the preview posture of the virtual character in the character preview area in the character preview interface, the preview posture being matched with the interaction affinity.
The preview posture is the state of the virtual character during preview, and specifically includes various postures such as standing, breathing, and waving. The preview posture can be configured for each virtual character according to actual needs. In this embodiment, the preview posture of the virtual character is matched with the interaction affinity, so that the degree of interaction with the user can be shown through the preview posture, increasing the amount of information displayed by the virtual character and improving its display effect.
Specifically, when a virtual character is displayed in the character preview interface, the terminal queries the interaction affinity between the current account logged in by the user and the virtual character and displays it in the character preview interface. The terminal then queries a preview posture matched with the interaction affinity and, after the preview posture is determined, displays the virtual character in the character preview area in that posture. For example, at an interaction affinity of 20, the virtual character can be shown in the character preview area in a breathing posture with one hand on the hip; at an interaction affinity of 80, the virtual character may be presented in a waving posture.
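Sketched as a lookup, with the thresholds taken from the 20/80 example above and the posture names assumed:

```python
def pose_for_affinity(affinity: int) -> str:
    """Map an interaction affinity value to a preview posture."""
    if affinity >= 80:
        return "waving"                   # high affinity: greeting pose
    if affinity >= 20:
        return "breathing_hand_on_hip"    # moderate affinity
    return "standing"                     # assumed default posture

print(pose_for_affinity(20))  # breathing_hand_on_hip
print(pose_for_affinity(80))  # waving
```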
In this embodiment, the terminal also displays the interaction affinity between the current account and the virtual character on the character preview interface, and, when displaying the virtual character in the character preview area, displays it in a preview posture matched with the interaction affinity. This enriches the image of the virtual character displayed under different interaction affinities in the character preview interface, shows richer character information of the virtual character, and helps the user understand the virtual character.
In one embodiment, the virtual character interaction method further comprises: in the process of changing the state of the virtual character, when the state of the virtual character is out of the effective state range for keeping the focus watching state, the watching sight line indicated by the eyes of the virtual character points to the focus position corresponding to the state of the virtual character; and after the virtual character finishes changing the state, when the state of the virtual character is in the effective state range for keeping the focus watching state, enabling the watching sight line indicated by the eyes of the virtual character to point to the focus position of the lens.
The effective state range refers to a state range for keeping a focus watching state, and when the state of the virtual character is in the effective state range, the virtual character keeps the focus watching state, namely, the eyes of the virtual character keep looking at the focal position of the lens. In the process of changing the state of the virtual character, if the state of the virtual character is out of the effective state range, the eyes of the virtual character can be enabled to look at the focal position corresponding to the current state, such as the eyes can look at the front of the virtual character, after the state is changed, if the state of the virtual character is in the effective state range keeping the focal fixation state, the fixation sight line indicated by the eyes of the virtual character points to the focal position of the lens, and therefore the focal position indicated by the fixation sight line indicated by the eyes of the virtual character can be flexibly adjusted according to the state of the virtual character.
Specifically, in the process of changing the state of the virtual character, if the terminal determines that the state of the virtual character is out of the effective state range for maintaining the focus watching state, the terminal directs the watching sight line indicated by the eyes of the virtual character to the focus position corresponding to the state of the virtual character. Further, after the virtual character finishes changing the state, whether the state of the virtual character is in the effective state range for keeping the focus watching state is determined, and if yes, the watching sight line indicated by the eyes of the virtual character is directed to the focus position of the lens. For example, when the virtual character shows an action, in the complete action flow, there may be partial phases of actions that do not belong to the effective action range, and the gaze line indicated by the eyes of the virtual character may point to the focal position corresponding to the state where the virtual character is located, such as looking straight ahead; and when the displayed action of the virtual character belongs to the effective action range or the action is finished and the virtual character is restored to the preview state, the watching sight line indicated by the eyes of the virtual character points to the focal position of the lens, for example, the eyes of the virtual character return to the focal position of the lens.
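The gaze-targeting rule just described can be condensed into a small decision function; the boolean flags and tuple focus positions are illustrative assumptions, not the claimed implementation.

```python
def update_gaze(changing_state: bool,
                in_effective_range: bool,
                lens_focus: tuple,
                state_focus: tuple) -> tuple:
    """Pick the gaze target according to the effective state range."""
    if changing_state and not in_effective_range:
        return state_focus   # e.g. look straight ahead mid-action
    if in_effective_range:
        return lens_focus    # keep or restore the focus-gaze state
    return state_focus

# mid-action, outside the range: eyes follow the state-specific focus
print(update_gaze(True, False, (0, 0, 1), (0, 1, 0)))   # (0, 1, 0)
# change finished, back inside the range: eyes return to the lens focus
print(update_gaze(False, True, (0, 0, 1), (0, 1, 0)))   # (0, 0, 1)
```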
In this embodiment, in the process of changing the state of the virtual character, if the state is outside the effective state range, the terminal may make the eyes of the virtual character look at the focus position corresponding to the current state, for example straight ahead of the virtual character. After the change of state is completed, if the state of the virtual character is within the effective state range in which the focus gaze state is kept, the terminal makes the gaze line indicated by the eyes of the virtual character point to the lens focus position. The focus position indicated by the gaze line can thus be flexibly adjusted according to the state of the virtual character, effectively displaying the dynamic change of the virtual character, increasing the amount of information displayed, and helping the user fully understand the virtual character.
In one embodiment, the virtual character interaction method further comprises: determining a role identifier of a virtual role; acquiring role resources corresponding to the virtual roles based on the role identifiers; and rendering the role resources to display the virtual roles.
The role identifier may be information for identifying each virtual role, such as a role name, a role code, a role number, and the like of the virtual role. Different virtual roles correspond to different role identifications, and the only corresponding virtual role can be determined through the role identifications, namely the virtual roles and the role identifications have one-to-one correspondence. The role resources are resource information required for rendering the virtual role, and describe various information of the virtual role during display, such as various information of appearance, volume, color and the like. The role resources can be correspondingly constructed for each virtual role, and the virtual roles meeting the role requirements can be displayed in the terminal by rendering the role resources.
Specifically, when the terminal displays the virtual character, the terminal determines a character identifier of the virtual character to be displayed, and after the character identifier is determined, the terminal acquires a character resource corresponding to the virtual character based on the character identifier, and specifically, the terminal acquires the character resource corresponding to the virtual character from a character resource library according to the inquiry of the character identifier. And after the role resources of the virtual roles to be displayed are obtained, the terminal performs rendering processing on the role resources, so that the virtual roles are displayed in the terminal.
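A hedged sketch of this identifier-to-resource-to-render flow; the resource library contents and the render stand-in are invented for the example.

```python
# Assumed character resource library keyed by character identifier.
ROLE_RESOURCE_LIBRARY = {
    "role_001": {"mesh": "hero.mesh", "animator": "hero.controller"},
}

def display_virtual_character(role_id: str) -> None:
    """Identifier -> resource lookup -> rendering, as described above."""
    resource = ROLE_RESOURCE_LIBRARY.get(role_id)
    if resource is None:
        raise KeyError(f"no character resource for {role_id!r}")
    render(resource)

def render(resource: dict) -> None:
    # stand-in for the engine's actual rendering pass
    print(f"rendering {resource['mesh']} with {resource['animator']}")

display_virtual_character("role_001")
```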
In this embodiment, the corresponding character resource is obtained based on the character identifier of the virtual character to be displayed, the resource is rendered, and the virtual character is displayed at the terminal, realizing the rendering display of the virtual character. The eyes of the displayed virtual character present the focus gaze state, and the gaze line indicated by the eyes points to the lens focus position, so that the image of the virtual character can be displayed at the terminal and the user can fully understand the virtual character.
In an embodiment, as shown in fig. 10, before obtaining role resources corresponding to virtual roles based on role identifiers, the method further includes a process of constructing role resources, where the process of constructing role resources may be implemented by a terminal or a server, and specifically includes:
step 1002, a character animation template corresponding to the virtual character is obtained.
The role animation template can be an animation template which can be used by virtual roles in batch, specifically can be a Prefab file, and different display states of the virtual roles can be constructed based on the role animation template. Specifically, when the process of constructing the character resource is implemented by the server, the server may obtain a character animation template corresponding to the virtual character.
Step 1004, generate an animation state machine associated with the character animation template.
The animation state machine is used for managing and maintaining various animations generated based on the character animation template, and switching of virtual characters among different states can be achieved by switching the animation state machine, so that animation transformation effects are achieved. Specifically, the server generates an animation state machine associated with the character animation template, and specifically, parameters of each animation state can be configured, so that the animation state machine is constructed according to the character animation template.
Step 1006, obtaining the state configuration information according to the valid state range corresponding to the virtual role.
The state configuration information is information for configuring an effective state range corresponding to the virtual role, such as configuring a parameter corresponding to the effective state range. For example, when the valid state range is the valid angle range, the state configuration information may include valid angle data of the eyes and the head of the virtual character, so as to facilitate determination of the gaze pointing position of the eyes of the virtual character at each display angle according to the state configuration information.
And step 1008, obtaining the role resources corresponding to the virtual roles according to the role animation templates, the animation state machines and the state configuration information.
After the character animation template, the animation state machine, and the state configuration information are obtained, the server obtains the character resource corresponding to the virtual character based on them. Specifically, the animation state machine can be imported into the character animation template and configured according to the state configuration information to obtain the character resource, through which the virtual character in each display state can be displayed on the terminal. Further, after the server constructs the character resource, it can send the resource to the terminal so that the terminal performs rendering and display based on it. Alternatively, the process of constructing the character resource can be executed directly at the terminal, and the terminal can store the obtained character resource so as to render it when the virtual character needs to be displayed, realizing the display of the virtual character.
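One possible shape of the assembly in steps 1002 to 1008, sketched with an assumed dictionary layout standing in for the prefab, state machine, and configuration files:

```python
def build_role_resource(template: dict,
                        state_machine: dict,
                        state_config: dict) -> dict:
    """Combine template, state machine, and state configuration (steps 1002-1008)."""
    resource = dict(template)             # start from the prefab-like template
    resource["animator"] = state_machine  # import the generated state machine
    resource["look_at"] = state_config    # configure the effective state range
    return resource

resource = build_role_resource(
    template={"prefab": "hero.prefab"},
    state_machine={"states": ["idle", "turn_head", "turn_eye"]},
    state_config={"valid_angle_range": (-40, 50)},  # assumed left/right limits
)
print(resource)
```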
In the embodiment, the role resources corresponding to the virtual role are generated according to the role animation template, the animation state machine and the state configuration information of the virtual role, so that the role resources of the virtual role are obtained, the rendering display of the virtual role is realized on the basis of the role resources at the terminal, the displayed eyes of the virtual role present the focus watching state, and the watching sight line indicated by the eyes of the virtual role points to the focus position of the lens, so that the image of the virtual role can be displayed at the terminal, and the user can fully know the virtual role.
In one embodiment, the virtual character interaction method further comprises: acquiring a role animation template corresponding to the virtual role; performing gaze sight analysis on the character animation template to obtain a gaze sight pointing position corresponding to the character animation template; and determining the effective state range corresponding to the virtual character according to the gazing sight line pointing position.
The role animation template can be an animation template which can be used by virtual roles in batch, specifically can be a Prefab file, and different display states of the virtual roles can be constructed based on the role animation template. Specifically, the terminal may further analyze the gaze line of the virtual character based on the character animation template to determine a pointing position of the gaze line of the virtual character in each frame of picture of the character animation template, so as to determine an effective state range corresponding to the virtual character.
Specifically, the character animation template corresponding to the virtual character is generated in advance by art-resource means; the frames within a character animation template may differ, and the frames of the templates corresponding to different virtual characters also differ, so the gaze line pointing positions of the virtual character in the character animation template need to be analyzed to determine the effective state range corresponding to the virtual character. In a specific implementation, after the terminal obtains the character animation template corresponding to the virtual character, it can perform gaze line analysis on the template: specifically, it can analyze the gaze line of the virtual character in each frame of the template and determine the gaze line pointing position of the virtual character. After the gaze line pointing position of each frame is obtained, the terminal determines the effective state range corresponding to the virtual character from those positions. For example, the terminal may determine, from the gaze pointing position of each frame, the target frames whose gaze points at the lens focus position, and then determine, based on the timeline positions of those target frames, the effective state range for keeping the focus gaze state, such as an effective angle range.
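The frame analysis could be sketched as follows, assuming one gaze-target label per frame of a left-to-right limit animation and the 40 degree / 50 degree limits used later in the text; the data shapes are invented.

```python
def effective_range_from_frames(gaze_targets, left_deg=40.0, right_deg=50.0):
    """Derive the effective angle interval from per-frame gaze targets."""
    span = left_deg + right_deg
    n = len(gaze_targets)
    hits = [i for i, target in enumerate(gaze_targets) if target == "lens_focus"]
    if not hits:
        return None  # this character never holds the focus gaze state

    def frame_to_angle(i: int) -> float:
        # frame 0 is the left limit (-left_deg); the last frame is the right limit
        return i / (n - 1) * span - left_deg

    return frame_to_angle(hits[0]), frame_to_angle(hits[-1])

frames = ["ahead"] * 44 + ["lens_focus"] * 12 + ["ahead"] * 44  # 100 frames
print(effective_range_from_frames(frames))  # (0.0, 10.0)
```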
In this embodiment, the terminal analyzes the gaze line of the virtual character based on the character animation template to determine the gaze line pointing position in each frame, thereby determining the effective state range corresponding to the virtual character. The effective state range can thus be determined accurately and quickly from the character animation template, improving the processing efficiency of virtual character interaction.
The application also provides an application scene, and the virtual character interaction method is applied to the application scene.
Specifically, the application of the virtual character interaction method in the application scenario is as follows:
in a battle chess type battle game application, a user player can operate a character in the game through the terminal. A battle chess game is a turn-based role-playing game in which characters move and battle on a grid map. Because a battle chess game plays like chess, it is called a turn-based battle chess game, formally named Strategy Role-Playing Game (Strategy RPG, SRPG, or S.RPG); such games generally support synchronized experiences across multiple platforms, such as PC, Mac, or Ubuntu.
In the battle chess game application, each three-dimensional virtual character has a display interface and display animations. In the traditional technology, the conventional breathing animation of each three-dimensional virtual character can be played on the display interface; the player can watch the fixed animation of the character and can also observe it through 360-degree rotation. After rotating the character, however, there is no other interaction, and the eyes of the rotated three-dimensional virtual character always look directly ahead of the character and cannot focus on the player. The player lacks a sense of immersion when spinning the character, as if merely spinning a static model. In this embodiment, when the player rotates the character within a certain frontal angle range, the head and eyes of the character keep turning with the lens, so that the three-dimensional virtual character appears to be looking directly at the player at all times, allowing the player to immerse more deeply in the game for a better experience.
Specifically, as shown in fig. 11, in the present embodiment, a stereoscopic virtual character is displayed in the character preview interface of the terminal; the eyes of the virtual character present a focus gaze state, and the gaze line indicated by the eyes points to the lens focus position, thereby achieving the effect of looking directly at the player. The character preview interface also displays information related to the character, including affinity information with respect to the player, a gift-giving entry, a character data entry, a story map entry, a character talent entry, a preview background, and the like. Further, as shown in fig. 12, after the user rotates the character to the right by a certain angle through the rotation control in the character preview interface, the gaze line indicated by the character's eyes keeps pointing to the lens focus position, that is, the effect of looking directly at the player is kept. As shown in fig. 13, after the user rotates the character to the left by a certain angle through the rotation control, the gaze line indicated by the character's eyes also keeps pointing to the lens focus position, maintaining the effect of direct view with the player.
In this embodiment, no matter whether the lens turns to the left, the middle, or the right, the head and eyes of the stereoscopic virtual character always look at the front of the lens, that is, keep looking at the player, so the player experience is better. In a specific implementation, besides creating an idle animation of the stereoscopic virtual character, a left-to-right limit animation based on the idle is also created, and the focus gaze state is maintained within a certain effective angle range, for example 40 degrees to the left and 50 degrees to the right. If the limit animation of the stereoscopic virtual character has 100 frames in total, the front-facing point falls at approximately 40/90, about 44%, of the animation, so the character needs to maintain the front-view state, that is, the direct-view state with the player, at around frames 44 to 55 when the animation is created.
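The frame arithmetic above, worked through under the stated assumptions (100 frames, 40 degrees left, 50 degrees right):

```python
total_frames = 100
left_deg, right_deg = 40, 50
span = left_deg + right_deg        # 90 degrees from the left limit to the right limit

front_fraction = left_deg / span   # 40/90, about 0.444: where the front view falls
front_frame = round(front_fraction * total_frames)
print(front_frame)                 # 44

# The text keeps the authored front-view (direct-gaze) window roughly over
# frames 44-55 of the 100-frame limit animation.
```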
Further, the animation resources of the stereoscopic virtual character are implemented based on Unity, a cross-platform 2D/3D game engine that can be used to develop games for various platforms, console video games, mobile device games, and the like. The platforms supported by Unity also extend to the HTML5 web platform based on WebGL technology, and to new-generation multimedia platforms such as tvOS, Oculus Rift, and ARKit. Besides the development of video games, Unity is a comprehensive authoring tool widely used for interactive content such as architectural visualization and real-time three-dimensional animation. In addition, a corresponding animation state machine, namely an Animator Controller, needs to be set for the stereoscopic virtual character to build the basic display animation; besides the basic idle animation, the state machine needs a head-turn animation and an eye-turn animation for the character. The animation state machine is configured to manage and maintain a set of animation clips and the associated animation transitions. Generally, a stereoscopic virtual character appears in multiple animations and can switch between them when certain game conditions occur; for example, a switch may be made from a walking animation clip to a jumping animation clip each time the space key is pressed. It is the animation state machine that performs animation state management and control for the stereoscopic virtual character.
When loading the character resources of the stereoscopic virtual character to render and display it, the initial state of the animation corresponding to the character may first be reset, and then the character resources are loaded to render and display the character in the corresponding display scene. Further, each stereoscopic virtual character has a different basic state, and therefore a different effective angle range within which direct view is kept during left-to-right rotation. When determining the effective rotation angle range of each stereoscopic virtual character, the animation resources of the character can be analyzed to determine the character's orientation in each frame, so that the corresponding effective angle range is determined from those orientations. In a specific application, for example on the Unity platform, a direct-view Look At configuration can be added under the character animation template prefab of each stereoscopic virtual character, so as to obtain the effective rotation angle range corresponding to each character. Any finished game object (gameobject) in a scene can be made into a component template, a prefab, for batch application; this suits anything that is essentially reused in a scene, such as enemies, soldiers, weapons, bullets, or identical wall bricks. A prefab acts as a character animation template, resembling a clone but differing in the generated position, angle, or certain attributes.
When the character resources of the stereoscopic virtual character are specifically configured, as shown in fig. 14, the art resources corresponding to the character are drawn in advance, and the finished character art resources are imported into a character resource processing platform, for example the Unity platform, to obtain a character animation template prefab file. On the Unity platform, the form of the stereoscopic virtual character can be previewed based on the prefab file. Fig. 15 is a schematic diagram of an interface for previewing the prefab file corresponding to the example character, where the character is shown in a pose with both hands held out flat. Generating the animation state machine of the stereoscopic virtual character may specifically include generating animation state machines for the various states, such as those corresponding to the head-turn animation and the eye-turn animation. The idle animation of each state needs to be configured for loop playback.
Further, on the Unity platform, for the animation state machine of the head-turn animation, the weight may be set to 1 to ensure the effect of the head-turn animation, and the blending mode may be set to Additive, that is, to activate the head-turn effect. In the Inspector panel of the Unity platform, the Motion Time of the state is bound to the Head parameter, thereby completing the animation state machine configuration for the head-turn animation. Similarly, an animation state machine for the eye-turn animation is set up; after it is made, its Motion Time is bound to the Eye parameter in the Inspector panel, completing the configuration of the animation state machine for the eye-turn animation.
After the animation state machine of the stereoscopic virtual character is obtained, it is imported into the prefab file corresponding to the character to obtain an updated prefab file. On the Unity platform, the animation state machine file may be imported into the Animator panel of the prefab file. The stereoscopic virtual character is then given a direct-view configuration so that, within a certain effective state range, it keeps watching the lens focus position; the character animation template, the animation state machine, and the direct-view configuration information are integrated to obtain the character resource of the stereoscopic virtual character. Specifically, a state configuration information component may be added, namely the state configuration information of the direct-view Look At state, and after the parameters and settings of the character's eyes watching the lens are confirmed, the character resources corresponding to the stereoscopic virtual character are obtained.
After the character resources of the three-dimensional virtual character are obtained, rendering and display can be performed on the terminal based on them, so that the eyes of the displayed character present a focus gaze state and the gaze line indicated by the eyes points to the lens focus position. When the angle of the character is within the effective angle range for keeping the focus gaze state, the head and eyes of the character are adjusted along with the user's rotation operation, so that the gaze line keeps pointing to the lens focus position while the display angle changes; that is, direct view with the player is maintained. This enhances the interactive experience between the three-dimensional virtual character and the user, and shows the dynamic change of the character keeping direct view with the user at different display angles, so that the dynamic change of the character is effectively displayed, the amount of information displayed by the character is increased, and the user can fully understand the virtual character; the player has a stronger sense of immersion and a better game experience.
It should be understood that, although the steps in the flowcharts of figs. 2, 10 and 14 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, there is no strict order limitation on these steps, and they may be performed in other orders. Moreover, at least some of the steps in figs. 2, 10, and 14 may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 16, there is provided a virtual character interaction apparatus 1600, which may be a part of a computer device using a software module or a hardware module, or a combination of the two, the apparatus specifically includes: a character display module 1602 and an interactive response module 1604, wherein:
a role display module 1602, configured to display virtual roles; the eyes of the virtual character present a focus gaze state such that a gaze line indicated by the eyes of the virtual character is directed to a lens focus position;
an interaction response module 1604 for changing the displayed state of the virtual character in response to the interaction operation triggered for the virtual character; when the state of the virtual character is in the effective state range of keeping the focus watching state, keeping the watching sight line indicated by the eyes of the virtual character to point to the focus position of the lens in the process of changing the state of the virtual character.
In one embodiment, the virtual character is a stereoscopic virtual character, and the interaction response module 1604 is further configured to adjust an angle displayed by the stereoscopic virtual character in response to a rotation operation triggered for the stereoscopic virtual character; when the angle displayed by the stereoscopic virtual character is within the effective angle range for keeping the focus watching state, the watching sight line indicated by the eyes of the stereoscopic virtual character is kept pointing to the lens focus position in the process of adjusting the angle of the stereoscopic virtual character.
In one embodiment, the interactive response module 1604 is further configured to adjust the head orientation of the stereoscopic virtual character during the adjustment of the angle by the stereoscopic virtual character when the displayed angle of the stereoscopic virtual character is within the effective angle range for maintaining the focus gaze state, so that the gaze line indicated by the eyes of the stereoscopic virtual character is kept pointing to the lens focus position.
In one embodiment, the interaction response module 1604 is further configured to adjust the displayed position of the virtual character in response to a display position adjustment operation triggered for the virtual character; when the position displayed by the virtual character is in the effective position range for keeping the focus watching state, the watching sight line indicated by the eyes of the virtual character is kept pointing to the focus position of the lens in the process of adjusting the position of the virtual character.
In one embodiment, the interaction response module 1604 is further configured to, in response to a limb contact operation triggered for the virtual character, control the virtual character to display an action matching the limb contact operation; and when the action displayed by the virtual character is within the effective action range for keeping the focus gaze state, keep the gaze line indicated by the eyes of the virtual character pointing to the lens focus position while the virtual character displays the action.
In one embodiment, the character display module 1602 includes a preview interface display module and a character preview module, wherein: the preview interface display module is configured to display a character preview interface in response to a character preview trigger operation; and the character preview module is configured to display, in a character preview area of the character preview interface, the virtual character selected by the character preview trigger operation.
In one embodiment, the character preview module is further configured to display, in the character preview area of the character preview interface, an entrance action of the virtual character selected by the character preview trigger operation; after the entrance action ends, the virtual character is displayed in the character preview area in a preview posture.
In one embodiment, the apparatus further includes an item gifting entry module and an item gifting feedback module, wherein: the item gifting entry module is configured to display, in a preview operation area of the character preview interface, an item gifting entry associated with the virtual character; the item gifting feedback module is configured to control the virtual character to exhibit a feedback action for the gifted item in response to an item gifting operation triggered on the item gifting entry; when the feedback action exhibited by the virtual character is within the effective action range for maintaining the focus gaze state, the gaze line indicated by the eyes of the virtual character is kept pointing to the lens focus position while the virtual character exhibits the action.
In one embodiment, the apparatus further includes a character information entry module and a character information display module, wherein: the character information entry module is configured to display, in a preview operation area of the character preview interface, a character information entry associated with the virtual character; the character information display module is configured to display the character information corresponding to the virtual character in response to a character information trigger operation triggered on the character information entry, adjust the distribution position of the character preview area in the character preview interface according to the character information, and display the virtual character in the adjusted character preview area.
In one embodiment, the apparatus further includes a skill application preview module, configured to display, through the virtual character in the adjusted character preview area, a skill release action matching the target skill selected by a skill preview operation, in response to the skill preview operation triggered on a character skill in the character information; when the skill release action exhibited by the virtual character is within the effective action range for maintaining the focus gaze state, the gaze line indicated by the eyes of the virtual character is kept pointing to the lens focus position while the virtual character exhibits the action.
In one embodiment, the apparatus further includes a preview background display module, configured to display a preview background on the character preview interface; the character preview module is further configured to display, in the character preview area of the character preview interface, the virtual character in a preview posture that matches the preview background.
In one embodiment, the apparatus further includes an affinity display module, configured to display, in the character preview interface, the interaction affinity between the current account and the virtual character; the character preview module is further configured to display, in the character preview area of the character preview interface, the virtual character in a preview posture that matches the interaction affinity.
In one embodiment, the interaction response module 1604 is further configured to, while the virtual character changes state, direct the gaze line indicated by the eyes of the virtual character to a focus position corresponding to the state of the virtual character when the state of the virtual character is outside the effective state range for maintaining the focus gaze state; and, after the virtual character finishes changing state, direct the gaze line indicated by the eyes of the virtual character back to the lens focus position when the state of the virtual character is within the effective state range for maintaining the focus gaze state.
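This fallback embodiment amounts to a small per-frame selection of the gaze target, which might be sketched as follows; all names are illustrative, and the choice of the state-dependent focus position is an assumption.

```python
def select_gaze_target(state, low, high, lens_focus, state_focus):
    """Pick the point the eyes aim at for the current frame.

    state_focus stands for a focus position derived from the character's
    own state (e.g. a point straight ahead of the head); the exact choice
    is an assumption, as the embodiment leaves it open.
    """
    if low <= state <= high:
        return lens_focus   # inside the effective state range: watch the lens focus
    return state_focus      # outside the range: look where the pose naturally points
```

Evaluating this selection again once the state change completes realizes the return of the gaze to the lens focus position described above.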
In one embodiment, the apparatus further includes a character identifier determination module, a character resource acquisition module, and a resource rendering module, wherein: the character identifier determination module is configured to determine the character identifier of the virtual character; the character resource acquisition module is configured to acquire, based on the character identifier, the character resource corresponding to the virtual character; and the resource rendering module is configured to render the character resource so as to display the virtual character.
In one embodiment, the apparatus further includes an animation template acquisition module, a state machine generation module, a configuration information acquisition module, and a character resource generation module, wherein: the animation template acquisition module is configured to acquire the character animation template corresponding to the virtual character; the state machine generation module is configured to generate an animation state machine associated with the character animation template; the configuration information acquisition module is configured to obtain state configuration information according to the effective state range corresponding to the virtual character; and the character resource generation module is configured to obtain the character resource corresponding to the virtual character according to the character animation template, the animation state machine, and the state configuration information.
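As a rough picture of this resource-generation step, the sketch below bundles a character animation template with an animation state machine whose transitions are derived from the effective state range; the data layout and state names are assumptions made for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class AnimationStateMachine:
    states: dict = field(default_factory=dict)       # state name -> animation clip
    transitions: list = field(default_factory=list)  # (src, dst, condition) tuples

    def add_state(self, name, clip):
        self.states[name] = clip

    def add_transition(self, src, dst, condition):
        self.transitions.append((src, dst, condition))

def build_character_resource(template, effective_range):
    """Bundle the character animation template, an associated animation
    state machine, and state configuration information into one resource."""
    low, high = effective_range
    sm = AnimationStateMachine()
    sm.add_state("focus_gaze", template["gaze_clip"])
    sm.add_state("free_look", template["idle_clip"])
    # Transitions derived from the effective state range (illustrative).
    sm.add_transition("focus_gaze", "free_look", lambda s: not low <= s <= high)
    sm.add_transition("free_look", "focus_gaze", lambda s: low <= s <= high)
    return {"template": template,
            "state_machine": sm,
            "state_config": {"effective_range": effective_range}}

# Example: build a resource for a character whose gaze holds within +/-60 degrees.
resource = build_character_resource(
    {"gaze_clip": "clip_gaze_01", "idle_clip": "clip_idle_01"}, (-60.0, 60.0))
```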
In one embodiment, the apparatus further includes an animation template acquisition module, a gaze line analysis module, and an effective state range determination module, wherein: the animation template acquisition module is configured to acquire the character animation template corresponding to the virtual character; the gaze line analysis module is configured to perform gaze line analysis on the character animation template to obtain the gaze line pointing position corresponding to the character animation template; and the effective state range determination module is configured to determine, according to the gaze line pointing position, the effective state range corresponding to the virtual character.
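One plausible reading of the gaze line analysis is to sample the animation template over candidate states, compute where the template's gaze points in each, and keep the states whose gaze falls close enough to the lens focus. The tolerance and sampling scheme in the sketch below are assumptions.

```python
import math

def analyze_effective_range(sample_states, gaze_point_of, lens_focus, tol=0.5):
    """Return (min, max) over the sampled states whose template gaze lands
    within tol world units of the lens focus, or None if none qualify.

    gaze_point_of: callable state -> (x, y, z), standing in for the
    actual gaze-line analysis of the animation template.
    """
    kept = [s for s in sample_states
            if math.dist(gaze_point_of(s), lens_focus) <= tol]
    return (min(kept), max(kept)) if kept else None

# Example: a toy template whose gaze drifts off the focus beyond +/-60 degrees.
effective = analyze_effective_range(
    sample_states=range(-90, 91, 5),
    gaze_point_of=lambda s: (0.05 * max(0, abs(s) - 60), 1.6, 5.0),
    lens_focus=(0.0, 1.6, 5.0))
```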
For the specific definition of the virtual character interaction apparatus, reference may be made to the above definition of the virtual character interaction method, which is not repeated here. Each module in the virtual character interaction apparatus described above may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded, in hardware form, in or independent of a processor in the computer device, or may be stored, in software form, in a memory of the computer device, so that the processor can call and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal whose internal structure may be as shown in fig. 17. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless communication may be realized through Wi-Fi, an operator network, near-field communication (NFC), or other technologies. The computer program, when executed by the processor, implements the virtual character interaction method. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device of the computer device may be a touch layer covering the display screen, a key, a trackball, or a touchpad provided on the housing of the computer device, or an external keyboard, touchpad, or mouse.
Those skilled in the art will appreciate that the structure shown in fig. 17 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution of the present application is applied; a specific computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is further provided, including a memory and a processor, the memory storing a computer program which, when executed by the processor, implements the steps of the above method embodiments.
In one embodiment, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements the steps of the above method embodiments.
In one embodiment, a computer program product or computer program is provided, including computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the steps in the above method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory may include Random Access Memory (RAM) or an external cache memory. By way of illustration and not limitation, RAM may take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, the combination should be considered to be within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention patent. It should be noted that those skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (20)

1. A virtual character interaction method, the method comprising:
displaying a virtual character, the eyes of the virtual character presenting a focus gaze state such that a gaze line indicated by the eyes of the virtual character points to a lens focus position;
changing the displayed state of the virtual character in response to an interactive operation triggered for the virtual character; and
when the state of the virtual character is within an effective state range for maintaining the focus gaze state, keeping the gaze line indicated by the eyes of the virtual character pointing to the lens focus position while the virtual character changes state.
2. The method of claim 1, wherein the virtual character is a stereoscopic virtual character, and changing the displayed state of the virtual character in response to the interactive operation triggered for the virtual character comprises:
adjusting the displayed angle of the stereoscopic virtual character in response to a rotation operation triggered for the stereoscopic virtual character;
and the keeping, when the state of the virtual character is within the effective state range for maintaining the focus gaze state, the gaze line indicated by the eyes of the virtual character pointing to the lens focus position while the virtual character changes state comprises:
when the displayed angle of the stereoscopic virtual character is within an effective angle range for maintaining the focus gaze state, keeping the gaze line indicated by the eyes of the stereoscopic virtual character pointing to the lens focus position while the angle of the stereoscopic virtual character is adjusted.
3. The method of claim 2, wherein the keeping, when the displayed angle of the stereoscopic virtual character is within the effective angle range for maintaining the focus gaze state, the gaze line indicated by the eyes of the stereoscopic virtual character pointing to the lens focus position while the angle of the stereoscopic virtual character is adjusted comprises:
when the displayed angle of the stereoscopic virtual character is within the effective angle range for maintaining the focus gaze state, adjusting the head orientation of the stereoscopic virtual character while the angle of the stereoscopic virtual character is adjusted, so that the gaze line indicated by the eyes of the stereoscopic virtual character is kept pointing to the lens focus position.
4. The method of claim 1, wherein changing the displayed state of the virtual character in response to the interactive operation triggered for the virtual character comprises:
adjusting the displayed position of the virtual character in response to a display position adjustment operation triggered for the virtual character;
and the keeping, when the state of the virtual character is within the effective state range for maintaining the focus gaze state, the gaze line indicated by the eyes of the virtual character pointing to the lens focus position while the virtual character changes state comprises:
when the displayed position of the virtual character is within an effective position range for maintaining the focus gaze state, keeping the gaze line indicated by the eyes of the virtual character pointing to the lens focus position while the position of the virtual character is adjusted.
5. The method of claim 1, wherein changing the displayed state of the virtual character in response to the interactive operation triggered for the virtual character comprises:
controlling the virtual character to exhibit an action matching the limb contact operation in response to a limb contact operation triggered for the virtual character;
and the keeping, when the state of the virtual character is within the effective state range for maintaining the focus gaze state, the gaze line indicated by the eyes of the virtual character pointing to the lens focus position while the virtual character changes state comprises:
when the action exhibited by the virtual character is within an effective action range for maintaining the focus gaze state, keeping the gaze line indicated by the eyes of the virtual character pointing to the lens focus position while the virtual character exhibits the action.
6. The method of any of claims 1-5, wherein displaying the virtual character comprises:
displaying a character preview interface in response to a character preview trigger operation; and
displaying, in a character preview area of the character preview interface, the virtual character selected by the character preview trigger operation.
7. The method of claim 6, wherein displaying the virtual character selected by the character preview trigger operation in the character preview area in the character preview interface comprises:
displaying, in the character preview area of the character preview interface, an entrance action of the virtual character selected by the character preview trigger operation; and
after the entrance action ends, displaying the virtual character in the character preview area in a preview posture.
8. The method of claim 6, further comprising:
displaying, in a preview operation area of the character preview interface, an item gifting entry associated with the virtual character;
controlling the virtual character to exhibit a feedback action for a gifted item in response to an item gifting operation triggered on the item gifting entry; and
when the feedback action exhibited by the virtual character is within an effective action range for maintaining the focus gaze state, keeping the gaze line indicated by the eyes of the virtual character pointing to the lens focus position while the virtual character exhibits the action.
9. The method of claim 6, further comprising:
displaying, in a preview operation area of the character preview interface, a character information entry associated with the virtual character;
displaying the character information corresponding to the virtual character in response to a character information trigger operation triggered on the character information entry; and
adjusting the distribution position of the character preview area in the character preview interface according to the character information, and displaying the virtual character in the adjusted character preview area.
10. The method of claim 9, further comprising:
displaying, through the virtual character in the adjusted character preview area, a skill release action matching the target skill selected by a skill preview operation, in response to the skill preview operation triggered on a character skill in the character information; and
when the skill release action exhibited by the virtual character is within an effective action range for maintaining the focus gaze state, keeping the gaze line indicated by the eyes of the virtual character pointing to the lens focus position while the virtual character exhibits the action.
11. The method of claim 6, further comprising:
displaying a preview background on the character preview interface;
wherein displaying, in the character preview area of the character preview interface, the virtual character selected by the character preview trigger operation comprises:
displaying the virtual character in a preview posture in the character preview area of the character preview interface, the preview posture matching the preview background.
12. The method of claim 6, further comprising:
displaying, in the character preview interface, the interaction affinity between the current account and the virtual character;
wherein displaying, in the character preview area of the character preview interface, the virtual character selected by the character preview trigger operation comprises:
displaying the virtual character in a preview posture in the character preview area of the character preview interface, the preview posture matching the interaction affinity.
13. The method of any one of claims 1 to 12, further comprising:
while the virtual character changes state, directing the gaze line indicated by the eyes of the virtual character to a focus position corresponding to the state of the virtual character when the state of the virtual character is outside the effective state range for maintaining the focus gaze state; and
after the virtual character finishes changing state, directing the gaze line indicated by the eyes of the virtual character to the lens focus position when the state of the virtual character is within the effective state range for maintaining the focus gaze state.
14. The method of claim 1, further comprising:
determining a character identifier of the virtual character;
acquiring, based on the character identifier, a character resource corresponding to the virtual character; and
rendering the character resource to display the virtual character.
15. The method of claim 14, wherein before the acquiring, based on the character identifier, the character resource corresponding to the virtual character, the method further comprises:
acquiring a character animation template corresponding to the virtual character;
generating an animation state machine associated with the character animation template;
obtaining state configuration information according to the effective state range corresponding to the virtual character; and
obtaining the character resource corresponding to the virtual character according to the character animation template, the animation state machine, and the state configuration information.
16. The method of claim 1, further comprising:
acquiring a character animation template corresponding to the virtual character;
performing gaze line analysis on the character animation template to obtain a gaze line pointing position corresponding to the character animation template; and
determining, according to the gaze line pointing position, the effective state range corresponding to the virtual character.
17. An apparatus for virtual character interaction, the apparatus comprising:
a character display module, configured to display a virtual character, the eyes of the virtual character presenting a focus gaze state such that a gaze line indicated by the eyes of the virtual character points to a lens focus position; and
an interaction response module, configured to change the displayed state of the virtual character in response to an interactive operation triggered for the virtual character; when the state of the virtual character is within an effective state range for maintaining the focus gaze state, the gaze line indicated by the eyes of the virtual character is kept pointing to the lens focus position while the virtual character changes state.
18. The apparatus of claim 17, wherein the virtual character is a stereoscopic virtual character;
the interaction response module is further configured to adjust the displayed angle of the stereoscopic virtual character in response to a rotation operation triggered for the stereoscopic virtual character; when the displayed angle of the stereoscopic virtual character is within an effective angle range for maintaining the focus gaze state, the gaze line indicated by the eyes of the stereoscopic virtual character is kept pointing to the lens focus position while the angle of the stereoscopic virtual character is adjusted.
19. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 16.
20. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method of any one of claims 1 to 16.
CN202110704882.XA 2021-06-24 2021-06-24 Virtual character interaction method, device, computer equipment and storage medium Active CN113426110B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110704882.XA CN113426110B (en) 2021-06-24 2021-06-24 Virtual character interaction method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110704882.XA CN113426110B (en) 2021-06-24 2021-06-24 Virtual character interaction method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113426110A true CN113426110A (en) 2021-09-24
CN113426110B CN113426110B (en) 2023-11-17

Family

ID=77754049

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110704882.XA Active CN113426110B (en) 2021-06-24 2021-06-24 Virtual character interaction method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113426110B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108886612A (en) * 2016-02-11 2018-11-23 奇跃公司 Reduce the more depth plane display systems switched between depth plane
US20170322623A1 (en) * 2016-05-05 2017-11-09 Google Inc. Combining gaze input and touch surface input for user interfaces in augmented and/or virtual reality
CN109675307A (en) * 2019-01-10 2019-04-26 网易(杭州)网络有限公司 Display control method, device, storage medium, processor and terminal in game
CN110755845A (en) * 2019-10-21 2020-02-07 腾讯科技(深圳)有限公司 Virtual world picture display method, device, equipment and medium
CN111111184A (en) * 2019-12-26 2020-05-08 珠海金山网络游戏科技有限公司 Virtual lens adjusting method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
17173游戏网 (17173 Game Network): "Tencent's self-developed game closest to a masterpiece! Its graphics were once the best in China, and a mobile version has now finally launched", pages 1 - 11, Retrieved from the Internet <URL:https://baijiahao.baidu.com/s?id=1629241626119098963&wfr=spider&for=pc> *
HISMILE86: "Tianya Mingyue Dao (Moonlight Blade): character gaze (Look At technique)", pages 00 - 00, Retrieved from the Internet <URL:https://www.bilibili.com/video/av757848477/?p=2&spm_id_from=pageDriver> *
新倩女幽魂 (New Chinese Ghost Story): "Ultimate screenshot experience: an all-new camera tool arrives in New Chinese Ghost Story", pages 1 - 7, Retrieved from the Internet <URL:http://news.17173.com/content/11032020/150947453.shtml?utm_source=zaker_rss> *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114053722A (en) * 2021-11-12 2022-02-18 北京完美赤金科技有限公司 Method and device for in-game reward interaction, storage medium and computer equipment
CN114201046A (en) * 2021-12-10 2022-03-18 北京字跳网络技术有限公司 Gaze direction optimization method and device, electronic equipment and storage medium
CN114201046B (en) * 2021-12-10 2023-12-01 北京字跳网络技术有限公司 Gaze direction optimization method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN113426110B (en) 2023-11-17

Similar Documents

Publication Publication Date Title
WO2021258994A1 (en) Method and apparatus for displaying virtual scene, and device and storage medium
CN111659115B (en) Virtual role control method and device, computer equipment and storage medium
US9839844B2 (en) Sprite strip renderer
WO2022083452A1 (en) Two-dimensional image display method and apparatus for virtual object, and device and storage medium
CN112402960B (en) State switching method, device, equipment and storage medium in virtual scene
CN113426110B (en) Virtual character interaction method, device, computer equipment and storage medium
TWI804208B (en) Method of displaying interface of game settlement, device, equipment, storage medium, and computer program product
JP7436707B2 (en) Information processing method, device, device, medium and computer program in virtual scene
CN113318428B (en) Game display control method, nonvolatile storage medium, and electronic device
CN116437137B (en) Live broadcast processing method and device, electronic equipment and storage medium
CN112148125A (en) AR interaction state control method, device, equipment and storage medium
US11645805B2 (en) Animated faces using texture manipulation
CN112870702B (en) Recommendation method, device and equipment for road resources in virtual scene and storage medium
CN113763568A (en) Augmented reality display processing method, device, equipment and storage medium
CN112891940A (en) Image data processing method and device, storage medium and computer equipment
CN113315963A (en) Augmented reality display method, device, system and storage medium
CN113313796B (en) Scene generation method, device, computer equipment and storage medium
CN116095356A (en) Method, apparatus, device and storage medium for presenting virtual scene
CN115619484A (en) Method for displaying virtual commodity object, electronic equipment and computer storage medium
CN115624740A (en) Virtual reality equipment, control method, device and system thereof, and interaction system
WO2024146246A1 (en) Interaction processing method and apparatus for virtual scene, electronic device and computer storage medium
US20240241618A1 (en) Interaction method, apparatus, device and medium
CN117504279A (en) Interactive processing method and device in virtual scene, electronic equipment and storage medium
Moon et al. Designing AR game enhancing interactivity between virtual objects and hand for overcoming space limit
CN115607957A (en) Game control method, device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code: Ref country code: HK; Ref legal event code: DE; Ref document number: 40052800; Country of ref document: HK

GR01 Patent grant