CN117899453A - Interaction method, interaction device, electronic equipment and readable storage medium - Google Patents


Publication number
CN117899453A
Authority
CN
China
Prior art keywords
interaction
virtual character
gazing
identification
head
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410123676.3A
Other languages
Chinese (zh)
Inventor
周晓岚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202410123676.3A priority Critical patent/CN117899453A/en
Publication of CN117899453A publication Critical patent/CN117899453A/en
Pending legal-status Critical Current

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application provides an interaction method, an interaction device, an electronic device, and a readable storage medium. In response to a first gazing event directed at a first virtual character, a first gazing identification is displayed at an associated position of the second virtual character that triggered the first gazing event, so that the triggering character is indicated by the identification. In response to a second gazing operation directed at the second virtual character, an interaction channel is established between the first head-display device and the second head-display device, so that the first virtual character and the second virtual character interact individually in the game scene. The first virtual character can thus trigger individual interaction with the second virtual character through a gazing operation in the game, which simplifies the operation steps for triggering interaction between virtual characters and reduces the response frequency and data processing load of the head-display device during the game.

Description

Interaction method, interaction device, electronic equipment and readable storage medium
Technical Field
The present application relates to the field of game technologies, and in particular, to an interaction method, an interaction device, an electronic device, and a readable storage medium.
Background
Virtual reality technology creates a realistic virtual environment by technical means. To achieve immersion, virtual reality combines the sensory channels of the human body, such as vision, hearing, and touch, to realistically reproduce the real world in digitized form. Hardware devices commonly used in present-stage virtual reality can be roughly divided into four types: modeling devices (e.g., 3D scanners), three-dimensional visual display devices (e.g., head-mounted stereoscopic displays), sound devices (e.g., three-dimensional sound systems), and interaction devices (e.g., eye-tracking devices).
As virtual reality technology continues to mature, head-mounted stereoscopic displays (i.e., head-display devices) are receiving increasing attention. Because the visible space of a head-mounted stereoscopic display can be extended almost without limit, it allows the user to operate in a wide visual field. By sending optical signals to the eyes, various head-display devices can achieve different effects such as virtual reality (VR), augmented reality (AR), and mixed reality (MR).
Currently, user interaction with a head-mounted stereoscopic display may be accomplished through a handle, eye tracking, or the like; for example, by tracking the player's eye movement data, a particular button, application, or list item can be selected with the player's gaze. However, during a game, eye movement data can only be used to select specific content. If a player wants to communicate with the player controlling a certain virtual character, the player must select the corresponding control by means of eye movement data to enter a chat interface and then open a chat window in that interface. If the player is in the middle of a game at that time, timely communication with other players is impossible, which affects operation efficiency during the game.
Disclosure of Invention
Accordingly, an object of the present application is to provide an interaction method, an interaction device, an electronic device, and a readable storage medium, in which a first virtual character can trigger individual interaction with a second virtual character through a gazing operation in the game, so that the operation steps for triggering interaction between virtual characters are simplified, and the response frequency and data processing load of the head-display device during the game can be reduced.
An embodiment of the application provides an interaction method in which a graphical user interface is provided through a first head-display device; at least part of a game scene is displayed in the graphical user interface; the game scene at least comprises a first virtual character controlled by the first head-display device. The interaction method comprises the following steps:
in response to a first gazing event directed at the first virtual character, displaying a first gazing identification at an associated position of a second virtual character that triggered the first gazing event, the triggering second virtual character being indicated by the first gazing identification, wherein the second virtual character is a virtual character controlled by a second head-display device, and the first gazing event is triggered by a first gazing operation performed through the second head-display device and directed at the first virtual character;
in response to a second gazing operation directed at the second virtual character, establishing an interaction channel between the first head-display device and the second head-display device, so that the first virtual character and the second virtual character interact individually in the game scene.
An embodiment of the application also provides an interaction device, wherein a graphical user interface is provided through a first head-display device; at least part of a game scene is displayed in the graphical user interface; the game scene at least comprises a first virtual character controlled by the first head-display device. The interaction device comprises:
an identification display module, configured to display, in response to a first gazing event directed at the first virtual character, a first gazing identification at an associated position of a second virtual character that triggered the first gazing event, the triggering second virtual character being indicated by the first gazing identification, wherein the second virtual character is a virtual character controlled by a second head-display device, and the first gazing event is triggered by a first gazing operation performed through the second head-display device and directed at the first virtual character; and
a first interaction triggering module, configured to establish, in response to a second gazing operation directed at the second virtual character, an interaction channel between the first head-display device and the second head-display device, so that the first virtual character and the second virtual character interact individually in the game scene.
An embodiment of the application also provides an electronic device, comprising a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate via the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the interaction method described above.
Embodiments of the present application also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the interaction method described above.
According to the interaction method, the interaction device, the electronic device, and the readable storage medium provided by the embodiments of the application, in response to a first gazing event directed at a first virtual character, a first gazing identification is displayed at an associated position of the second virtual character that triggered the first gazing event, so that the triggering character is indicated by the identification; and in response to a second gazing operation directed at the second virtual character, an interaction channel is established between the first head-display device and the second head-display device, so that the first virtual character and the second virtual character interact individually in the game scene. The first virtual character can thus trigger individual interaction with the second virtual character through a gazing operation in the game, which simplifies the operation steps for triggering interaction between virtual characters and reduces the response frequency and data processing load of the head-display device during the game.
In order to make the above objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered limiting in scope; other related drawings may be obtained from these drawings by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of an interaction method according to an embodiment of the present application;
FIG. 2 is a first schematic diagram of a graphical user interface according to an embodiment of the present application;
FIG. 3 is a second schematic diagram of a graphical user interface according to an embodiment of the present application;
FIG. 4 is a third schematic diagram of a graphical user interface according to an embodiment of the present application;
FIG. 5 is a fourth schematic diagram of a graphical user interface according to an embodiment of the present application;
FIG. 6 is a fifth schematic diagram of a graphical user interface according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of an interaction device according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions, and advantages of the embodiments of the present application more apparent, the technical solutions are described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the present application. The components of the embodiments generally described and illustrated in the figures may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments. Based on the embodiments of the present application, every other embodiment obtained by a person skilled in the art without inventive effort falls within the scope of protection of the present application.
Virtual character:
Refers to a dynamic object that can be controlled in a game scene. Alternatively, the dynamic object may be a virtual character, a virtual animal, a cartoon character, or the like. The virtual character is a character that a player controls through an input device, an artificial intelligence (AI) set up by training to fight in the virtual environment, or a non-player character (NPC) set up in the game scene fight. Optionally, the virtual character is an avatar competing in the game scene. Optionally, the number of virtual characters in a game scene fight is preset, or is dynamically determined according to the number of clients joining the fight, which is not limited by the embodiments of the present application. In one possible implementation, a user can control a virtual character to move in the virtual scene, e.g., control the virtual character to run, jump, or crawl, and can also control the virtual character to fight other virtual characters using skills, virtual props, and the like provided by the application.
Game picture:
In an optional implementation, the game picture is the display picture corresponding to the virtual scene displayed by the terminal device, and it may include virtual characters that execute game logic in the virtual scene, such as game characters, NPC characters, and AI characters.
The interaction method in one embodiment of the present disclosure may be run on the terminal device or the server. The terminal device may be a local terminal device. When the interaction method runs on a server, the interaction method can be realized and executed based on a cloud interaction system, wherein the cloud interaction system comprises the server and the client device.
In an alternative embodiment, various cloud applications may run under the cloud interaction system, for example, cloud games. Taking a cloud game as an example, a cloud game refers to a game mode based on cloud computing. In the running mode of a cloud game, the body running the game program is separated from the body presenting the game picture: the storage and running of the interaction method are completed on the cloud game server, while the client device is used for receiving and sending data and presenting the game picture. For example, the client device may be a display device with a data transmission function close to the user side, such as a mobile terminal, a television, a computer, or a palm computer; however, the terminal device performing the information processing is the cloud game server. When playing, the player operates the client device to send an operation instruction to the cloud game server; the server runs the game according to the instruction, encodes and compresses data such as the game picture, and returns the data to the client device through the network; finally, the client device decodes the data and outputs the game picture.
In an alternative embodiment, the terminal device may be a head-display device. Taking a game as an example, the head-display device stores the game program and is used to present the game picture. The head-display device interacts with the player through a graphical user interface; that is, the game program is conventionally downloaded, installed, and run on the electronic device. The head-display device may provide the graphical user interface to the player in various ways; for example, the interface may be rendered on a display screen of the terminal, or provided to the player by holographic projection. For example, the head-display device may include a display screen for presenting a graphical user interface including the game picture, and a processor for running the game, generating the graphical user interface, and controlling the display of the graphical user interface on the display screen.
It has been found that, at present, user interaction with a head-mounted stereoscopic display may be achieved by means of a handle, eye tracking, or the like; for example, by tracking the player's eye movement data, a particular button, application, or list item can be selected with the player's gaze. However, during a game, eye movement data can only be used to select specific content. If a player wants to communicate with the player controlling a certain virtual character, the player must select the corresponding control by means of eye movement data to enter a chat interface and then open a chat window in that interface. If the player is in the middle of a game at that time, timely communication with other players is impossible, which affects operation efficiency during the game.
Based on the above, an embodiment of the application provides an interaction method that can trigger individual interaction between the first virtual character and the second virtual character in response to a gazing operation of the first virtual character, thereby simplifying the steps for triggering interaction between virtual characters during the game, reducing unnecessary operations performed to trigger communication between players, and reducing the response frequency and data processing load of the head-display device.
Referring to fig. 1, fig. 1 is a flowchart of an interaction method according to an embodiment of the application. A graphical user interface is provided through a first head-display device; at least part of a game scene is displayed in the graphical user interface; the game scene comprises a first virtual character controlled by the first head-display device. As shown in fig. 1, the interaction method provided by the embodiment of the present application includes:
S101, in response to a first gazing event directed at the first virtual character, displaying a first gazing identification at an associated position of the second virtual character that triggered the first gazing event, the triggering second virtual character being indicated by the first gazing identification.
S102, an interaction channel is established between the first head display device and the second head display device in response to a second gazing operation aiming at the second virtual character, so that the first virtual character and the second virtual character can interact independently in the game scene.
According to the interaction method provided by the embodiment of the application, on the basis that the second virtual character has triggered the first gazing event directed at the first virtual character, individual interaction between the first virtual character and the second virtual character can be triggered in a timely manner in the game scene in response to the second gazing operation directed at the second virtual character, which simplifies the steps for triggering individual interaction between virtual characters during the game and reduces the response frequency and data processing load of the head-display device. To prevent a player from unilaterally triggering interaction and thereby disturbing other players' games, the interaction channel is established between the first head-display device and the second head-display device only when the virtual characters of both players show an intention to interact; this avoids unilaterally triggered interaction interrupting the game of a player who has no such intention. Moreover, if the first virtual character does not wish to interact with the second virtual character, the first virtual character can decline individual interaction without performing any operation, which reduces the operations the first virtual character must perform to decline the second virtual character and further reduces the response frequency and data processing load of the head-display device.
Here, the head-display device may provide a graphical user interface to the player, in which at least a portion of a game scene is displayed; the game scene may include the first virtual character controlled by the first head-display device, NPCs, and scene elements (e.g., virtual houses, trees, etc.).
Here, the game scene also differs for different virtual reality technologies. When the head-display device is a VR device, the game scene is a completely virtual three-dimensional game image constructed by the VR device. When the head-display device is an AR device, the game scene is a three-dimensional game scene constructed by the AR device on the basis of a real scene, i.e., it comprises both the real scene and a virtual three-dimensional game image. When the head-display device is an MR device, the principle is similar to that of an AR device: the game scene is a three-dimensional game scene constructed by the MR device on the basis of a real scene, i.e., it comprises both the real scene and a virtual three-dimensional game image.
During the game, the head-display device can track the player's eye movement data so as to determine, by identifying that data, the relevant operations the player performs in the game. For example, gazing events triggered by the player's virtual character, gazing operations performed, game controls selected, and game operations performed may all be determined by tracking the player's eye movement data.
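The application does not specify how raw eye movement data is turned into a recognized gazing operation. As an illustrative sketch only (the function names, the dwell time, and the radius threshold are assumptions, not part of the disclosure), a simple dwell-based detector might report a gazing operation once the gaze point stays within a small region long enough:

```python
# Hypothetical sketch, not the patent's implementation: a dwell-based
# detector that turns raw eye-movement samples into a "gazing operation"
# once the gaze point stays within `radius` for at least `dwell` seconds.
from dataclasses import dataclass

@dataclass
class GazeSample:
    t: float  # timestamp in seconds
    x: float  # gaze point in normalized screen/scene coordinates
    y: float

def detect_gaze_operation(samples, radius=0.05, dwell=0.5):
    """Return (x, y) of a detected gazing operation, or None.

    A gazing operation is reported when consecutive samples stay within
    `radius` of the anchor sample for at least `dwell` seconds.
    """
    if not samples:
        return None
    anchor = samples[0]
    for s in samples[1:]:
        if (s.x - anchor.x) ** 2 + (s.y - anchor.y) ** 2 > radius ** 2:
            anchor = s  # gaze moved away: restart the dwell window
        elif s.t - anchor.t >= dwell:
            return (anchor.x, anchor.y)
    return None
```

A steady run of samples at one point would be reported as a gazing operation, while jittery saccades keep resetting the dwell window and produce nothing.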
In step S101, during the game, the head-display device may monitor in real time whether there is a first gazing event directed at the first virtual character. If such an event is detected, the head-display device displays, in response, the first gazing identification at the associated position of the second virtual character that triggered the event.
Here, considering that the first virtual character cannot give accurate and timely feedback to the second virtual character if it is unclear who is gazing, the second virtual character is marked with the first gazing identification when it triggers a first gazing event directed at the first virtual character, so as to prompt the first virtual character (the player controlling it); that is, the second virtual character that triggered the first gazing event is indicated by the first gazing identification.
The second virtual character is a virtual character controlled by a second head-display device, and the first gazing event is triggered by a first gazing operation performed through the second head-display device and directed at the first virtual character.
Here, on the premise that the second virtual character has triggered the first gazing event directed at the first virtual character, if the first virtual character "wishes" to interact individually with the second virtual character, the two parties (i.e., the first virtual character and the second virtual character) can form "mutual gaze" by the first character "looking back" at the second, thereby triggering individual interaction between them. Thus, the head-display device may trigger individual interaction between the first virtual character and the second virtual character through a second gazing operation applied to the second virtual character.
In step S102, in response to the second gazing operation applied by the first virtual character to the second virtual character, an interaction channel is established between the first head-display device controlling the first virtual character and the second head-display device controlling the second virtual character, so that the two characters can interact individually in the game scene through the interaction channel.
Here, considering that many virtual characters may exist in the game scene, in order to identify without deviation the second virtual character at which the second gazing operation is directed, the target of the second gazing operation can be accurately identified by means of the line-of-sight drop point of the gaze line generated from the eye movement data of the player controlling the first virtual character.
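The application does not spell out how the line-of-sight drop point is matched to a particular character. One hypothetical way (the function name, character representation, and angular threshold are illustrative assumptions) is to compare the gaze ray against candidate character positions and pick the one closest to the line of sight:

```python
# Hypothetical sketch: resolve which virtual character a gaze ray hits by
# picking the character whose direction deviates least from the gaze line.
import math

def pick_gazed_character(origin, direction, characters, max_angle_deg=5.0):
    """Return the id of the character closest to the gaze ray, or None.

    `characters` maps character ids to 3D positions; a character is a
    candidate when the angle between the gaze direction and the vector
    from `origin` to the character is below `max_angle_deg`.
    """
    def norm(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    d = norm(direction)
    best_id, best_angle = None, max_angle_deg
    for cid, pos in characters.items():
        to_char = norm(tuple(p - o for p, o in zip(pos, origin)))
        cos_a = max(-1.0, min(1.0, sum(a * b for a, b in zip(d, to_char))))
        angle = math.degrees(math.acos(cos_a))
        if angle < best_angle:  # keep the character nearest the gaze line
            best_id, best_angle = cid, angle
    return best_id
```

A real engine would instead ray-cast against character colliders, but the angular test illustrates how a drop point disambiguates between nearby characters.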
The first virtual character and the second virtual character can interact individually in the game scene through different interaction modes; specifically, they may interact individually through voice interaction, text interaction, expression interaction, friend-adding interaction, or limb interaction. A suitable interaction mode may be selected according to the player's game stage, game habits, and so on, which is not limited here.
Here, the "interaction channel" refers to a virtual carrier that assists the first virtual character in interacting with the second virtual character; the interaction channel may exist in various forms, for example, as a "link", "mail", or "web page".
Referring to fig. 2, fig. 2 is a schematic diagram of a graphical user interface according to an embodiment of the application. As shown in fig. 2, a game scene 2b is displayed in the graphical user interface 2a. The game scene 2b includes a first virtual character 2c controlled by the first head-display device and a second virtual character 2d controlled by the second head-display device; when the second virtual character 2d triggers a first gazing event directed at the first virtual character 2c, a gazing identification 2e is displayed at an associated position of the second virtual character 2d (e.g., around the second virtual character 2d).
Here, considering that "mutual gaze" may form between virtual characters inadvertently during the game, immediately triggering individual interaction whenever "mutual gaze" forms could cause "false triggering"; therefore, to avoid this problem, the "mutual gaze" time between virtual characters needs to be further constrained.
In one embodiment, step S102 includes:
in response to a second gazing operation directed at the second virtual character, establishing an interaction channel between the first head-display device and the second head-display device when the gaze duration of the second gazing operation reaches a gaze time threshold.
In this step, in response to a second gazing operation performed on the second virtual character, the gaze duration of the operation is detected. When the gaze duration reaches the gaze time threshold, the first virtual character and the second virtual character can be considered to have a demand for individual interaction; at this point an interaction channel can be established between the first head-display device and the second head-display device, so that the two characters interact individually in the game scene through it. In this way, the first and second virtual characters are prevented from being "forced" into interaction by a "falsely triggered" message interaction.
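The thresholded "mutual gaze" logic of this step can be sketched as a small session object. This is a rough illustration only; the class name, callbacks, and default threshold are assumptions, not the patented implementation:

```python
# Hypothetical sketch of the mutual-gaze trigger: the first gazing event
# must already have occurred, and the return gaze must then last for at
# least `gaze_time_threshold` seconds before the channel is established.
class MutualGazeSession:
    def __init__(self, gaze_time_threshold=1.0):
        self.threshold = gaze_time_threshold
        self.first_gaze_event = False
        self.return_gaze_start = None
        self.channel_established = False

    def on_first_gaze_event(self):
        # The second character gazed at the first: show the first
        # gazing identification at its associated position here.
        self.first_gaze_event = True

    def on_return_gaze(self, t):
        """Call repeatedly while the first character gazes back at time t."""
        if not self.first_gaze_event or self.channel_established:
            return self.channel_established
        if self.return_gaze_start is None:
            self.return_gaze_start = t
        if t - self.return_gaze_start >= self.threshold:
            self.channel_established = True  # open the interaction channel
        return self.channel_established

    def on_gaze_break(self):
        # Gaze interrupted before the threshold: restart the timer,
        # preventing a brief accidental glance from "false triggering".
        self.return_gaze_start = None
```

Note that a gaze break resets the timer, which is exactly the guard against the accidental "mutual gaze" discussed above.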
Here, as the gaze duration of the second gazing operation increases, the first and second virtual characters come closer to triggering individual interaction. To let the first virtual character conveniently know the triggering progress of the individual interaction, the display effect of the gazing identification can be changed as a prompt.
In one embodiment, the interaction method further comprises:
During establishment of the interaction channel, a first identification parameter of the first gazing identification is adjusted in real time according to the gaze duration of the second gazing operation, and the first gazing identification is displayed with the real-time-adjusted first identification parameter, so that the display effect of the first gazing identification changes.
In this step, while the interaction channel is being established for the first and second virtual characters, a first identification parameter of the first gazing identification can be adjusted in real time according to the gaze duration of the second gazing operation applied by the first virtual character, and the adjusted first gazing identification is displayed at the associated position of the second virtual character according to the real-time-adjusted parameter, thereby changing the display effect of the first gazing identification. Changing the displayed effect prompts the first virtual character (or its controlling player) about the triggering progress of individual interaction with the second virtual character (or its controlling player).
The first identification parameter comprises one or more of transparency, brightness, color, and filling proportion, where the filling proportion is the proportion of the filled area in the first gazing identification; the transparency is inversely related to the gaze duration, while the brightness and the filling proportion are positively correlated with the gaze duration.
When the first identification parameter comprises transparency, the transparency of the first gazing identification displayed by the adjusted first identification parameter is lower than the transparency of the first gazing identification displayed by the first identification parameter before adjustment; that is, the transparency of the first gaze identification is inversely related to the gaze duration.
And/or,
When the first identification parameter includes a color, the color of the first gaze identification displayed with the adjusted first identification parameter is different from the color of the first gaze identification displayed with the first identification parameter before adjustment; for example, the first gaze identification displayed with the first identification parameter before adjustment is white, then the first gaze identification displayed with the first identification parameter after adjustment is red.
And/or the number of the groups of groups,
When the first identification parameter comprises brightness, the brightness of the first fixation identification displayed by the adjusted first identification parameter is higher than the brightness of the first fixation identification displayed by the first identification parameter before adjustment; that is, the brightness of the first gaze identification is positively correlated with the gaze duration.
And/or the number of the groups of groups,
When the first identification parameter comprises a filling proportion, the filling proportion of the first gazing identification displayed by the adjusted first identification parameter is higher than that of the first gazing identification displayed by the first identification parameter before adjustment; that is, the fill fraction of the first gaze identification is positively correlated with the gaze duration.
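As an illustrative sketch only (not part of the claimed method; the `FULL_TRIGGER_SECONDS` constant and the function name are assumptions), the real-time adjustment described above can be modeled as a simple interpolation over the gaze duration, with transparency falling and brightness and fill proportion rising as the gaze continues:

```python
# Hypothetical sketch: map gaze duration to first-identification parameters.
# Transparency is inversely related to the duration; brightness and fill
# proportion are positively related; the color flips once fully triggered.

FULL_TRIGGER_SECONDS = 3.0  # assumed time needed to establish the channel


def gaze_identification_params(gaze_duration: float) -> dict:
    """Return display parameters for a given gaze duration in seconds."""
    # Clamp progress toward channel establishment into [0, 1].
    progress = max(0.0, min(gaze_duration / FULL_TRIGGER_SECONDS, 1.0))
    return {
        "transparency": 1.0 - progress,   # inversely correlated
        "brightness": progress,           # positively correlated
        "fill_proportion": progress,      # positively correlated
        "color": "red" if progress >= 1.0 else "white",
    }
```

With these assumed values, at the start of the gaze the identification is fully transparent and white, and once the gaze duration reaches the trigger time it is opaque, fully filled, and red.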
Referring to fig. 3, fig. 3 is a second schematic diagram of a graphical user interface according to an embodiment of the application. As shown in fig. 3, a game scene 3b is displayed in the graphical user interface 3a. The game scene 3b includes a first virtual character 3c controlled by a first head-display device and a second virtual character 3d controlled by a second head-display device. When the second virtual character 3d triggers a first gazing event directed at the first virtual character 3c, a first gaze identification 3e is displayed at an associated position of the second virtual character 3d; an interaction channel is established between the first head-display device and the second head-display device while a second gazing operation directed at the second virtual character 3d is performed; and during establishment of the interaction channel, as the gaze duration increases, the fill proportion of the first gaze identification 3e (taking the fill proportion as the example first identification parameter) is adjusted in real time, from a first fill proportion 3e-1 to a second fill proportion 3e-2.
Here, during establishment of the interaction channel, if the first virtual character stops "watching" the second virtual character (that is, suspends the second gazing operation), or the second virtual character stops "watching" the first virtual character (that is, suspends triggering the first gazing event), the triggering of the individual interaction is regarded as suspended. In this case, to let the first virtual character know that the triggering of the individual interaction has been suspended, the display special effect of the gaze identification can be changed as a prompt.
In one embodiment, the interaction method further comprises:
S21, in response to the first virtual character suspending the second gazing operation during establishment of the interaction channel, counting the suspension duration of the second gazing operation.
S22, adjusting a first identification parameter of the first gaze identification in real time according to the suspension duration, and displaying the first gaze identification according to the first identification parameter adjusted in real time, so as to change the display special effect of the first gaze identification.
In this step, according to the suspension duration for which the first virtual character has suspended the second gazing operation, a first identification parameter of the first gaze identification is adjusted in real time, and the adjusted first gaze identification is displayed at the associated position of the second virtual character according to the first identification parameter adjusted in real time, thereby changing the display special effect of the first gaze identification. The changed special effect prompts that the triggering of the individual interaction between the first virtual character (or the player controlling it) and the second virtual character (or the player controlling it) has been suspended.
Wherein the first identification parameter comprises one or more of transparency, brightness, color, and fill proportion; the fill proportion is the proportion of the filled area within the first gaze identification; the transparency is positively correlated with the suspension duration; the brightness and the fill proportion are inversely correlated with the suspension duration.
When the first identification parameter comprises transparency, the transparency of the first gaze identification displayed with the adjusted first identification parameter is higher than that displayed with the first identification parameter before adjustment; that is, the transparency of the first gaze identification is positively correlated with the suspension duration.
And/or, when the first identification parameter comprises color, the color of the first gaze identification displayed with the adjusted first identification parameter differs from that displayed with the first identification parameter before adjustment; for example, if the first gaze identification displayed with the first identification parameter before adjustment is red, the first gaze identification displayed with the adjusted first identification parameter is white.
And/or, when the first identification parameter comprises brightness, the brightness of the first gaze identification displayed with the adjusted first identification parameter is lower than that displayed with the first identification parameter before adjustment; that is, the brightness of the first gaze identification is inversely correlated with the suspension duration.
And/or, when the first identification parameter comprises a fill proportion, the fill proportion of the first gaze identification displayed with the adjusted first identification parameter is lower than that displayed with the first identification parameter before adjustment; that is, the fill proportion of the first gaze identification is inversely correlated with the suspension duration.
Here, as the suspension duration keeps increasing, once it exceeds a suspension time threshold, the first virtual character may be regarded as having abandoned the individual interaction with the second virtual character, and the establishment of the interaction channel may be terminated.
In one embodiment, the interaction method further comprises: in response to the suspension duration being greater than a suspension time threshold, terminating the establishment of the interaction channel between the first head-display device and the second head-display device, and canceling the display of the first gaze identification at the associated position of the second virtual character.
In this step, in response to the duration for which the first virtual character has suspended the second gazing operation being greater than the suspension time threshold, the first virtual character may be considered to have abandoned the individual interaction with the second virtual character. At this point, the establishment of the interaction channel between the first head-display device and the second head-display device may be terminated, and the display of the first gaze identification may be canceled at the associated position of the second virtual character, to indicate that both virtual characters have abandoned the individual interaction.
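A minimal sketch of the suspension handling just described, under stated assumptions (the threshold constant, function name, and parameter keys are all hypothetical): while the gaze is suspended, the identification fades back out, and once the suspension exceeds the threshold, channel establishment is terminated and the identification is removed.

```python
# Hypothetical sketch of suspension handling. During a suspension the
# transparency rises while brightness and fill proportion fall; past the
# threshold the channel establishment is abandoned entirely.

SUSPEND_TIME_THRESHOLD = 2.0  # assumed suspension time threshold, seconds


def on_gaze_suspended(suspend_duration: float, params: dict) -> dict:
    """Return updated display state for a given suspension duration."""
    if suspend_duration > SUSPEND_TIME_THRESHOLD:
        # Individual interaction is regarded as abandoned.
        return {"channel": "terminated", "identification_visible": False}
    fade = suspend_duration / SUSPEND_TIME_THRESHOLD  # grows toward 1.0
    return {
        "channel": "establishing",
        "identification_visible": True,
        "transparency": min(params["transparency"] + fade, 1.0),  # rises
        "brightness": max(params["brightness"] - fade, 0.0),      # falls
        "fill_proportion": max(params["fill_proportion"] - fade, 0.0),
    }
```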
Here, during establishment of the interaction channel, the second virtual character may likewise suspend the first gazing event by suspending its gazing operation; in response to the second virtual character suspending the first gazing event, the suspension duration of the first gazing event is counted, the first identification parameter of the first gaze identification is adjusted in real time according to that suspension duration, and the first gaze identification is displayed according to the first identification parameter adjusted in real time, so as to change its display special effect.
If the first virtual character (and/or the second virtual character) only temporarily suspends the second gazing operation (and/or the first gazing event) because of problems such as line-of-sight occlusion or interference, then after the first virtual character performs a gazing operation on the second virtual character again, the adjusted first identification parameter may be reset back to the first identification parameter from before the adjustment, so that the first gaze identification continues to be displayed with its pre-adjustment special effect.
In one embodiment, the interaction method further comprises:
During the real-time adjustment of the first identification parameter according to the suspension duration, in response to a third gazing operation directed at the second virtual character, resetting the first identification parameter to the first identification parameter from before the real-time adjustment, and displaying the first gaze identification according to the first identification parameter from before the real-time adjustment.
In this step, if, during the real-time adjustment of the first identification parameter according to the suspension duration, a third gazing operation is applied to the second virtual character again, the first virtual character and the second virtual character can once again "look at each other" in the game scene. At this point, in response to the third gazing operation directed at the second virtual character, the first identification parameter of the first gaze identification is "reset": the adjusted first identification parameter reverts to the first identification parameter from before the real-time adjustment, and the first gaze identification is displayed at the associated position of the second virtual character according to that pre-adjustment parameter.
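The save-and-restore behavior above can be sketched as a small state holder; this is an illustrative sketch only, and the class and method names are assumptions rather than part of the claimed method:

```python
# Hypothetical sketch: remember the parameters from before a
# suspension-driven adjustment, and restore them when a third gazing
# operation resumes the mutual gaze.

class GazeIdentificationState:
    """Tracks first-identification parameters across gaze suspensions."""

    def __init__(self, params: dict):
        self.params = params
        self._saved = None

    def suspend(self):
        # Save the pre-adjustment parameters once per suspension.
        if self._saved is None:
            self._saved = dict(self.params)

    def adjust(self, params: dict):
        # Real-time adjustment while the suspension lasts.
        self.params = params

    def resume(self) -> dict:
        # Third gazing operation detected: reset to the saved parameters.
        if self._saved is not None:
            self.params = self._saved
            self._saved = None
        return self.params
```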
Here, the first virtual character may "actively" request individual interactions with other virtual characters in addition to triggering individual interactions with the second virtual character in a "passive" manner during the game.
In one embodiment, the interaction method further comprises:
S201, in response to a fourth gazing operation directed at a third virtual character, displaying a prompt identification in the graphical user interface, wherein the prompt identification is used to prompt that the fourth gazing operation is in effect.
In this step, in response to the fourth gazing operation applied to the third virtual character, a prompt identification is displayed in the graphical user interface provided by the first head-display device, so that the prompt identification informs the first virtual character (or the player controlling it) that the fourth gazing operation is in effect. This prevents the player from repeatedly applying the fourth gazing operation through the first head-display device while unsure whether it has taken effect, which would put data processing pressure on the first head-display device.
S202, during the period that the fourth gazing operation is effective, in response to a second gazing event for the first virtual character, displaying a second gazing identifier at an associated position of the third virtual character triggering the second gazing event, and establishing an interaction channel between the first head display device and the third head display device, so that the first virtual character and the third virtual character interact independently in the game scene.
In this step, if the first head-display device detects the second gazing event directed at the first virtual character while the fourth gazing operation is in effect, it can, in response to that event, display the second gaze identification at the associated position of the third virtual character triggering the second gazing event; and an interaction channel is established between the first head-display device controlling the first virtual character and the third head-display device controlling the third virtual character, so that the first virtual character and the third virtual character can interact individually in the game scene through the interaction channel.
Here, the second gaze identification is used to "mark" the third virtual character; that is, the third virtual character triggering the second gazing event is indicated by the second gaze identification.
Wherein the third virtual character is a virtual character controlled by the third head-display device, and the second gazing event is triggered by a fifth gazing operation performed by the third head-display device and directed to the first virtual character.
In one embodiment, the interaction method further comprises:
During establishment of the interaction channel, adjusting the second identification parameter of the second gaze identification and the third identification parameter of the prompt identification in real time according to the trigger duration of the second gazing event, and displaying the second gaze identification and the prompt identification according to the second identification parameter and the third identification parameter adjusted in real time, so as to change the display special effects of the second gaze identification and the prompt identification.
In this step, during establishment of the interaction channel between the first virtual character and the third virtual character, the second identification parameter of the second gaze identification and the third identification parameter of the prompt identification can be adjusted in real time according to the trigger duration of the second gazing event triggered by the third virtual character, and the adjusted second gaze identification is displayed at the associated position of the third virtual character according to the second identification parameter adjusted in real time, thereby changing the display special effect of the second gaze identification.
Meanwhile, the adjusted prompt identification is displayed at the associated position of the first virtual character according to the third identification parameter adjusted in real time, thereby changing the display special effect of the prompt identification. The changing special effects of the second gaze identification and the prompt identification prompt the first virtual character (or the player controlling it) on the triggering progress of the individual interaction with the third virtual character (or the player controlling it).
Wherein the second identification parameter and the third identification parameter each comprise one or more of transparency, brightness, color, and fill proportion; the fill proportion is the proportion of the filled area within the respective identification; the transparency is inversely correlated with the trigger duration; the brightness and the fill proportion are positively correlated with the trigger duration.
In the game, a player can choose among several interaction modes for individual interaction. To avoid the player changing the interaction mode midway through an individual interaction, which would disrupt the interaction process and increase the response frequency of the head-display device, an interaction trigger interface is displayed when the first virtual character and the second virtual character are allowed to interact individually in the game scene. The player completes the selection of an interaction mode through this interface, so that the individual interaction proceeds in the mode best suited to the current stage of the game; this reduces the operations the player would otherwise need to change modes, and thereby reduces the response frequency of the head-display device.
In one embodiment, after establishing the interaction channel, the interaction method further comprises:
S1, displaying an interaction trigger interface in the graphical user interface.
In this step, after the interaction channel is established, to facilitate the player's selection of the interaction mode for message interaction, an interaction trigger interface is displayed in the graphical user interface; the player can complete the selection of the interaction mode by means of this interface.
The interaction trigger interface comprises a plurality of mode selection areas; the player completes the selection of an interaction mode by selecting a mode selection area.
S2, in response to a mode selection operation performed in the interaction trigger interface, determining a target selection area selected by the mode selection operation.
In this step, the player selects a mode selection area presented in the interaction trigger interface through a mode selection operation, thereby selecting an interaction mode. Specifically, in response to a mode selection operation performed by the player in the interaction trigger interface, the target selection area selected by that operation is determined.
In the scheme provided by the application, the player can trigger the mode selection operation by clicking a selection area. Specifically, the player can click the area with a finger, a mouse, a preset key combination, or the like; the area can also be selected by pressing keys such as the L1 key, the R1 key, the ctrl key, the alt key, or the A key on a handle, and the preset keys can be set manually according to the player's requirements. Alternatively, the player can click the area by voice control or by a preset trigger action.
After the player finishes selecting the target selection area, the first head-display device can control the first virtual character to interact individually with the second virtual character in the game scene according to the interaction mode associated with the target selection area.
S3, controlling the first virtual character to interact with the second virtual character independently in the game scene according to the interaction mode associated with the target selection area.
In this step, individual interaction between players in the game is reflected in the graphical user interface as individual interaction between the virtual characters they control; that is, an individual interaction between the first player and the second player during the game is in fact presented as an individual interaction between the first virtual character and the second virtual character. Thus, after the selection of the target selection area is completed, the first virtual character is controlled to interact individually with the second virtual character in the game scene according to the interaction mode associated with the target selection area.
For example, taking voice interaction as the interaction mode associated with the target selection area, the first virtual character is controlled to interact individually with the second virtual character in the game scene by voice; taking expression sending as the interaction mode associated with the target selection area, the first virtual character is controlled to interact individually with the second virtual character in the game scene by sending expressions.
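As an illustrative sketch (the area names, mode names, and function are assumptions for illustration, not taken from the application), steps S2 and S3 amount to resolving the selected target area to its associated interaction mode:

```python
# Hypothetical sketch: map a target selection area to its associated
# interaction mode and start the individual interaction in that mode.

MODE_SELECTION_AREAS = {
    "area_voice": "voice",      # voice interaction
    "area_emote": "emote",      # expression (emote) sending
    "area_text": "text",        # text messages
}


def start_individual_interaction(selected_area: str) -> str:
    """Return a description of the interaction mode to use."""
    mode = MODE_SELECTION_AREAS.get(selected_area)
    if mode is None:
        raise ValueError(f"unknown selection area: {selected_area}")
    return f"interacting via {mode}"
```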
Referring to fig. 4, fig. 4 is a third diagram of a graphical user interface according to an embodiment of the application. As shown in fig. 4, a game scene 4b is displayed in the graphical user interface 4a, the game scene 4b includes a first virtual character 4c controlled by a first head display device and a second virtual character 4d controlled by a second head display device, after the establishment of an interaction channel between the first head display device and the second head display device is completed, an interaction trigger interface 4e is displayed in the graphical user interface 4a, the interaction trigger interface 4e includes a plurality of mode selection areas 4f, and a player can select the mode selection areas 4f in the interaction trigger interface 4e through mode selection operation so as to realize the selection of an interaction mode.
Here, to avoid occluding the player's graphical user interface, the player may also be allowed to complete the selection of the interaction mode through a gesture action, without displaying the interaction trigger interface.
In one embodiment, after establishing an interaction channel between the first head display device and the second head display device, the interaction method further includes:
In response to detecting a first gesture, controlling the first virtual character to interact individually with the second virtual character in the game scene according to the interaction mode associated with the first gesture.
In this step, after the first virtual character and the second virtual character are allowed to interact individually in the game through the interaction channel, the player controlling the first virtual character can complete the selection of the interaction mode by performing a specific gesture action. Specifically, in response to the head-display device detecting a first gesture performed by the player controlling the first virtual character, the first virtual character is controlled to interact individually with the second virtual character in the game scene according to the interaction mode associated with that gesture.
The interaction mode corresponding to each gesture action is preset, and these correspondences may be displayed, as appropriate, in the graphical user interface to prompt the player, so that a mistaken gesture does not prevent the player from selecting the expected interaction mode.
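A minimal sketch of the preset gesture-to-mode table just described; the gesture names and modes are assumptions for illustration. An unknown gesture selects nothing, so a mistaken gesture cannot start an unintended interaction mode:

```python
# Hypothetical sketch: preset table mapping detected gesture actions to
# interaction modes; unrecognized gestures select no mode.

GESTURE_TO_MODE = {
    "hand_to_ear": "voice",   # cup hand to ear -> voice interaction
    "thumbs_up": "emote",     # thumbs up -> expression sending
    "wave": "text",           # wave -> text messages
}


def select_mode_by_gesture(gesture: str):
    """Return the preset interaction mode for a gesture, or None."""
    return GESTURE_TO_MODE.get(gesture)
```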
Here, an interaction auxiliary line may be displayed while the first virtual character and the second virtual character are interacting, to prompt that the first virtual character is currently engaged in an individual interaction with the second virtual character.
In one embodiment, after establishing an interaction channel between the first head-display device and the second head-display device, the interaction method further includes: displaying an interaction auxiliary line between the first virtual character and the second virtual character, to prompt, through the interaction auxiliary line, that the first virtual character is currently interacting individually with the second virtual character.
In this step, while the first virtual character interacts individually with the second virtual character, an interaction auxiliary line is displayed to prompt the player controlling the first virtual character that the individual interaction is in progress. Specifically, the interaction auxiliary line is displayed between the first virtual character and the second virtual character, so that the line indicates that the first virtual character is currently interacting individually with the second virtual character.
Referring to fig. 5, fig. 5 is a schematic diagram of a graphical user interface according to an embodiment of the application. As shown in fig. 5, a game scene 5b is displayed in the graphical user interface 5a; the game scene 5b includes a first virtual character 5c controlled by a first head-display device and a second virtual character 5d controlled by a second head-display device, and an interaction auxiliary line 5e is displayed between the first virtual character 5c and the second virtual character 5d while they interact individually, so that the interaction auxiliary line 5e indicates that the first virtual character 5c is currently interacting individually with the second virtual character 5d.
Here, after the first virtual character and the second virtual character have completed their individual interaction, the interaction may be ended by means of the first gaze identification. However, a player who is disturbed may gaze at the identification inadvertently; if the individual interaction were ended the instant the player's gaze landed on the identification, the interaction could be force-ended against the player's will. To avoid this, when the player gazes at the first gaze identification, a termination interaction control is additionally provided, and the player can end the individual interaction immediately by triggering that control only when the player actually intends to end it.
In one embodiment, the interaction method further comprises:
S6, in response to a sixth gazing operation applied to the first gaze identification, displaying a termination interaction control in the graphical user interface.
In this step, the first head-display device may display a termination interaction control in the graphical user interface in response to the sixth gazing operation applied to the first gaze identification; that is, when the player controlling the first virtual character gazes at the first gaze identification, a termination interaction control is displayed in the graphical user interface provided by the first head-display device, so that the player can end the individual interaction between the first virtual character and the second virtual character by means of that control.
S7, in response to a first trigger operation directed at the termination interaction control, ending the individual interaction between the first virtual character and the second virtual character.
In this step, the player controlling the first virtual character can apply a first trigger operation to the termination interaction control to end the individual interaction between the first virtual character and the second virtual character. Specifically, in response to the first trigger operation applied to the termination interaction control by that player through the first head-display device, the individual interaction between the first virtual character and the second virtual character is ended immediately.
In the scheme provided by the application, the player can trigger the first trigger operation by clicking the termination interaction control. Specifically, the player can click the control with a finger, a mouse, a preset key combination, or the like; the control can also be triggered by pressing keys such as the L1 key, the R1 key, the ctrl key, the alt key, or the A key on a handle, and the preset keys can be set manually according to the player's requirements. Alternatively, the player can click the termination interaction control by voice control or by a preset trigger action.
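The design point in steps S6 and S7, namely that an inadvertent glance only reveals a control and only an explicit trigger ends the session, can be sketched as a small state machine. This is an illustrative sketch only; the class and method names are assumptions, not part of the claimed method:

```python
# Hypothetical sketch: gazing at the first gaze identification only
# reveals the termination control; the individual interaction ends only
# when that control is explicitly triggered, so an accidental glance
# cannot force-end the session.

class IndividualInteraction:
    def __init__(self):
        self.active = True
        self.termination_control_visible = False

    def on_gaze_at_identification(self):
        # Sixth gazing operation: reveal the termination control.
        self.termination_control_visible = True

    def on_trigger_termination_control(self):
        # First trigger operation: end the interaction immediately,
        # but only if the control has actually been revealed.
        if self.termination_control_visible:
            self.active = False
```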
Referring to fig. 6, fig. 6 is a schematic diagram of a graphical user interface according to an embodiment of the application. As shown in fig. 6, a game scene 6b is displayed in the graphical user interface 6a; the game scene 6b includes a first virtual character 6c controlled by a first head-display device and a second virtual character 6d controlled by a second head-display device, and a first gaze identification 6e is displayed at an associated position of the second virtual character 6d. In response to a sixth gazing operation directed at the first gaze identification 6e, a termination interaction control 6f is displayed in the graphical user interface 6a; at this point, the individual interaction between the first virtual character and the second virtual character can be ended immediately through the termination interaction control 6f.
Here, considering that the players controlling the first virtual character and the second virtual character may have other teammates, and to facilitate multi-player message interaction within a team, the first virtual character and the second virtual character may, while interacting individually, invite virtual characters controlled by other players to join the interaction.
In one embodiment, the interaction method further comprises: displaying an interaction invitation control in the graphical user interface; and in response to a second trigger operation directed at the interaction invitation control, establishing a plurality of interaction channels among a third virtual character, the first virtual character, and the second virtual character, so that the first virtual character, the second virtual character, and the third virtual character conduct group interaction in the game scene.
In this step, while the first virtual character and the second virtual character are interacting individually, an interaction invitation control is displayed in the graphical user interface, so that the player controlling the first virtual character can invite a third virtual character into the interaction through the interaction invitation control.
The player controlling the first virtual character can apply a second trigger operation to the interaction invitation control to invite a third virtual character to join the interaction. Specifically, in response to the second trigger operation applied to the interaction invitation control, a plurality of interaction channels are established among the third virtual character, the first virtual character, and the second virtual character, so that the three virtual characters can conduct group interaction in the game scene.
The third virtual character is a virtual character controlled by a third head-display device.
In the scheme provided by the application, the player can trigger the second trigger operation by clicking the interaction invitation control. Specifically, the player can click the control with a finger, a mouse, a preset key combination, or the like; the control can also be clicked by pressing keys such as the L1 key, the R1 key, the ctrl key, the alt key, or the A key on a handle, and the preset keys can be set manually according to the player's requirements. Alternatively, the player can click the interaction invitation control by voice control or by a preset trigger action.
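The "plurality of interaction channels" among the three participants can be pictured as the set of pairwise channels connecting them. A minimal sketch under stated assumptions (participant names and the function are hypothetical):

```python
# Hypothetical sketch: when a third participant joins, build pairwise
# interaction channels among all participants so they can group-interact.

def build_group_channels(participants: list) -> set:
    """Return the set of pairwise channels between all participants."""
    channels = set()
    for i, a in enumerate(participants):
        for b in participants[i + 1:]:
            # A channel is an unordered pair of participants.
            channels.add(frozenset((a, b)))
    return channels
```

For three participants this yields three channels, one per pair, matching the group interaction described above.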
In another embodiment, the interaction method further comprises: displaying an interaction permission identification in the graphical user interface; and in response to a fourth gazing event directed at the interaction permission identification, establishing a plurality of interaction channels among a third virtual character, the first virtual character, and the second virtual character, so that the first virtual character, the second virtual character, and the third virtual character conduct group interaction in the game scene.
In this step, while the first virtual character and the second virtual character are interacting individually, an interaction permission identification is displayed in the graphical user interface, so that the player controlling the third virtual character can join the interaction between the first virtual character and the second virtual character by gazing at the interaction permission identification through the third head display device.
And responding to a fourth gazing event of the player to which the third virtual character belongs for the interaction permission identification, and establishing a plurality of interaction channels among the third virtual character, the first virtual character and the second virtual character so as to enable the first virtual character, the second virtual character and the third virtual character to conduct group interaction in the game scene.
The third virtual character is a virtual character controlled by a third head display device, and the fourth gazing event is triggered by gazing operation which is executed by the third head display device and is aiming at the interaction permission mark.
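The "plurality of interaction channels" among the three characters can be understood as pairwise channels between their head display devices; a minimal sketch, with illustrative device identifiers:

```python
from itertools import combinations

def establish_group_channels(devices):
    """Return the pairwise channels linking every head display device,
    so the corresponding virtual characters can interact as a group."""
    return [tuple(sorted(pair)) for pair in combinations(devices, 2)]
```

For three devices this yields three channels, one per pair, which is sufficient for group voice, text, or similar interaction among the three characters.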
Here, in order to avoid occlusion of the graphical user interface presented by the first head-mounted device, the invitation of the third virtual character may be completed through a gesture action without displaying the interaction permission identification, the interaction invitation control.
In another embodiment, the interaction method further comprises: and in response to detecting the second gesture, establishing a plurality of interaction channels among a third virtual character, the first virtual character and the second virtual character, so that the first virtual character, the second virtual character and the third virtual character perform group interaction in the game scene.
In this step, while the first virtual character and the second virtual character are interacting individually, the player controlling the first virtual character can complete the invitation of the third virtual character by performing a specific gesture action; specifically, in response to the first head display device detecting a second gesture performed by the first player, a plurality of interaction channels are established among the third virtual character, the first virtual character and the second virtual character, so that the first virtual character, the second virtual character and the third virtual character perform group interaction in the game scene.
The interaction mode corresponding to each gesture action is preset, and the correspondence between gesture actions and interaction modes can be displayed in the graphical user interface to prompt the player, so that a wrong gesture action does not prevent the player from selecting the expected interaction mode.
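A hypothetical mapping from preset gesture actions to interaction modes, where an unrecognized gesture selects nothing (all gesture and mode names below are illustrative, not specified by this application):

```python
# Preset gesture -> interaction mode table; this same table can be rendered
# in the graphical user interface to prompt the player, as described above.
GESTURE_MODES = {
    "wave": "voice",
    "thumbs_up": "add_friend",
    "point": "text",
}

def mode_for_gesture(gesture: str):
    # An unknown (wrong) gesture returns None, so it never selects
    # an unintended interaction mode.
    return GESTURE_MODES.get(gesture)
```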
According to the interaction method provided by the embodiment of the application, in response to a first gaze event for the first virtual character, a first gaze identification is displayed at the associated position of the second virtual character triggering the first gaze event, so as to indicate, through the first gaze identification, the second virtual character triggering the first gaze event; in response to a second gaze operation for the second virtual character, an interaction channel is established between the first head display device and the second head display device, so that the first virtual character and the second virtual character interact individually in the game scene. In this way, the first virtual character can trigger individual interaction with the second virtual character through a gaze operation in the game, which simplifies the operation steps for triggering interaction between virtual characters during the game and can reduce the response frequency and data processing load of the head display device during the game.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an interaction device according to an embodiment of the application. Providing a graphical user interface through the first head-display device; displaying at least part of game scenes in the graphical user interface; the game scene at least comprises a first virtual character controlled by the first head display equipment; as shown in fig. 7, the interaction device 700 includes:
An identification display module 710, configured to display, in response to a first gaze event for the first virtual character, a first gaze identification at an associated position of a second virtual character triggering the first gaze event, so as to indicate, through the first gaze identification, the second virtual character triggering the first gaze event, where the second virtual character is a virtual character controlled by a second head display device, and the first gaze event is triggered by a first gaze operation performed by the second head display device for the first virtual character;
the first interaction triggering module 720 is configured to establish an interaction channel between the first head display device and the second head display device in response to a second gaze operation for the second virtual character, so that the first virtual character and the second virtual character perform an independent interaction in the game scene.
Further, the interaction mode of the independent interaction comprises one or more of voice interaction, text interaction, expression interaction, team interaction, limb interaction and friend adding interaction.
Further, when establishing an interaction channel between the first head display device and the second head display device in response to the second gaze operation for the second virtual character, the first interaction triggering module 720 is specifically configured to:
in response to a second gaze operation for the second virtual character, an interaction channel is established between the first head-display device and the second head-display device when a gaze duration of the second gaze operation reaches a gaze time threshold.
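The gaze-time-threshold check can be sketched as follows (the threshold value is an illustrative assumption):

```python
GAZE_TIME_THRESHOLD = 2.0  # seconds; illustrative value, tunable per game

def should_open_channel(gaze_duration: float) -> bool:
    """Open the interaction channel only once the second gaze operation
    has been held for at least the gaze time threshold."""
    return gaze_duration >= GAZE_TIME_THRESHOLD
```

Requiring a sustained gaze rather than a momentary one filters out glances that pass over a character, so a channel is only opened when the player shows deliberate interaction intent.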
Further, the interaction device 700 further includes a first identifier adjustment module (not shown in the figure), where the first identifier adjustment module is configured to:
in the process of establishing the interaction channel, according to the gazing duration time of the second gazing operation, the first identification parameter of the first gazing identification is adjusted in real time, and the first gazing identification is displayed according to the first identification parameter after the real-time adjustment, so that the display special effect of the first gazing identification is changed;
Wherein the first identification parameter comprises one or more of transparency, brightness, color and filling proportion; the filling proportion is the proportion of the filling area in the first gazing mark; the transparency is inversely related to the gaze duration; the brightness and the fill proportion are positively correlated with the gaze duration.
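A minimal sketch of the correlations just described, assuming the identification parameters are normalized to [0, 1] and progress linearly toward the gaze time threshold (the threshold and the linear mapping are illustrative assumptions):

```python
def adjust_identification(gaze_duration: float, threshold: float = 2.0) -> dict:
    """Map the gaze duration onto the first identification parameters:
    transparency falls while brightness and fill proportion rise."""
    p = min(max(gaze_duration / threshold, 0.0), 1.0)  # progress in [0, 1]
    return {
        "transparency": 1.0 - p,   # inversely related to gaze duration
        "brightness": p,           # positively related to gaze duration
        "fill_proportion": p,      # positively related to gaze duration
    }
```

As the gaze is held, the identification becomes more opaque, brighter, and more filled, which gives the player a continuous visual readout of how close the individual interaction is to being triggered.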
Further, the interaction device 700 further includes a second identifier adjustment module (not shown in the figure), where the second identifier adjustment module is configured to:
In the process of establishing the interaction channel, in response to the first virtual character suspending the second gazing operation, counting the suspension duration of the second gazing operation;
according to the suspension duration, a first identification parameter of the first fixation identification is adjusted in real time, and the first fixation identification is displayed according to the first identification parameter after the real-time adjustment, so that the display special effect of the first fixation identification is changed;
Wherein the first identification parameter comprises one or more of transparency, brightness, color and filling proportion; the filling proportion is the proportion of the filling area in the first gazing mark; the transparency is positively correlated with the abort duration; the brightness and the fill fraction are inversely related to the abort duration.
Further, the interaction device 700 further includes a third identifier adjustment module (not shown in the figure), where the third identifier adjustment module is configured to:
And in response to the suspension duration being greater than a suspension time threshold, terminating establishing an interaction channel between the first head-display device and the second head-display device and canceling display of the first gaze identification at the associated location of the second virtual character.
Further, the interaction device 700 further includes a fourth identifier adjustment module (not shown in the figure), where the fourth identifier adjustment module is configured to:
And in the process of adjusting the first identification parameters in real time according to the suspension duration, resetting the first identification parameters to the first identification parameters before real-time adjustment in response to a third gazing operation aiming at the second virtual character, and displaying the first gazing identification according to the first identification parameters before real-time adjustment.
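The suspension, reset, and termination behavior of these modules can be sketched as a small state machine (the threshold, parameter names, and linear fade rule are illustrative assumptions):

```python
class GazeChannelSession:
    """Minimal state machine for the suspension logic described above."""

    def __init__(self, abort_threshold: float = 1.5):
        self.abort_threshold = abort_threshold  # seconds; illustrative value
        self.abort_time = 0.0
        self.saved_params = None
        self.terminated = False

    def on_suspend(self, params: dict, dt: float):
        """Count suspension time and fade the gaze identification while paused.
        Returns the adjusted parameters, or None once the channel is cancelled."""
        if self.saved_params is None:
            self.saved_params = dict(params)  # remember pre-adjustment values
        self.abort_time += dt
        if self.abort_time > self.abort_threshold:
            self.terminated = True            # stop establishing the channel
            return None                       # cancel the gaze identification
        fade = self.abort_time / self.abort_threshold
        return {
            "transparency": min(1.0, params["transparency"] + fade),
            "brightness": max(0.0, params["brightness"] - fade),
            "fill_proportion": max(0.0, params["fill_proportion"] - fade),
        }

    def on_resume(self):
        """Third gaze operation: restore the parameters saved before adjustment."""
        self.abort_time = 0.0
        restored, self.saved_params = self.saved_params, None
        return restored
```

A short pause merely fades the identification and is fully reversible by gazing again, while a pause past the threshold tears the pending channel down, matching the three adjustment modules above.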
Further, the interaction device 700 further includes a second interaction triggering module (not shown in the figure), and the second interaction triggering module is configured to:
responsive to a fourth gaze operation directed to a third virtual character, displaying a hint identification in the graphical user interface, so as to prompt, through the hint identification, that the fourth gaze operation has taken effect;
during the validation of the fourth gaze operation, in response to a second gaze event for the first virtual character, exhibiting a second gaze identification at an associated location of the third virtual character that triggered the second gaze event, and establishing an interaction channel between the first head-display device and the third head-display device to enable the first virtual character and the third virtual character to interact individually in the game scene;
wherein the third virtual character is a virtual character controlled by the third head-display device, and the second gazing event is triggered by a fifth gazing operation performed by the third head-display device and directed to the first virtual character.
Further, the interaction device 700 further includes a fifth identifier adjustment module (not shown in the figure), where the fifth identifier adjustment module is configured to:
in the process of establishing the interaction channel, according to the triggering duration of the second gazing event, the second identification parameter of the second gazing identification and the third identification parameter of the prompt identification are adjusted in real time, and the second gazing identification and the prompt identification are displayed according to the adjusted second identification parameter and third identification parameter respectively, so that the display special effects of the second gazing identification and the prompt identification are changed;
Wherein the second identification parameter and the third identification parameter include one or more of transparency, brightness, color, and fill proportion; the filling proportion is the proportion of the filling area in the gazing mark; the transparency is inversely related to the trigger duration; the brightness and the fill proportion are positively correlated with the trigger duration.
Further, after the interactive channel is established, the interactive device 700 further includes a mode selection module (not shown in the figure), where the mode selection module is configured to:
displaying an interaction triggering interface in the graphical user interface; the interactive triggering interface comprises a plurality of mode selection areas;
Determining a target selection area selected by a mode selection operation in response to the mode selection operation applied in the interaction trigger interface;
And controlling the first virtual character to independently interact with the second virtual character in the game scene according to the interaction mode associated with the target selection area.
Further, after establishing an interaction channel between the first head display device and the second head display device, the interaction apparatus 700 further includes a gesture interaction module (not shown in the figure), where the gesture interaction module is configured to:
and in response to the detection of the first gesture, controlling the first virtual character to interact with the second virtual character independently in the game scene according to the interaction mode associated with the first gesture.
Further, after the interaction channel is established between the first head display device and the second head display device, the interaction device 700 further includes an interaction prompt module (not shown in the figure), where the interaction prompt module is configured to:
An interaction assistance line is displayed between the first virtual character and the second virtual character to prompt, through the interaction assistance line, that the first virtual character is currently interacting alone with the second virtual character.
Further, the interaction device 700 further includes an interaction termination module (not shown in the figure), where the interaction termination module is configured to:
Responsive to a sixth gaze operation being applied for the first gaze identification, displaying a termination interaction control in the graphical user interface;
and responding to a first triggering operation for the termination interaction control, and ending the independent interaction between the first virtual character and the second virtual character.
Further, the interaction device 700 further includes a first interaction invitation module (not shown in the figure), where the first interaction invitation module is configured to:
Displaying an interaction invitation control in the graphical user interface;
in response to a second triggering operation for the interaction invitation control, establishing a plurality of interaction channels among a third virtual character, the first virtual character and the second virtual character, so that the first virtual character, the second virtual character and the third virtual character perform group interaction in the game scene;
The third virtual character is a virtual character controlled by a third head display device.
Further, the interaction device 700 further includes a second interaction invitation module (not shown in the figure), where the second interaction invitation module is configured to:
Displaying an interaction permission identification in the graphical user interface;
Establishing a plurality of interaction channels among a third virtual character, the first virtual character and the second virtual character in response to a fourth gaze event identified for the interaction permission, so that the first virtual character, the second virtual character and the third virtual character perform group interaction in the game scene;
The third virtual character is a virtual character controlled by a third head display device, and the fourth gazing event is triggered by gazing operation which is executed by the third head display device and is aiming at the interaction permission mark.
Further, the interaction device 700 further includes a third interaction invitation module (not shown in the figure), where the third interaction invitation module is configured to:
In response to detecting a second gesture, establishing a plurality of interaction channels among a third virtual character, the first virtual character and the second virtual character, so that the first virtual character, the second virtual character and the third virtual character perform group interaction in the game scene;
The third virtual character is a virtual character controlled by a third head display device.
According to the interaction device provided by the embodiment of the application, in response to a first gaze event for the first virtual character, a first gaze identification is displayed at the associated position of the second virtual character triggering the first gaze event, so as to indicate, through the first gaze identification, the second virtual character triggering the first gaze event; in response to a second gaze operation for the second virtual character, an interaction channel is established between the first head display device and the second head display device, so that the first virtual character and the second virtual character interact individually in the game scene. In this way, the first virtual character can trigger individual interaction with the second virtual character through a gaze operation in the game, which simplifies the operation steps for triggering interaction between virtual characters during the game and can reduce the response frequency and data processing load of the head display device during the game.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the application. As shown in fig. 8, the electronic device 800 includes a processor 810, a memory 820 and a bus 830, the memory 820 storing machine-readable instructions executable by the processor 810, the processor 810 and the memory 820 communicating via the bus 830 when the electronic device runs the interaction method of the embodiments, and the processor 810 executing the machine-readable instructions to perform the following steps:
Responsive to a first gaze event for the first virtual character, exhibiting a first gaze identification at an associated position of a second virtual character triggering the first gaze event, so as to indicate, through the first gaze identification, the second virtual character triggering the first gaze event, wherein the second virtual character is a virtual character controlled by a second head display device, and the first gaze event is triggered by a first gaze operation performed by the second head display device for the first virtual character;
An interaction channel is established between the first head-display device and the second head-display device in response to a second gaze operation for the second virtual character, such that the first virtual character and the second virtual character interact individually in the game scene.
In a possible implementation, the interaction mode of the individual interaction includes one or more of voice interaction, text interaction, expression interaction, team interaction, limb interaction and friend adding interaction.
In a possible embodiment, when establishing an interaction channel between the first head display device and the second head display device in response to a second gaze operation for the second virtual character, the processor 810 is specifically configured to:
in response to a second gaze operation for the second virtual character, an interaction channel is established between the first head-display device and the second head-display device when a gaze duration of the second gaze operation reaches a gaze time threshold.
In one possible embodiment, the processor 810 further performs:
in the process of establishing the interaction channel, according to the gazing duration time of the second gazing operation, the first identification parameter of the first gazing identification is adjusted in real time, and the first gazing identification is displayed according to the first identification parameter after the real-time adjustment, so that the display special effect of the first gazing identification is changed;
Wherein the first identification parameter comprises one or more of transparency, brightness, color and filling proportion; the filling proportion is the proportion of the filling area in the first gazing mark; the transparency is inversely related to the gaze duration; the brightness and the fill proportion are positively correlated with the gaze duration.
In one possible embodiment, the processor 810 further performs:
In the process of establishing the interaction channel, in response to the first virtual character suspending the second gazing operation, counting the suspension duration of the second gazing operation;
according to the suspension duration, a first identification parameter of the first fixation identification is adjusted in real time, and the first fixation identification is displayed according to the first identification parameter after the real-time adjustment, so that the display special effect of the first fixation identification is changed;
Wherein the first identification parameter comprises one or more of transparency, brightness, color and filling proportion; the filling proportion is the proportion of the filling area in the first gazing mark; the transparency is positively correlated with the abort duration; the brightness and the fill fraction are inversely related to the abort duration.
In one possible embodiment, the processor 810 further performs:
And in response to the suspension duration being greater than a suspension time threshold, terminating establishing an interaction channel between the first head-display device and the second head-display device and canceling display of the first gaze identification at the associated location of the second virtual character.
In one possible embodiment, the processor 810 further performs:
And in the process of adjusting the first identification parameters in real time according to the suspension duration, resetting the first identification parameters to the first identification parameters before real-time adjustment in response to a third gazing operation aiming at the second virtual character, and displaying the first gazing identification according to the first identification parameters before real-time adjustment.
In one possible embodiment, the processor 810 further performs:
responsive to a fourth gaze operation directed to a third virtual character, displaying a hint identification in the graphical user interface, so as to prompt, through the hint identification, that the fourth gaze operation has taken effect;
during the validation of the fourth gaze operation, in response to a second gaze event for the first virtual character, exhibiting a second gaze identification at an associated location of the third virtual character that triggered the second gaze event, and establishing an interaction channel between the first head-display device and the third head-display device to enable the first virtual character and the third virtual character to interact individually in the game scene;
wherein the third virtual character is a virtual character controlled by the third head-display device, and the second gazing event is triggered by a fifth gazing operation performed by the third head-display device and directed to the first virtual character.
In one possible embodiment, the processor 810 further performs:
in the process of establishing the interaction channel, according to the triggering duration of the second gazing event, the second identification parameter of the second gazing identification and the third identification parameter of the prompt identification are adjusted in real time, and the second gazing identification and the prompt identification are displayed according to the adjusted second identification parameter and third identification parameter respectively, so that the display special effects of the second gazing identification and the prompt identification are changed;
Wherein the second identification parameter and the third identification parameter include one or more of transparency, brightness, color, and fill proportion; the filling proportion is the proportion of the filling area in the gazing mark; the transparency is inversely related to the trigger duration; the brightness and the fill proportion are positively correlated with the trigger duration.
In one possible embodiment, after establishing the interaction channel, the processor 810 further performs:
displaying an interaction triggering interface in the graphical user interface; the interactive triggering interface comprises a plurality of mode selection areas;
Determining a target selection area selected by a mode selection operation in response to the mode selection operation applied in the interaction trigger interface;
And controlling the first virtual character to independently interact with the second virtual character in the game scene according to the interaction mode associated with the target selection area.
In one possible embodiment, after establishing an interaction channel between the first head-display device and the second head-display device, the processor 810 further performs:
and in response to the detection of the first gesture, controlling the first virtual character to interact with the second virtual character independently in the game scene according to the interaction mode associated with the first gesture.
In one possible embodiment, after establishing an interaction channel between the first head-display device and the second head-display device, the processor 810 further performs:
An interaction assistance line is displayed between the first virtual character and the second virtual character to prompt, through the interaction assistance line, that the first virtual character is currently interacting alone with the second virtual character.
In one possible embodiment, the processor 810 further performs:
Responsive to a sixth gaze operation being applied for the first gaze identification, displaying a termination interaction control in the graphical user interface;
and responding to a first triggering operation for the termination interaction control, and ending the independent interaction between the first virtual character and the second virtual character.
In one possible embodiment, the processor 810 further performs:
Displaying an interaction invitation control in the graphical user interface;
in response to a second triggering operation for the interaction invitation control, establishing a plurality of interaction channels among a third virtual character, the first virtual character and the second virtual character, so that the first virtual character, the second virtual character and the third virtual character perform group interaction in the game scene;
The third virtual character is a virtual character controlled by a third head display device.
In one possible embodiment, the processor 810 further performs:
Displaying an interaction permission identification in the graphical user interface;
Establishing a plurality of interaction channels among a third virtual character, the first virtual character and the second virtual character in response to a fourth gaze event identified for the interaction permission, so that the first virtual character, the second virtual character and the third virtual character perform group interaction in the game scene;
The third virtual character is a virtual character controlled by a third head display device, and the fourth gazing event is triggered by gazing operation which is executed by the third head display device and is aiming at the interaction permission mark.
In one possible embodiment, the processor 810 further performs:
In response to detecting a second gesture, establishing a plurality of interaction channels among a third virtual character, the first virtual character and the second virtual character, so that the first virtual character, the second virtual character and the third virtual character perform group interaction in the game scene;
The third virtual character is a virtual character controlled by a third head display device.
By the above method, the first virtual character can trigger individual interaction with the second virtual character through a gaze operation in the game, which simplifies the operation steps for triggering interaction between virtual characters during the game and reduces the response frequency and data processing load of the head display device.
To avoid a player unilaterally triggering interaction and interfering with the game process of other players, an interaction channel is established between the first head display device and the second head display device only when the virtual characters on both sides have the will to interact, so that a unilaterally triggered interaction does not interrupt the game process of a player who does not wish to interact. If the first virtual character does not wish to interact with the second virtual character, it can decline the individual interaction without performing any operation, which reduces the operations the first virtual character must perform to decline the second virtual character and further reduces the response frequency and data processing load of the head display device.
In addition, considering that, without a clear gaze indication, the first virtual character cannot give accurate and timely feedback to the second virtual character, the second virtual character is marked with the first gaze identification when it triggers a first gaze event for the first virtual character, so as to prompt the player controlling the first virtual character; and to let the player follow the trigger progress of the individual interaction, the display special effect of the first gaze identification is changed as a further prompt.
Finally, to prevent a player from temporarily changing the interaction mode during an individual interaction, which would disturb the interaction process and increase the response frequency of the head display device, an interaction trigger interface is displayed when the first virtual character and the second virtual character are allowed to interact individually in the game scene; the player completes the selection of the interaction mode through this interface and can thus interact in the mode best suited to the current game process, which reduces the operations required to change the interaction mode and lowers the response frequency of the head display device.
The embodiment of the application also provides a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to perform the following steps:
Responsive to a first gaze event for the first virtual character, exhibiting a first gaze identification at an associated position of a second virtual character triggering the first gaze event, so as to indicate, through the first gaze identification, the second virtual character triggering the first gaze event, wherein the second virtual character is a virtual character controlled by a second head display device, and the first gaze event is triggered by a first gaze operation performed by the second head display device for the first virtual character;
An interaction channel is established between the first head-display device and the second head-display device in response to a second gaze operation for the second virtual character, such that the first virtual character and the second virtual character interact individually in the game scene.
In a possible implementation, the interaction mode of the individual interaction includes one or more of voice interaction, text interaction, expression interaction, team interaction, limb interaction and friend adding interaction.
In a possible embodiment, when performing the step of establishing an interaction channel between the first head-display device and the second head-display device in response to a second gaze operation for the second virtual character, the processor specifically performs:
in response to a second gaze operation for the second virtual character, an interaction channel is established between the first head-display device and the second head-display device when a gaze duration of the second gaze operation reaches a gaze time threshold.
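By way of illustration only (not part of the patent disclosure), the gaze-time-threshold check described above could be sketched as follows; the constant `GAZE_TIME_THRESHOLD` and the function name are hypothetical:

```python
# Hypothetical sketch: the interaction channel is established only once the
# second gaze operation has lasted at least the gaze time threshold.
GAZE_TIME_THRESHOLD = 2.0  # seconds; example value, not specified in the text


def should_establish_channel(gaze_duration: float) -> bool:
    """Return True when the second gaze operation has reached the threshold."""
    return gaze_duration >= GAZE_TIME_THRESHOLD
```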
In one possible embodiment, the processor further performs:
in the process of establishing the interaction channel, adjusting a first identification parameter of the first gaze identification in real time according to the gaze duration of the second gaze operation, and displaying the first gaze identification according to the real-time adjusted first identification parameter, so that the display effect of the first gaze identification changes;
wherein the first identification parameter comprises one or more of transparency, brightness, color, and fill proportion; the fill proportion is the proportion of the filled area within the first gaze identification; the transparency is inversely related to the gaze duration; and the brightness and the fill proportion are positively related to the gaze duration.
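A minimal sketch of the real-time parameter adjustment above, assuming a linear relation between gaze duration and each parameter (the linearity and all names are illustrative assumptions; the text only specifies the direction of each correlation):

```python
def gaze_mark_params(gaze_duration: float, threshold: float = 2.0) -> dict:
    """Map the gaze duration to display parameters of the first gaze identification.

    Transparency falls as the gaze continues (inverse correlation), while
    brightness and fill proportion rise (positive correlation).
    """
    # Clamp gaze progress to [0, 1] relative to the gaze time threshold.
    t = max(0.0, min(gaze_duration / threshold, 1.0))
    return {
        "transparency": 1.0 - t,
        "brightness": t,
        "fill_proportion": t,  # share of the identification's area that is filled
    }
```

Rendering code would re-evaluate this each frame and redraw the first gaze identification with the returned parameters.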
In one possible embodiment, the processor further performs:
in the process of establishing the interaction channel, in response to the first virtual character suspending the second gaze operation, counting a suspension duration of the second gaze operation;
adjusting the first identification parameter of the first gaze identification in real time according to the suspension duration, and displaying the first gaze identification according to the real-time adjusted first identification parameter, so that the display effect of the first gaze identification changes;
wherein the first identification parameter comprises one or more of transparency, brightness, color, and fill proportion; the fill proportion is the proportion of the filled area within the first gaze identification; the transparency is positively related to the suspension duration; and the brightness and the fill proportion are inversely related to the suspension duration.
In one possible embodiment, the processor further performs:
in response to the suspension duration being greater than a suspension time threshold, terminating the establishment of the interaction channel between the first head-display device and the second head-display device, and canceling the display of the first gaze identification at the associated location of the second virtual character.
In one possible embodiment, the processor further performs:
in the process of adjusting the first identification parameter in real time according to the suspension duration, in response to a third gaze operation for the second virtual character, resetting the first identification parameter to its value before the real-time adjustment, and displaying the first gaze identification according to the first identification parameter before the real-time adjustment.
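The suspension flow of the three preceding embodiments (counting the suspension duration, reversing the parameter adjustment, terminating past a threshold, and restoring on a third gaze operation) could be sketched as one small state holder. All names and the linear adjustment are illustrative assumptions, not part of the patent text:

```python
class GazeSuspension:
    """Hypothetical tracker for a suspended second gaze operation."""

    def __init__(self, abort_threshold: float = 1.5):
        self.abort_threshold = abort_threshold  # suspension time threshold
        self.suspend_time = 0.0
        self.saved_params = None   # parameters before real-time adjustment
        self.terminated = False    # channel establishment terminated?

    def on_suspend_tick(self, dt: float, current_params: dict) -> dict:
        if self.saved_params is None:
            self.saved_params = dict(current_params)  # remember for restore
        self.suspend_time += dt
        if self.suspend_time > self.abort_threshold:
            # Suspension exceeded the threshold: stop establishing the channel
            # and cancel the display of the first gaze identification.
            self.terminated = True
            return {}
        t = self.suspend_time / self.abort_threshold
        # Transparency rises with suspension; brightness and fill fall.
        return {"transparency": t, "brightness": 1.0 - t, "fill_proportion": 1.0 - t}

    def on_third_gaze(self) -> dict:
        """Third gaze operation: reset to the pre-adjustment parameters."""
        params, self.saved_params = self.saved_params, None
        self.suspend_time = 0.0
        return params
```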
In one possible embodiment, the processor further performs:
in response to a fourth gaze operation for a third virtual character, displaying a prompt identification in the graphical user interface, the prompt identification indicating that the fourth gaze operation has taken effect;
while the fourth gaze operation is in effect, in response to a second gaze event for the first virtual character, displaying a second gaze identification at an associated location of the third virtual character that triggered the second gaze event, and establishing an interaction channel between the first head-display device and the third head-display device, so that the first virtual character and the third virtual character can interact individually in the game scene;
wherein the third virtual character is a virtual character controlled by the third head-display device, and the second gaze event is triggered by a fifth gaze operation performed by the third head-display device for the first virtual character.
In one possible embodiment, the processor further performs:
in the process of establishing the interaction channel, adjusting a second identification parameter of the second gaze identification and a third identification parameter of the prompt identification in real time according to the trigger duration of the second gaze event, displaying the second gaze identification according to the real-time adjusted second identification parameter, and displaying the prompt identification according to the real-time adjusted third identification parameter, so that the display effects of the second gaze identification and the prompt identification change;
wherein the second identification parameter and the third identification parameter each include one or more of transparency, brightness, color, and fill proportion; the fill proportion is the proportion of the filled area within the corresponding identification; the transparency is inversely related to the trigger duration; and the brightness and the fill proportion are positively related to the trigger duration.
In one possible embodiment, after establishing the interaction channel, the processor further performs:
displaying an interaction triggering interface in the graphical user interface; the interactive triggering interface comprises a plurality of mode selection areas;
determining, in response to a mode selection operation applied in the interaction trigger interface, a target selection area selected by the mode selection operation;
and controlling the first virtual character to interact individually with the second virtual character in the game scene according to the interaction mode associated with the target selection area.
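The mode-selection step above could be expressed as a simple lookup from the target selection area to its associated interaction mode; the area names below are purely illustrative and not part of the patent text:

```python
# Hypothetical association between selection areas of the interaction
# trigger interface and interaction modes.
MODE_BY_AREA = {
    "area_voice": "voice interaction",
    "area_text": "text interaction",
    "area_team": "team interaction",
}


def interaction_mode_for(target_selection_area: str) -> str:
    """Return the interaction mode associated with the target selection area."""
    return MODE_BY_AREA[target_selection_area]
```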
In one possible embodiment, after establishing an interaction channel between the first head-display device and the second head-display device, the processor further performs:
in response to detection of a first gesture, controlling the first virtual character to interact individually with the second virtual character in the game scene according to the interaction mode associated with the first gesture.
In one possible embodiment, after establishing an interaction channel between the first head-display device and the second head-display device, the processor further performs:
displaying an interaction assistance line between the first virtual character and the second virtual character to indicate, through the interaction assistance line, that the first virtual character is currently interacting individually with the second virtual character.
In one possible embodiment, the processor further performs:
in response to a sixth gaze operation applied to the first gaze identification, displaying a termination interaction control in the graphical user interface;
and in response to a first trigger operation for the termination interaction control, ending the individual interaction between the first virtual character and the second virtual character.
In one possible embodiment, the processor further performs:
Displaying an interaction invitation control in the graphical user interface;
in response to a second trigger operation for the interaction invitation control, establishing a plurality of interaction channels among a third virtual character, the first virtual character, and the second virtual character, so that the first virtual character, the second virtual character, and the third virtual character perform group interaction in the game scene;
wherein the third virtual character is a virtual character controlled by a third head-display device.
In one possible embodiment, the processor further performs:
Displaying an interaction permission identification in the graphical user interface;
establishing a plurality of interaction channels among a third virtual character, the first virtual character, and the second virtual character in response to a fourth gaze event for the interaction permission identification, so that the first virtual character, the second virtual character, and the third virtual character perform group interaction in the game scene;
wherein the third virtual character is a virtual character controlled by a third head-display device, and the fourth gaze event is triggered by a gaze operation performed by the third head-display device for the interaction permission identification.
In one possible embodiment, the processor further performs:
in response to detecting a second gesture, establishing a plurality of interaction channels among a third virtual character, the first virtual character, and the second virtual character, so that the first virtual character, the second virtual character, and the third virtual character perform group interaction in the game scene;
wherein the third virtual character is a virtual character controlled by the third head-display device.
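As a sketch of the group-interaction embodiments above, establishing "a plurality of interaction channels" among three characters can be read as creating one channel per pair of participants; this pairwise topology is an assumption, since the text does not fix it:

```python
from itertools import combinations


def establish_group_channels(characters: list) -> list:
    """Return one (pairwise) interaction channel per pair of participants."""
    return list(combinations(characters, 2))
```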
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The above-described apparatus embodiments are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, devices, or units, and may be in electrical, mechanical, or other form.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are only specific implementations of the present application, intended to illustrate rather than limit its technical solutions, and the protection scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art may, within the technical scope disclosed by the present application, still modify the technical solutions described in the foregoing embodiments or readily conceive of changes, or make equivalent substitutions of some of the technical features; such modifications, changes, or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application and shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (19)

1. An interaction method, characterized in that a graphical user interface is provided through a first head-display device; at least part of a game scene is displayed in the graphical user interface; the game scene at least comprises a first virtual character controlled by the first head-display device; the interaction method comprises the following steps:
responsive to a first gaze event for the first virtual character, displaying a first gaze identification at an associated location of a second virtual character that triggered the first gaze event, so that the second virtual character triggering the first gaze event is indicated by the first gaze identification, wherein the second virtual character is a virtual character controlled by a second head-display device, and the first gaze event is triggered by a first gaze operation performed by the second head-display device for the first virtual character;
An interaction channel is established between the first head-display device and the second head-display device in response to a second gaze operation for the second virtual character, such that the first virtual character and the second virtual character interact individually in the game scene.
2. The method of interaction of claim 1, wherein the individual interaction means comprises one or more of voice interaction, text interaction, expression interaction, team interaction, limb interaction, and friend adding interaction.
3. The interaction method of claim 1, wherein the establishing an interaction channel between the first head-display device and the second head-display device in response to a second gaze operation for the second virtual character comprises:
in response to a second gaze operation for the second virtual character, an interaction channel is established between the first head-display device and the second head-display device when a gaze duration of the second gaze operation reaches a gaze time threshold.
4. An interaction method as claimed in any one of claims 1 to 3, further comprising:
in the process of establishing the interaction channel, adjusting a first identification parameter of the first gaze identification in real time according to the gaze duration of the second gaze operation, and displaying the first gaze identification according to the real-time adjusted first identification parameter, so that the display effect of the first gaze identification changes;
wherein the first identification parameter comprises one or more of transparency, brightness, color, and fill proportion; the fill proportion is the proportion of the filled area within the first gaze identification; the transparency is inversely related to the gaze duration; and the brightness and the fill proportion are positively related to the gaze duration.
5. An interaction method as claimed in any one of claims 1 to 3, further comprising:
in the process of establishing the interaction channel, in response to the first virtual character suspending the second gaze operation, counting a suspension duration of the second gaze operation;
adjusting a first identification parameter of the first gaze identification in real time according to the suspension duration, and displaying the first gaze identification according to the real-time adjusted first identification parameter, so that the display effect of the first gaze identification changes;
wherein the first identification parameter comprises one or more of transparency, brightness, color, and fill proportion; the fill proportion is the proportion of the filled area within the first gaze identification; the transparency is positively related to the suspension duration; and the brightness and the fill proportion are inversely related to the suspension duration.
6. The method of interaction of claim 5, wherein the method of interaction further comprises:
in response to the suspension duration being greater than a suspension time threshold, terminating the establishment of the interaction channel between the first head-display device and the second head-display device, and canceling the display of the first gaze identification at the associated location of the second virtual character.
7. The method of interaction of claim 5, wherein the method of interaction further comprises:
in the process of adjusting the first identification parameter in real time according to the suspension duration, in response to a third gaze operation for the second virtual character, resetting the first identification parameter to its value before the real-time adjustment, and displaying the first gaze identification according to the first identification parameter before the real-time adjustment.
8. The method of interaction of claim 1, wherein the method of interaction further comprises:
in response to a fourth gaze operation for a third virtual character, displaying a prompt identification in the graphical user interface, the prompt identification indicating that the fourth gaze operation has taken effect;
while the fourth gaze operation is in effect, in response to a second gaze event for the first virtual character, displaying a second gaze identification at an associated location of the third virtual character that triggered the second gaze event, and establishing an interaction channel between the first head-display device and the third head-display device, so that the first virtual character and the third virtual character can interact individually in the game scene;
wherein the third virtual character is a virtual character controlled by the third head-display device, and the second gaze event is triggered by a fifth gaze operation performed by the third head-display device for the first virtual character.
9. The method of interaction of claim 8, wherein the method of interaction further comprises:
in the process of establishing the interaction channel, adjusting a second identification parameter of the second gaze identification and a third identification parameter of the prompt identification in real time according to the trigger duration of the second gaze event, displaying the second gaze identification according to the real-time adjusted second identification parameter, and displaying the prompt identification according to the real-time adjusted third identification parameter, so that the display effects of the second gaze identification and the prompt identification change;
wherein the second identification parameter and the third identification parameter each include one or more of transparency, brightness, color, and fill proportion; the fill proportion is the proportion of the filled area within the corresponding identification; the transparency is inversely related to the trigger duration; and the brightness and the fill proportion are positively related to the trigger duration.
10. The interaction method of claim 1, wherein after establishing the interaction channel, the interaction method further comprises:
displaying an interaction triggering interface in the graphical user interface; the interactive triggering interface comprises a plurality of mode selection areas;
determining, in response to a mode selection operation applied in the interaction trigger interface, a target selection area selected by the mode selection operation;
and controlling the first virtual character to interact individually with the second virtual character in the game scene according to the interaction mode associated with the target selection area.
11. The interaction method of claim 1, wherein after establishing an interaction channel between the first head display device and the second head display device, the interaction method further comprises:
in response to detection of a first gesture, controlling the first virtual character to interact individually with the second virtual character in the game scene according to the interaction mode associated with the first gesture.
12. The interaction method of claim 1, wherein after establishing an interaction channel between the first head display device and the second head display device, the interaction method further comprises:
displaying an interaction assistance line between the first virtual character and the second virtual character to indicate, through the interaction assistance line, that the first virtual character is currently interacting individually with the second virtual character.
13. The method of interaction of claim 1, wherein the method of interaction further comprises:
in response to a sixth gaze operation applied to the first gaze identification, displaying a termination interaction control in the graphical user interface;
and in response to a first trigger operation for the termination interaction control, ending the individual interaction between the first virtual character and the second virtual character.
14. The method of interaction of claim 1, wherein the method of interaction further comprises:
Displaying an interaction invitation control in the graphical user interface;
in response to a second trigger operation for the interaction invitation control, establishing a plurality of interaction channels among a third virtual character, the first virtual character, and the second virtual character, so that the first virtual character, the second virtual character, and the third virtual character perform group interaction in the game scene;
wherein the third virtual character is a virtual character controlled by a third head-display device.
15. The method of interaction of claim 1, wherein the method of interaction further comprises:
Displaying an interaction permission identification in the graphical user interface;
establishing a plurality of interaction channels among a third virtual character, the first virtual character, and the second virtual character in response to a fourth gaze event for the interaction permission identification, so that the first virtual character, the second virtual character, and the third virtual character perform group interaction in the game scene;
wherein the third virtual character is a virtual character controlled by a third head-display device, and the fourth gaze event is triggered by a gaze operation performed by the third head-display device for the interaction permission identification.
16. The method of interaction of claim 1, wherein the method of interaction further comprises:
in response to detecting a second gesture, establishing a plurality of interaction channels among a third virtual character, the first virtual character, and the second virtual character, so that the first virtual character, the second virtual character, and the third virtual character perform group interaction in the game scene;
wherein the third virtual character is a virtual character controlled by the third head-display device.
17. An interaction apparatus, characterized in that a graphical user interface is provided through a first head-display device; at least part of a game scene is displayed in the graphical user interface; the game scene at least comprises a first virtual character controlled by the first head-display device; the interaction apparatus comprises:
an identification display module, configured to display, in response to a first gaze event for the first virtual character, a first gaze identification at an associated location of a second virtual character that triggered the first gaze event, so that the second virtual character triggering the first gaze event is indicated by the first gaze identification; wherein the second virtual character is a virtual character controlled by a second head-display device, and the first gaze event is triggered by a first gaze operation performed by the second head-display device for the first virtual character;
And the first interaction triggering module is used for responding to a second gazing operation aiming at the second virtual character, and establishing an interaction channel between the first head display device and the second head display device so as to enable the first virtual character and the second virtual character to perform independent interaction in the game scene.
18. An electronic device, comprising: a processor, a memory and a bus, said memory storing machine readable instructions executable by said processor, said processor and said memory communicating via said bus when the electronic device is running, said machine readable instructions when executed by said processor performing the steps of the interaction method of any of claims 1 to 16.
19. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of the interaction method of any of claims 1 to 16.
CN202410123676.3A 2024-01-29 2024-01-29 Interaction method, interaction device, electronic equipment and readable storage medium Pending CN117899453A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410123676.3A CN117899453A (en) 2024-01-29 2024-01-29 Interaction method, interaction device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN117899453A true CN117899453A (en) 2024-04-19

Family

ID=90690675

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410123676.3A Pending CN117899453A (en) 2024-01-29 2024-01-29 Interaction method, interaction device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN117899453A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination