CN107562201B - Directional interaction method and device, electronic equipment and storage medium

Info

Publication number: CN107562201B
Application number: CN201710804588.XA
Authority: CN (China)
Prior art keywords: voice message, preset, recording, interactive, user
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN107562201A
Inventors: 吴志武, 雷月雯, 韩志轩
Current assignee: Netease Hangzhou Network Co Ltd
Original assignee: Netease Hangzhou Network Co Ltd
Application filed by Netease Hangzhou Network Co Ltd; priority to CN201710804588.XA
Publication of application CN107562201A; application granted; publication of grant CN107562201B

Abstract

The present disclosure provides a directional interaction method, a directional interaction apparatus, an electronic device, and a computer-readable storage medium, relating to the field of human-computer interaction. The method comprises: detecting whether an interactive trigger operation is received, the operation acting on an input device and causing the relative position between the input device and a preset part of the user to meet a preset condition; after the interaction triggering operation is detected, if a recording triggering operation is received, triggering the user to record a voice message and judging whether recording of the voice message is finished; and after recording of the voice message is judged finished, determining a target interactive object and sending the voice message to the target interactive object. The method and apparatus enable fast triggering of the interaction state and improve operation efficiency.

Description

Directional interaction method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of human-computer interaction, and in particular, to a directional interaction method, a directional interaction apparatus, an electronic device, and a computer-readable storage medium.
Background
With the rapid development of mobile communication technology, a large number of VR (Virtual Reality) gaming applications have emerged. In various game application scenarios, multiple players communicate in a lobby by voice or mail.
In the prior art, voice communication is mostly performed in the following two ways. In the first way, in the game application A shown in fig. 1, which simulates real-world voice communication, voices are filtered according to the relative positions of the players in VR: the voice of a player relatively far from the player corresponding to the user is quiet, and the voice of a player relatively close is loud, so that voice messages sent by nearby players are received. In this way, when many players are within a certain distance range, the player corresponding to the user cannot single out which player should receive the voice to be sent.
In the second way, directional communication is performed between adjacent players through a specific gesture. In the game application B shown in fig. 2, when the relative offset of the helmets of two adjacent players exceeds a certain angle, the adjacent players can hold a private conversation: during it they cannot hear the voices of other people, and other users cannot hear the content exchanged between the two adjacent players. In this way, directional communication can only be achieved between players in adjacent positions, and maintaining the head deviation for a long time causes physical fatigue and a poor user experience.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
An object of the present disclosure is to provide a directional interaction method, a directional interaction apparatus, an electronic device, and a computer-readable storage medium, which overcome one or more of the problems due to the limitations and disadvantages of the related art, at least to some extent.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to a first aspect of the present disclosure, there is provided a directional interaction method, applied to a terminal capable of presenting a virtual reality scene with an operation interface including at least a virtual object and an interactive object, the method including:
detecting whether an interactive trigger operation which acts on an input device and enables the relative position between the input device and a preset part of a user to meet a preset condition is received;
after the interaction triggering operation is detected, if a recording triggering operation is received, triggering the user to record the voice message and judging whether the recording of the voice message is finished;
and after the voice message recording is judged to be finished, determining a target interactive object and sending the voice message to the target interactive object.
In an exemplary embodiment of the present disclosure, the interaction triggering operation includes:
when the input device is a three-degree-of-freedom handle, the pitch angle of the input device points upward and the included angle between the input device and the preset part of the user meets a first preset range.
In an exemplary embodiment of the present disclosure, the interaction triggering operation further includes:
and when the input device is a six-degree-of-freedom handle, the distance between the input device and the preset part of the user meets a second preset range.
In an exemplary embodiment of the present disclosure, the recording trigger operation includes:
and determining a target interactive gesture model in a plurality of interactive gesture models displayed on the operation interface according to the position of the visual center point of the virtual object.
In an exemplary embodiment of the present disclosure, when the input device is a three-degree-of-freedom handle, determining whether recording of the voice message is finished includes:
and when detecting that the included angle between the input device and the preset part of the user exceeds the first preset range, judging that the recording of the voice message is finished.
In an exemplary embodiment of the present disclosure, when the input device is a six-degree-of-freedom handle, the determining whether the recording of the voice message is finished includes:
and when it is detected that the distance between the input device and the preset part of the user exceeds the second preset range, judging that the recording of the voice message is finished.
In an exemplary embodiment of the present disclosure, determining whether the recording of the voice message is finished further includes:
and detecting whether a preset button on the input device receives a preset operation, and judging that the recording of the voice message is finished when the preset button does not receive the preset operation.
In an exemplary embodiment of the present disclosure, determining a target interactive object and sending the voice message to the target interactive object includes:
judging whether an angle between the current orientation of the virtual object and the current position of the interactive object is larger than a third preset value or not;
and when the angle is smaller than the third preset value, determining the interactive object as a target interactive object and sending the voice message to the target interactive object.
In an exemplary embodiment of the present disclosure, the method further comprises:
and storing the voice message sent by the virtual object to the target interactive object, and adding the identifier corresponding to the virtual object to the interactive list of the target interactive object.
In an exemplary embodiment of the present disclosure, the method further comprises:
and setting a prompt identifier for the target interactive object for sending the voice message by the virtual object.
According to an aspect of the present disclosure, there is provided a directional interaction apparatus, applied to a terminal capable of presenting a virtual reality scene with an operation interface including at least a virtual object and an interactive object, the apparatus including:
the interactive triggering module is used for detecting whether an interactive triggering operation which acts on an input device and enables the relative position between the input device and a preset part of a user to meet a preset condition is received;
the recording module is used for triggering the user to record the voice message and judging whether the voice message is recorded completely or not if the recording triggering operation is received after the interaction triggering operation is detected;
and the interaction module is used for determining a target interaction object and sending the voice message to the target interaction object after judging that the voice message recording is finished.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a directional interaction method as recited in any one of the above.
According to an aspect of the present disclosure, there is provided an electronic device including:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform any of the directional interaction methods described above via execution of the executable instructions.
In the directional interaction method, the directional interaction apparatus, the electronic device, and the computer-readable storage medium provided in the exemplary embodiments of the present disclosure, on one hand, a user can be quickly triggered to enter an interaction mode through an input device, which improves the speed of activating interaction; on the other hand, additional operation is not needed, so that the user can trigger the interaction mode naturally, and convenience is improved. Furthermore, when a plurality of interactive objects exist, the interactive objects can be accurately locked, and therefore the privacy of interaction between the virtual objects and the interactive objects is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The above and other features and advantages of the present disclosure will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:
FIG. 1 is a schematic diagram of an operation interface of a game application A in an exemplary embodiment of the disclosure;
FIG. 2 is a schematic diagram of a game application B operating interface in an exemplary embodiment of the disclosure;
FIG. 3 is a schematic diagram illustrating a directional interaction method in an exemplary embodiment of the present disclosure;
FIG. 4 schematically illustrates an interactive operation interface diagram in an exemplary embodiment of the present disclosure;
FIG. 5 schematically illustrates a target interaction gesture model in an exemplary embodiment of the present disclosure;
FIG. 6 schematically illustrates a gamepad rotation angle diagram in an exemplary embodiment of the disclosure;
FIG. 7 is a schematic diagram illustrating three-degree-of-freedom handle recording and the state at the end of the recording in an exemplary embodiment of the disclosure;
FIG. 8 is a schematic diagram illustrating six-degree-of-freedom handle recording and the state at the end of the recording in an exemplary embodiment of the present disclosure;
FIG. 9 is a schematic overall flow chart diagram illustrating a directional interaction method in an exemplary embodiment of the present disclosure;
FIG. 10 schematically illustrates a block diagram of a directional interaction device in an exemplary embodiment of the disclosure;
FIG. 11 schematically illustrates a block diagram of an electronic device in an exemplary embodiment of the disclosure;
fig. 12 schematically illustrates a program product in an exemplary embodiment of the disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The same reference numerals denote the same or similar parts in the drawings, and thus, a repetitive description thereof will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the embodiments of the disclosure can be practiced without one or more of the specific details, or with other methods, components, materials, devices, steps, and so forth. In other instances, well-known structures, methods, devices, implementations, materials, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. That is, these functional entities may be implemented in the form of software, or in one or more software and/or hardware modules, or in different networks and/or processor devices and/or microcontroller devices.
The present exemplary embodiment first discloses a directional interaction method, which may be applied to a game scene presenting a virtual reality game, with an operation interface including at least a virtual object and one or more interactive objects; the operation interface may be obtained by executing a software application on a processor of a terminal and rendering it on a display of the terminal. The virtual reality game here may be any type of game that includes social interaction, such as shooting or puzzle-solving game applications. Besides games, the method can also be applied to education, training, and other interactive applications that include social contact.
In the present exemplary embodiment, a virtual object may be located at an arbitrary position of the operation interface, and the virtual object may be configured to move according to control of an input device. The terminal can be various electronic devices with touch screens, such as a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart television and the like. The input device may be connected to the terminal in a wireless manner. Referring to fig. 3, the directional interaction method may include the steps of:
s310, detecting whether an interactive trigger operation which acts on an input device and enables the relative position between the input device and a preset part of a user to meet a preset condition is received;
step S320, after the interaction triggering operation is detected, if a recording triggering operation is received, triggering the user to record the voice message and judging whether the voice message is recorded completely;
step S330, after the voice message recording is judged to be finished, a target interactive object is determined and the voice message is sent to the target interactive object.
According to the directional interaction method in the embodiment, on one hand, the user can be rapidly triggered to enter the interaction mode through the input device, so that the speed of activating the interaction mode is improved; on the other hand, additional operation is not needed, so that the user can trigger the interaction mode naturally, and convenience is improved. Furthermore, when a plurality of interactive objects exist, the interactive objects can be accurately locked, and therefore the privacy of interaction between the virtual objects and the interactive objects is improved.
Next, the steps in the directional interaction method are further explained with reference to fig. 3 to 9.
In step S310, it is detected whether an interactive trigger operation is received, wherein the interactive trigger operation acts on an input device, and a relative position between the input device and a preset portion of a user meets a preset condition.
In this example embodiment, the user may be immersed in the virtual reality environment with an auxiliary device such as a head-mounted display, VR glasses, or one or more displays mounted on surfaces at a distance from the user. In the virtual environment, typically shown from the user's first-person perspective, all of the user's specific actions may be mapped onto a virtual object. For example, referring to fig. 4, the virtual objects 401, 402, and 403, each corresponding to a user, are all interactive objects. The interaction triggering operation causes the system to enter a directional voice state; the condition that the relative position between the input device and the preset part of the user meets the preset condition may be that the included angle or the distance between the input device and the preset part of the user currently using it meets the preset condition. The input device in this example may be a gamepad, and the interactive trigger operation is considered received when the gamepad meets the angle or distance requirement. After the interaction triggering operation is detected, it may then be detected whether a recording triggering operation is received.
Specifically, in this example embodiment, the interaction triggering operation may include:
when the input device is a three-degree-of-freedom handle, the pitch angle of the input device points upward and the included angle between the input device and the preset part of the user meets a first preset range.
In the exemplary embodiment, when the input device is a three-degree-of-freedom (3DoF) handle, the direction of the pitch angle of the input device may be detected, and whether the included angle between the input device and the preset part of the user meets a first preset range may be determined. Referring to fig. 6, the Euler angles may include a pitch angle (pitch), a yaw angle (yaw), and a roll angle (roll); combining the three Euler angles allows the rotation of any object in space to be calculated. The pitch angle may be set to rotate about the X-axis, the yaw angle about the Y-axis, and the roll angle about the Z-axis. For a 3DoF handle, the pitch angle describes the up-and-down deviation and the yaw angle the left-and-right deviation; the pitch angle is used for illustration in this example.
In this example, whether the pitch angle of the 3DoF handle changes, that is, whether the handle rotates about the pitch axis, can be detected by a program. Recording of the change in angle starts automatically upon detection of a rotation of the 3DoF handle in the direction of the pitch angle. When the pitch angle of the handle points upward and the included angle between the handle and the preset part of the user currently using the handle meets the first preset range, the interaction triggering operation can be considered received. Assuming the initial position of the handle is horizontal, an upward pitch angle can be understood as rotating the handle vertically upward from the horizontal position. The preset part of the user can be the mouth of the user, and the first preset range can be defined by the game developer according to actual requirements, for example 15 degrees: when the pitch angle of the handle points upward and the included angle between the handle and the vertical direction directly in front of the user's mouth is within the first preset range, for example smaller than or equal to 15 degrees, it can be judged that the interaction triggering operation is received.
Specifically, in this example, the coordinates of the user's mouth may be acquired through a coordinate system established with the mouth as the origin, with the vertical direction directly in front of the mouth as the positive direction of the vertical axis and the direction the mouth faces as the positive direction of the horizontal axis. Assume that the coordinates of the user's mouth are (X_mp, Y_mp, Z_mp), the unit vector of the vertical direction directly in front of the mouth is (X_mf, Y_mf, Z_mf), the handle coordinates are (X_p, Y_p, Z_p), and the unit vector of the handle direction is (X_r, Y_r, Z_r); the angle between the handle and the vertical direction in front of the mouth can then be determined from these coordinates. Referring to fig. 7, the initial position of the handle may be set perpendicular to the direction directly in front of the mouth, that is, the handle may be placed vertically. (It will be appreciated by those skilled in the art that the initial position of the handle may be arranged in other orientations, for example parallel to the direction directly in front of the mouth, i.e., the handle positioned horizontally.) An upward pitch angle of the handle may be understood as the handle rotating in the negative direction along the vertical axis. When the included angle between the rotated handle and the vertical direction in front of the mouth, that is, the included angle between the handle and the positive direction of the longitudinal axis, is smaller than or equal to 15°, as shown in fig. 7B, the input device may be considered to have received the interactive trigger operation that brings the system into the directional interaction state. The formula for the included angle between the handle and the positive direction of the longitudinal axis can be:
$$\theta = \arccos\left(X_r X_{mf} + Y_r Y_{mf} + Z_r Z_{mf}\right)$$
the judgment and detection of the angle can be executed through a program, when a user rotates the handle, the program can detect the included angle between the rotated handle and the vertical direction in the front in real time, and when the included angle meets a first preset range, the operation interface is controlled to be automatically switched into a directional interaction state. When the included angle does not satisfy first preset scope, can detect the included angle always through the circulation, the user rotates the handle simultaneously, until the included angle satisfies first preset scope.
Further, in this example embodiment, the interaction triggering operation may further include:
and when the input device is a six-degree-of-freedom handle, the distance between the input device and the preset part of the user meets a second preset range.
In the present exemplary embodiment, when the input device is a six-degree-of-freedom (6DoF) handle, it may be detected whether the distance between the input device and the preset part of the user satisfies a second preset range. A six-degree-of-freedom handle can detect angle information and position information simultaneously. In this example, whether the position of the 6DoF handle changes can be detected by a program, and whether the handle has moved close to the preset part of the user can be judged. When the handle is detected to move, the change of its position is automatically recorded, and when the distance between the current position of the handle and the preset part of the user meets the second preset range, the interaction triggering operation can be considered received. The preset part of the user can still be the mouth of the user.
Specifically, assuming that the initial position of the handle is a horizontal position, the position of the handle close to the user's mouth may be understood as the user lifting the handle. The handle may be placed horizontally on a table or other object, or held horizontally in the hand of the user. The second preset range may be defined by the game developer through a program according to actual requirements, and may be 0 or a smaller value close to 0, for example.
In this example, again assume that the coordinates of the user's mouth are (X_mp, Y_mp, Z_mp), the unit vector of the vertical direction directly in front of the mouth is (X_mf, Y_mf, Z_mf), the handle coordinates are (X_p, Y_p, Z_p), and the unit vector of the handle direction is (X_r, Y_r, Z_r); the distance between the current position of the handle and the user's mouth can be calculated from these coordinates. When the distance between the current position of the handle and the user's mouth is 0 or close to 0, an interaction trigger operation for bringing the system into the directional interaction state can be considered received, so that the system enters the directional interaction state. The distance formula in the present exemplary embodiment may be:
$$d = \sqrt{(X_p - X_{mp})^2 + (Y_p - Y_{mp})^2 + (Z_p - Z_{mp})^2}$$
the detection, calculation and judgment processes of the position and the distance can be executed through a program, when a user takes up or puts down the handle, the program can detect the current position of the handle in real time and calculate the distance between the current position of the handle and the mouth of the user. When the distance meets a second preset range, controlling the operation interface to be automatically switched into a directional interaction state; in thatWhen the distance does not satisfy the second preset range, the distance can be detected all the time through circulation, and meanwhile, the user adjusts the position of the handle until the distance between the current position of the handle and the mouth of the user satisfies the second preset range.
Neither of the two interactive triggering modes needs to be completed through physical keys on the input device or virtual controls in the operation interface, which effectively simplifies the operation and improves convenience. It should be noted that after the directional interaction state is entered through either of the above manners, the operation interface automatically switches to the operation interface of the directional interaction state, and the virtual object in the operation interface corresponds to the user operating the handle.
In step S320, after the interaction triggering operation is detected, if a recording triggering operation is received, triggering the user to record the voice message and determining whether recording of the voice message is finished.
In this exemplary embodiment, when any one of the interactive trigger operations described in step S310 is detected, it may further be detected whether a recording trigger operation is received. The recording trigger operation enables the user to start recording a voice message. It may be that the input device meets a preset condition, or a recording trigger action set in advance and detected by an auxiliary device; it may be completed by the virtual object, or in combination with a physical key or button on the input device, for example by clicking or long-pressing a function key on the handle, which is not particularly limited in this exemplary embodiment. The recording trigger operation may be detected by a program. After the user starts recording the voice message, whether recording of the voice message is finished can then be detected and judged.
Specifically, in this example embodiment, the recording trigger operation may include:
and determining a target interactive gesture model in a plurality of interactive gesture models displayed on the operation interface according to the position of the visual center point of the virtual object.
In the present exemplary embodiment, the user can change the angle of view, and the scene within the visual field range, by an action of turning the head or the like. Therefore, the virtual object corresponding to the user can utilize the cursor or the light beam correspondingly displayed at the visual center point to perform target selection, and the interaction between the virtual object and a plurality of interactive objects in the virtual reality environment is realized.
In this example, when entering the directional interaction state, a plurality of interaction gesture models for the user to select may be loaded, and when determining the target interaction gesture model through the position where the visual center point of the virtual object stays, it may be understood that a recording trigger operation is received. The interactive gesture model may be understood as a directional speech gesture model, such as a shouting gesture model by hand in the mouth or other action model. Specifically, in a coordinate system of the virtual reality scene, when it is determined that a visual center point of the virtual object coincides with a coordinate of one of the plurality of interactive gesture models on the operation interface, or the visual center point falls within a certain error range of the coordinate of one interactive gesture model, the target interactive gesture model may be determined according to the position of the visual center point of the virtual object. In other exemplary embodiments of the present disclosure, the determination may also be made by determining a dwell time of the visual center point of the virtual object at a certain position of the interactive gesture model, which also belongs to the protection scope of the present disclosure.
In this example, the position of the visual center point of the virtual object may be detected by a ray, and the selected target interaction gesture model may be highlighted. For example, referring to fig. 5, the target interaction gesture model in this example may be a gesture of the user's virtual object shouting with a hand at the mouth in the operation interface, so that the user can clearly distinguish and recognize the recording state, avoiding misoperation. Further, depending on which hand holds the handle, the target interaction gesture model can be determined as the left or right hand of the virtual object lifted and placed at the mouth in a shouting gesture. It should be noted that user actions such as nodding, shaking the head, or blinking will not affect the interactive gesture model. When the recording trigger operation is received, the microphone function can be enabled so that the user starts recording, while the mouth of the virtual object correspondingly performs the recording action; during this process, all of the user's speech content is recorded.
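A sketch of the gaze-based selection (the coordinates, tolerance value, and names are illustrative assumptions):

```python
import numpy as np

GAZE_TOLERANCE = 0.1  # illustrative error range around a model's coordinates

def select_gesture_model(visual_center, gesture_models):
    """Return the interactive gesture model on whose coordinates the visual
    center point rests (within the tolerance), or None if no model is hit.
    gesture_models: iterable of (model_id, anchor_xyz) pairs."""
    p = np.asarray(visual_center, dtype=float)
    for model_id, anchor in gesture_models:
        if np.linalg.norm(p - np.asarray(anchor, dtype=float)) <= GAZE_TOLERANCE:
            return model_id  # recording trigger operation received
    return None
```

A dwell-time variant, as mentioned above, would additionally require the visual center point to remain within the tolerance for a minimum duration before returning the model.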
In this exemplary embodiment, when the input device is a three-degree-of-freedom handle, determining whether the recording of the voice message is finished may include:
and when detecting that the included angle between the input device and the preset part of the user exceeds the first preset range, judging that the recording of the voice message is finished.
In this exemplary embodiment, the recording may be ended in a manner corresponding to the trigger manner of the input device. When the input device is a three-degree-of-freedom handle, whether recording is finished can be judged by detecting whether the angle between the input device and the vertical direction directly in front of the user's mouth exceeds the first preset range; when it does, it can be judged that the recording is finished. The current angle between the input device and the vertical direction in front of the user's mouth may be determined from the current direction of the input device, and the direction of deviation may be the same as or different from that of the first preset range. For example, with the first preset range set to 15 degrees, when the deviation is in the same direction as the trigger and the angle between the input device and the vertical direction in front of the user's mouth exceeds 15 degrees, for example 16 degrees or 35 degrees as shown in fig. 7C, the recording may be ended; this can also be understood as the user putting the handle down, for example placing it on a table. When the direction differs, for example the input device is rotated in any other direction so that it leaves the angle range of the recording state, the recording can also be ended.
In addition, when the input device is a six-degree-of-freedom handle, determining whether the recording of the voice message is finished may include:
and when it is detected that the distance between the input device and the preset part of the user exceeds the second preset range, judging that the recording of the voice message is finished.
In this exemplary embodiment, when the input device is a six-degree-of-freedom handle, it may be judged that recording of the voice message is finished by detecting whether the distance between the input device and the preset part of the user exceeds the second preset range. The recording may be considered over when the distance between the current position of the handle and the user's mouth exceeds the second preset range, for example is much greater than 0. Likewise, the input device may be moved in any direction, as long as the distance between the current position of the handle and the user's mouth falls outside the second preset range. Referring to fig. 8, the distance between the current position of the handle and the user's mouth is close to zero in fig. 8A and greater than zero in fig. 8B, so the recording can be considered finished in the state of fig. 8B.
In addition, in this exemplary embodiment, the determining that the recording of the voice message is finished may further include:
and detecting whether a preset button on the input device receives a preset operation, and judging that the recording of the voice message is finished when the preset button does not receive the preset operation.
In the exemplary embodiment, when recording is performed in combination with a physical key on the input device, it may be judged that recording of the voice message is finished by detecting whether a preset button on the handle receives a preset operation. The preset button here may be a function key, and the preset operation may be understood as a pressing operation acting on the function key; the case where no pressing operation on the function key is detected can be understood as the user releasing the function key on the handle. When the pressing operation on the function key is no longer detected, that is, the handle's function key is released, it can be judged that recording of the voice message is finished. In this example, the degree of pressing force on the function key may be detected by a program; when the pressing force is zero or reaches a preset value, it may be determined that the user has released the function key, so that recording of the voice message is judged finished and subsequent operations are performed.
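Putting the three end-of-recording conditions together (a sketch; the thresholds reuse the illustrative constants above, and the parameter names are assumptions):

```python
FIRST_PRESET_RANGE_DEG = 15.0  # illustrative, as above
SECOND_PRESET_RANGE_M = 0.05   # illustrative, as above

def recording_finished(handle_kind, angle_deg=None, distance_m=None,
                       function_key_pressed=None):
    """Judge that recording is finished when the 3DoF angle leaves the first
    preset range, the 6DoF distance leaves the second preset range, or the
    function key on the handle is released."""
    if handle_kind == "3dof":
        return angle_deg is not None and angle_deg > FIRST_PRESET_RANGE_DEG
    if handle_kind == "6dof":
        return distance_m is not None and distance_m > SECOND_PRESET_RANGE_M
    if handle_kind == "button":
        return function_key_pressed is False  # key released ends the recording
    return False
```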
It should be noted that, after it is determined that recording of the voice message is completed in any manner, in the virtual reality scene presented by the corresponding operation interface, the gesture of the virtual object in the recording state for shouting at the mouth may be correspondingly adjusted to another gesture, which is not particularly limited in this exemplary embodiment.
In step S330, after it is determined that the recording of the voice message is finished, a target interactive object is determined and the voice message is sent to the target interactive object.
In this exemplary embodiment, after it is determined that the recording of the voice message is finished, a target interactive object may be determined, and the recorded voice message may be sent to the determined target interactive object, so as to enter a directional interaction phase, and complete an interaction process between the virtual object and one of the interactive objects. Only one interactive object can be selected at a time, and only voice messages sent between the virtual object and the target interactive object are loaded, so that other irrelevant messages are reduced, and the message transmission efficiency is improved.
Specifically, in this exemplary embodiment, determining a target interaction object and sending the voice message to the target interaction object may include:
judging whether an angle between the current orientation of the virtual object and the current position of the interactive object is larger than a third preset value or not;
and when the angle is smaller than the third preset value, determining the interactive object as a target interactive object and sending the voice message to the target interactive object.
In the present exemplary embodiment, the target selection may be performed by determining whether a distance or an angle between the virtual object and the interactive object satisfies a preset condition. For example, it may be determined whether an angle between the current orientation of the virtual object and the position of the interactive object is greater than a third preset value. The third preset value here may be set by the game developer according to actual requirements, and may be, for example, 15 ° in this example.
In this exemplary embodiment, when the angle between the current orientation of the virtual object and the position of an interactive object is smaller than the third preset value, the virtual object may select the target object within that angle range, and the system sends the voice message recorded by the user to the target interactive object. Here, the user may determine the target interactive object to shout at in advance and keep adjusting the current orientation of the virtual object so that the angle between the current orientation and the position of the selected target interactive object is smaller than the third preset value; alternatively, when the object to interact with is not yet known, the target interactive object can be determined directly as the interactive object for which the angle between the virtual object's current orientation and that object's position is smaller than the third preset value.
Specifically, in the present example, the coordinates of the user's mouth may be obtained from the coordinate system as (X_mp, Y_mp, Z_mp) and the unit vector of the vertical direction directly in front of the mouth as (X_mf, Y_mf, Z_mf), and the coordinates of the interactive object to be shouted at may be determined as (X_p, Y_p, Z_p). First, the distance vector between the user's virtual object and the interactive object is calculated as (X_t, Y_t, Z_t) = (X_p − X_mp, Y_p − Y_mp, Z_p − Z_mp); the formula for the angle between the current orientation of the virtual object and the position of the interactive object may then be:
$$\theta = \arccos\left(\frac{X_t X_{mf} + Y_t Y_{mf} + Z_t Z_{mf}}{\sqrt{X_t^2 + Y_t^2 + Z_t^2}}\right)$$
when the angle between the current orientation of the virtual object and the position of the interactive object is less than 15 degrees, the interactive object corresponding to the current orientation can be determined as the target interactive object, and the virtual object is enabled to send the voice message to the target interactive object.
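A sketch of this target selection (object list, names, and the 15° constant are illustrative; the description does not specify how ties among several in-range objects are broken, so this simply returns the first match):

```python
import numpy as np

THIRD_PRESET_VALUE_DEG = 15.0  # example third preset value from the text

def determine_target(mouth_pos, facing_dir, interactive_objects):
    """Return the first interactive object whose angle to the virtual object's
    current orientation is below the third preset value, or None.
    interactive_objects: iterable of (object_id, position_xyz) pairs."""
    facing = np.asarray(facing_dir, dtype=float)
    facing /= np.linalg.norm(facing)
    for object_id, pos in interactive_objects:
        t = np.asarray(pos, dtype=float) - np.asarray(mouth_pos, dtype=float)
        t /= np.linalg.norm(t)  # distance vector (X_t, Y_t, Z_t), normalized
        angle = np.degrees(np.arccos(np.clip(np.dot(facing, t), -1.0, 1.0)))
        if angle < THIRD_PRESET_VALUE_DEG:
            return object_id
    return None
```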
In the above process, after the target interactive object is determined and receives the voice message sent by the virtual object, only the voice messages between the target interactive object and the virtual object need to be loaded. This avoids loading the voices sent by all players, as in the prior art, and avoids completing the operation through physical keys or virtual controls, which effectively simplifies the operation, achieves effective filtering of voice messages, makes communication faster, improves interaction performance, and improves the privacy of communication between the target interactive object and the virtual object. In addition, by judging the angle between the current orientation of the virtual object and the position of each interactive object, directional interaction among a plurality of objects can be completed accurately.
In addition, in this example embodiment, after the target interaction object receives the voice message, the method may further include:
and storing the voice message sent by the virtual object to the target interactive object, and adding the identifier corresponding to the virtual object to the interactive list of the target interactive object.
In the exemplary embodiment, after the target interaction object receives the voice message, the voice message sent by the virtual object to the target interactive object may be stored in the system, and the identifier corresponding to the virtual object may be added to the interaction list of the target interactive object. The identifier corresponding to the virtual object may be, for example, a head portrait or a nickname. The target interactive object can then select, in the interaction list, the virtual object that sent it voice, so as to listen to previous historical messages again at any time without requiring the virtual object to resend them, which facilitates the interaction process.
In order to help the user identify the object a voice message relates to, in this exemplary embodiment a prompt identifier may further be set on the target interactive object that receives the voice message sent by the virtual object. The prompt identifier may display content such as the duration of the voice message and whether it has been read. A prompt identifier can likewise be set when the virtual object sends a voice message to the target interactive object.
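One possible data shape for the interaction list and the prompt identifier (entirely illustrative; the patent does not specify a storage format):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class VoiceMessage:
    sender_id: str       # identifier of the sending virtual object
    audio: bytes
    duration_s: float
    read: bool = False   # backs the prompt identifier (duration / read state)

@dataclass
class InteractionList:
    """Per-object message history keyed by sender identifier."""
    entries: Dict[str, List[VoiceMessage]] = field(default_factory=dict)

    def store(self, msg: VoiceMessage) -> None:
        # Add the sender's identifier and keep the message for later replay.
        self.entries.setdefault(msg.sender_id, []).append(msg)

    def replay(self, sender_id: str) -> List[VoiceMessage]:
        # Listen to historical messages again without the sender resending.
        return self.entries.get(sender_id, [])
```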
In an exemplary embodiment of the present disclosure, there is also provided a directional interaction device, as shown in fig. 10, the device 1000 may include:
the interactive triggering module 1001 may be configured to detect whether an interactive triggering operation that acts on an input device and causes a relative position between the input device and a preset portion of a user to satisfy a preset condition is received;
the recording module 1002 may be configured to, after detecting the interaction triggering operation, trigger the user to record the voice message and determine whether recording of the voice message is finished if a recording triggering operation is received;
the interaction module 1003 may be configured to determine a target interaction object and send the voice message to the target interaction object after it is determined that the recording of the voice message is finished.
The specific details of each module in the directional interaction device have been described in detail in the corresponding directional interaction method, and therefore are not described herein again.
In an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," "module," or "system."
An electronic device 1100 according to this embodiment of the invention is described below with reference to fig. 11. The electronic device 1100 shown in fig. 11 is only an example and should not bring any limitations to the function and the scope of use of the embodiments of the present invention.
As shown in fig. 11, electronic device 1100 is embodied in the form of a general purpose computing device. The components of the electronic device 1100 may include, but are not limited to: the at least one processing unit 1110, the at least one memory unit 1120, a bus 1130 connecting different system components (including the memory unit 1120 and the processing unit 1110), and a display unit 1140.
Wherein the storage unit stores program code that is executable by the processing unit 1110 to cause the processing unit 1110 to perform steps according to various exemplary embodiments of the present invention as described in the above section "exemplary methods" of the present specification. For example, the processing unit 1110 may perform the steps as shown in fig. 3.
The storage unit 1120 may include a readable medium in the form of a volatile memory unit, such as a random access memory unit (RAM)11201 and/or a cache memory unit 11202, and may further include a read only memory unit (ROM) 11203.
Storage unit 1120 may also include a program/utility 11204 having a set (at least one) of program modules 11205, such program modules 11205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 1130 may be representative of one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 1100 may also communicate with one or more external devices 1170 (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 1100, and/or any devices (e.g., router, modem, etc.) that enable the electronic device 1100 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 1150. Also, the electronic device 1100 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the internet) via the network adapter 1160. As shown, the network adapter 1160 communicates with the other modules of the electronic device 1100 over the bus 1130. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 1100, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, there is also provided a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, aspects of the invention may also be implemented in the form of a program product comprising program code means for causing a terminal device to carry out the steps according to various exemplary embodiments of the invention described in the above section "exemplary methods" of the present description, when said program product is run on the terminal device.
Referring to fig. 12, a program product 1200 for implementing the above method according to an embodiment of the present invention is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Furthermore, the above-described figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the invention, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.

Claims (10)

1. A directional interaction method, applied to a terminal capable of presenting a virtual reality scene with an operation interface including at least a virtual object and an interactive object, characterized by comprising the following steps:
detecting whether an interactive trigger operation which acts on an input device and enables the relative position between the input device and a preset part of a user to meet a preset condition is received;
after the interaction triggering operation is detected, if a recording triggering operation is received, triggering the user to record the voice message and judging whether the recording of the voice message is finished, wherein the recording triggering operation comprises the following steps: determining a target interactive gesture model in a plurality of interactive gesture models displayed on the operation interface according to the position where the visual center point of the virtual object stays;
after the voice message recording is judged to be finished, determining a target interactive object and sending the voice message to the target interactive object;
wherein the interactive triggering operation comprises: when the input equipment is a three-degree-of-freedom handle, the pitch angle of the input equipment is upward, and the included angle between the pitch angle of the input equipment and the preset position of the user meets a first preset range; and when the input equipment is a six-degree-of-freedom handle, the distance between the input equipment and the preset part of the user meets a second preset range.
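For illustration only, the trigger condition of claim 1 reduces to two range checks. The following Python sketch is not part of the claimed method: the Pose type, the helper names, and the threshold values are assumptions, since the claims leave the first and second preset ranges unspecified.

    import math
    from dataclasses import dataclass

    # Assumed values; the claims fix no concrete ranges.
    FIRST_RANGE_DEG = (20.0, 60.0)   # allowed included angle, 3-DoF handle
    SECOND_RANGE_M = (0.0, 0.25)     # allowed handle-to-part distance, 6-DoF handle

    @dataclass
    class Pose:
        pitch_deg: float   # handle pitch angle; positive means pointing upward
        position: tuple    # (x, y, z) of the handle, in metres

    def included_angle_deg(handle_pitch_deg: float, part_pitch_deg: float) -> float:
        # Included angle between the handle's pointing direction and the
        # direction of the user's preset part (e.g. the mouth).
        return abs(handle_pitch_deg - part_pitch_deg)

    def is_interaction_triggered(handle: Pose, part_pos: tuple,
                                 part_pitch_deg: float, dof: int) -> bool:
        if dof == 3:
            # 3-DoF handle: pitch must point upward AND the included angle
            # with the preset part must fall within the first preset range.
            if handle.pitch_deg <= 0.0:
                return False
            ang = included_angle_deg(handle.pitch_deg, part_pitch_deg)
            return FIRST_RANGE_DEG[0] <= ang <= FIRST_RANGE_DEG[1]
        if dof == 6:
            # 6-DoF handle: the handle-to-part distance must fall within
            # the second preset range.
            d = math.dist(handle.position, part_pos)
            return SECOND_RANGE_M[0] <= d <= SECOND_RANGE_M[1]
        return False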
2. The directional interaction method of claim 1, wherein, when the input device is a three-degree-of-freedom handle, judging whether the recording of the voice message is finished comprises:
judging that the recording of the voice message is finished when it is detected that the included angle between the input device and the preset part of the user exceeds the first preset range.
3. The directional interaction method of claim 1, wherein, when the input device is a six-degree-of-freedom handle, judging whether the recording of the voice message is finished comprises:
judging that the recording of the voice message is finished when it is detected that the distance between the input device and the preset part of the user exceeds the second preset range.
4. The directional interaction method of claim 1, wherein judging whether the recording of the voice message is finished further comprises:
detecting whether a preset button on the input device receives a preset operation, and judging that the recording of the voice message is finished when the preset button no longer receives the preset operation.
5. The directional interaction method of claim 1, wherein determining a target interactive object and sending the voice message to the target interactive object comprises:
judging whether the angle between the current orientation of the virtual object and the direction of the current position of the interactive object is larger than a third preset value; and
determining the interactive object as the target interactive object and sending the voice message to the target interactive object when the angle is smaller than the third preset value.
6. The directional interaction method of claim 1, characterized in that the method further comprises:
storing the voice message sent by the virtual object to the target interactive object, and adding an identifier corresponding to the virtual object to an interaction list of the target interactive object.
7. The directional interaction method of any one of claims 1 to 6, characterized in that the method further comprises:
setting a prompt identifier for the target interactive object to which the virtual object has sent the voice message.
8. A directional interaction device, applied to a terminal capable of presenting a virtual reality scene that comprises at least a virtual object and an operation interface of an interactive object, characterized by comprising:
an interactive trigger module, configured to detect whether an interactive trigger operation is received, the operation acting on an input device and causing the relative position between the input device and a preset part of a user to meet a preset condition;
a recording module, configured to trigger the user to record a voice message and judge whether the recording of the voice message is finished if a recording trigger operation is received after the interactive trigger operation is detected, wherein the recording trigger operation comprises: determining a target interactive gesture model among a plurality of interactive gesture models displayed on the operation interface according to the position at which the visual center point of the virtual object stays;
an interaction module, configured to determine a target interactive object and send the voice message to the target interactive object after the recording of the voice message is judged to be finished;
wherein the interactive trigger operation comprises: when the input device is a three-degree-of-freedom handle, the pitch angle of the input device points upward and the included angle between the input device and the preset part of the user falls within a first preset range; and when the input device is a six-degree-of-freedom handle, the distance between the input device and the preset part of the user falls within a second preset range.
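The recording trigger operation recited in claims 1 and 8 (the visual center point dwelling on one of several displayed interactive gesture models) amounts to a dwell timer. A minimal sketch, assuming hit-testing happens elsewhere and an illustrative dwell threshold, neither of which the claims specify:

    import time

    DWELL_SECONDS = 1.0   # assumed dwell threshold

    class GazeDwellSelector:
        # Selects the interactive gesture model on which the visual
        # center point stays long enough.
        def __init__(self):
            self._model_id = None
            self._since = 0.0

        def update(self, hovered_model_id, now=None):
            # Feed the model currently under the visual center point
            # (or None); returns a model id once the gaze has dwelt
            # for DWELL_SECONDS, otherwise None.
            now = time.monotonic() if now is None else now
            if hovered_model_id != self._model_id:
                self._model_id, self._since = hovered_model_id, now
                return None
            if self._model_id is not None and now - self._since >= DWELL_SECONDS:
                selected = self._model_id
                self._model_id = None   # reset so the selection fires once
                return selected
            return None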
9. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the directional interaction method of any one of claims 1 to 7.
10. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the directional interaction method of any one of claims 1 to 7 via execution of the executable instructions.
CN201710804588.XA 2017-09-08 2017-09-08 Directional interaction method and device, electronic equipment and storage medium Active CN107562201B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710804588.XA CN107562201B (en) 2017-09-08 2017-09-08 Directional interaction method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN107562201A CN107562201A (en) 2018-01-09
CN107562201B true CN107562201B (en) 2020-07-07

Family

ID=60980186

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710804588.XA Active CN107562201B (en) 2017-09-08 2017-09-08 Directional interaction method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN107562201B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110431513B (en) * 2018-01-25 2020-11-27 腾讯科技(深圳)有限公司 Media content transmitting method, device and storage medium
CN108724203A (en) * 2018-03-21 2018-11-02 北京猎户星空科技有限公司 A kind of exchange method and device
CN108671539A (en) * 2018-05-04 2018-10-19 网易(杭州)网络有限公司 Target object exchange method and device, electronic equipment, storage medium
CN108744529A (en) * 2018-05-24 2018-11-06 网易(杭州)网络有限公司 Point-to-point Communication Method in game and equipment
CN108833367A (en) * 2018-05-25 2018-11-16 链家网(北京)科技有限公司 Transmission of speech information method and device in virtual reality scenario
CN109460148A (en) * 2018-10-24 2019-03-12 北京实境智慧科技有限公司 A kind of VR voice interactive system and its exchange method
CN111475022A (en) * 2020-04-03 2020-07-31 上海唯二网络科技有限公司 Method for processing interactive voice data in multi-person VR scene
CN111736689A (en) * 2020-05-25 2020-10-02 苏州端云创新科技有限公司 Virtual reality device, data processing method, and computer-readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105975061A (en) * 2016-04-26 2016-09-28 乐视控股(北京)有限公司 Control method and apparatus for virtual reality scene as well as handle
CN105975057A (en) * 2016-04-25 2016-09-28 乐视控股(北京)有限公司 Multi-interface interaction method and device
CN106686255A (en) * 2017-03-01 2017-05-17 广东小天才科技有限公司 Mobile terminal and method for sending voice message
CN106774830A (en) * 2016-11-16 2017-05-31 网易(杭州)网络有限公司 Virtual reality system, voice interactive method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102090755B1 (en) * 2013-07-02 2020-03-19 삼성전자주식회사 Method for controlling function and an electronic device thereof

Similar Documents

Publication Publication Date Title
CN107562201B (en) Directional interaction method and device, electronic equipment and storage medium
CN107913520B (en) Information processing method, information processing device, electronic equipment and storage medium
US10765947B2 (en) Visual display method for compensating sound information, computer readable storage medium and electronic device
US10434418B2 (en) Navigation and interaction controls for three-dimensional environments
US10807002B2 (en) Visual method and apparatus for compensating sound information, storage medium and electronic device
CN108465238B (en) Information processing method in game, electronic device and storage medium
WO2017054453A1 (en) Information processing method, terminal and computer storage medium
US7427980B1 (en) Game controller spatial detection
KR20210132175A (en) Method for controlling virtual objects, and related apparatus
CN107977141B (en) Interaction control method and device, electronic equipment and storage medium
CN108037888B (en) Skill control method, skill control device, electronic equipment and storage medium
CN110090444B (en) Game behavior record creating method and device, storage medium and electronic equipment
CN107329690B (en) Virtual object control method and device, storage medium and electronic equipment
CN108355347B (en) Interaction control method and device, electronic equipment and storage medium
CN109876439A (en) Game picture display methods and device, storage medium, electronic equipment
CN108776544B (en) Interaction method and device in augmented reality, storage medium and electronic equipment
US10311715B2 (en) Smart device mirroring
CN108355352B (en) Virtual object control method and device, electronic device and storage medium
CN116459506A (en) Game object selection method and device
TW201901362A (en) Method and device for inputting password in virtual reality scene
WO2017177436A1 (en) Method and apparatus for locking object in list, and terminal device
US20170168582A1 (en) Click response processing method, electronic device and system for motion sensing control
CN110075534B (en) Real-time voice method and device, storage medium and electronic equipment
CN110215686B (en) Display control method and device in game scene, storage medium and electronic equipment
KR20210008423A (en) Application partition processing method, device and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant