CN106774830B - Virtual reality system, voice interaction method and device - Google Patents
- Publication number
- CN106774830B CN106774830B CN201611027164.9A CN201611027164A CN106774830B CN 106774830 B CN106774830 B CN 106774830B CN 201611027164 A CN201611027164 A CN 201611027164A CN 106774830 B CN106774830 B CN 106774830B
- Authority
- CN
- China
- Prior art keywords
- virtual character
- voice
- voice message
- virtual
- interactive object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/165—Management of the audio stream, e.g. setting of volume, audio stream path
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/012—Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- General Health & Medical Sciences (AREA)
- User Interface Of Digital Computer (AREA)
- Information Transfer Between Computers (AREA)
Abstract
The disclosure relates to a virtual reality system and to a voice interaction method and apparatus. The voice interaction method comprises the following steps: detecting whether the sight center point of a virtual character in a virtual reality scene falls on an interactive object that has sent a voice message to the virtual character; when the sight center point of the virtual character is judged to fall on such an interactive object, taking that interactive object as the attention target, and taking the other interactive objects within the virtual character's visual field that have sent voice messages to the virtual character as non-attention targets; and increasing the playing volume of the voice message sent to the virtual character by the attention target while reducing the playing volume of the voice messages sent to the virtual character by the non-attention targets. The present disclosure can improve the user's sense of immersion.
Description
Technical Field
The present disclosure relates to the field of virtual reality technologies, and in particular, to a voice interaction method, a voice interaction apparatus, and a virtual reality system including the voice interaction apparatus.
Background
VR (virtual reality) is a technology that comprehensively uses a computer graphics system and various display and control interface devices to provide a sense of immersion in an interactive three-dimensional environment generated on a computer. Virtual reality technology is currently widely applied in games, covering genres such as shooting, puzzle solving, and role playing. In addition, a voice system may be embedded in a virtual reality game application; such a system can receive and send voice messages or carry out real-time voice chat, making it convenient for the user to socialize or coordinate during gameplay.
In such voice systems, a typical way to send a voice message is: the user presses and holds a physical function key or a virtual control to record, and on release the voice message is sent (or the sending of the current voice message is cancelled). This procedure is cumbersome and inconvenient to operate. A typical way to play a voice message is: when a voice message is received, the user clicks the currently received message via a physical function key or virtual control, and the message is played. The user therefore has to click every received voice message, which is complex rather than simple. Alternatively, the user may configure voice messages in advance to play automatically. With automatic playback, however, voice messages cannot be filtered, and a given voice message cannot be replayed automatically; moreover, when a large number of voice messages are received, playback may be delayed or may disturb the progress of the game.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
An object of the present disclosure is to provide a voice interaction method, a voice interaction apparatus, and a virtual reality system including the apparatus, which overcome, at least to some extent, one or more of the problems due to the limitations and disadvantages of the related art.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to a first aspect of the present disclosure, there is provided a voice interaction method, including:
detecting whether a sight center point of a virtual character in a virtual reality scene falls on an interactive object which sends a voice message to the virtual character;
when the sight center point of the virtual character is judged to be located on an interactive object which sends a voice message to the virtual character, the interactive object is used as an attention target, and other interactive objects which send the voice message to the virtual character in the visual field range of the virtual character are used as non-attention targets;
and increasing the playing volume of the voice message sent to the virtual character by the attention target, and reducing the playing volume of the voice message sent to the virtual character by the non-attention target.
In an exemplary embodiment of the present disclosure, the voice interaction method further includes:
judging whether an interactive object for sending a voice message to the virtual character exists in the visual field range of the virtual character;
and when judging that an interactive object for sending the voice message to the virtual character exists in the visual field range of the virtual character, automatically playing the voice message sent to the virtual character by the interactive object.
In an exemplary embodiment of the present disclosure, reducing the play volume of the voice message sent by the non-attention target to the virtual character includes:
acquiring coordinates of the attention target and a non-attention target in the virtual reality scene and calculating the distance between the non-attention target and the attention target according to the coordinates;
and obtaining a volume attenuation coefficient according to the distance between the non-attention target and the attention target, and reducing the playing volume of the voice message sent to the virtual character by the non-attention target according to the volume attenuation coefficient.
In an exemplary embodiment of the present disclosure, the volume attenuation coefficient is calculated according to the following formula:
K=a*S*S+b
wherein K is the volume attenuation coefficient, S is the distance between the non-attention target and the attention target, a is a constant greater than 0, and b is a constant greater than or equal to 0.
In an exemplary embodiment of the present disclosure, the voice interaction method further includes:
and when the sight center point of the virtual character is judged not to fall on the interactive object for sending the voice message to the virtual character, adjusting the playing volume of the voice message sent to the virtual character by each interactive object according to the distance between each interactive object for sending the voice message to the virtual character in the visual field range of the virtual character and the virtual character.
In an exemplary embodiment of the present disclosure, the voice interaction method further includes:
after the voice sent to the virtual character by the attention target is played, detecting whether the sight center point of the virtual character still falls on the attention target;
and when the fact that the sight center point of the virtual character still falls on the attention target is detected, the voice message sent to the virtual character by the attention target is replayed.
In an exemplary embodiment of the present disclosure, the voice interaction method further includes:
and setting a prompt identifier for each interactive object sending the voice message to the virtual character within the visual field range of the virtual character.
In an exemplary embodiment of the present disclosure, the voice interaction method further includes:
detecting whether a recording trigger operation is received or not, and entering a recording preparation mode when the recording trigger operation is detected to be received;
detecting whether a sight center point of the virtual character falls on the interactive object or not in the recording preparation mode;
and when the sight center point of the virtual character is detected to fall on one interactive object, taking the interactive object as a voice receiver and starting to record a voice message to be sent.
In an exemplary embodiment of the present disclosure, the voice interaction method further includes:
after the voice message to be sent is recorded, detecting whether the sight center point of the virtual character still falls on the voice receiver;
and when the fact that the sight center point of the virtual character still falls on the voice receiver is detected, the voice message to be sent is sent to the voice receiver.
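As an illustration only (the embodiments above do not prescribe any implementation, and all names below are hypothetical), the gaze-gated record-and-send flow described in this embodiment might be sketched as:

```python
def record_and_send(gaze_on_receiver_at_start: bool,
                    gaze_on_receiver_at_end: bool) -> str:
    """Sketch of the described flow: recording starts only when the sight
    center point falls on an interactive object (the voice receiver);
    after recording, the message is sent only if the gaze still rests on
    that receiver. The disclosure does not specify what happens when the
    gaze has moved away; "discarded" here is an assumption."""
    if not gaze_on_receiver_at_start:
        return "idle"        # gaze never fell on a receiver: no recording
    if gaze_on_receiver_at_end:
        return "sent"        # gaze held through recording: send the message
    return "discarded"       # gaze moved away: assumed not sent
```

This keeps the entire send flow hands-free, consistent with the stated goal of avoiding physical function keys and virtual controls.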
In an exemplary embodiment of the present disclosure, the voice interaction method further includes:
and before the voice message to be sent is recorded, keeping the voice receiver within the visual field range of the virtual character.
According to a second aspect of the present disclosure, there is provided a voice interaction apparatus, comprising:
a first sight line detection module, used for detecting whether the sight center point of a virtual character in a virtual reality scene falls on an interactive object which sends a voice message to the virtual character;
a target setting module, used for taking an interactive object as the attention target when it is detected that the sight center point of the virtual character falls on that interactive object which sends a voice message to the virtual character, and for taking other interactive objects which send voice messages to the virtual character within the visual field range of the virtual character as non-attention targets;
and a volume adjusting module, used for increasing the playing volume of the voice message sent to the virtual character by the attention target and reducing the playing volume of the voice message sent to the virtual character by the non-attention target.
In an exemplary embodiment of the present disclosure, the voice interaction apparatus further includes:
the information receiving module is used for receiving voice messages sent to the virtual character by each interactive object in the visual field range of the virtual character;
and the information playing control module is used for playing the voice message sent to the virtual character by the interactive object when judging that the interactive object for sending the voice message to the virtual character exists in the visual field range of the virtual character.
In an exemplary embodiment of the present disclosure, the voice interaction apparatus further includes:
a first distance acquisition unit, configured to acquire coordinates of the attention target and a non-attention target in the virtual reality scene and calculate a distance between the non-attention target and the attention target according to the coordinates;
and the volume attenuation control unit is used for obtaining a volume attenuation coefficient according to the distance between the non-attention target and the attention target, and reducing the playing volume of the voice message sent to the virtual character by the non-attention target according to the volume attenuation coefficient.
In an exemplary embodiment of the present disclosure, the volume attenuation coefficient is calculated according to the following formula:
K=a*S*S+b
wherein K is the volume attenuation coefficient, S is the distance between the non-attention target and the attention target, a is a constant greater than 0, and b is a constant greater than or equal to 0.
In an exemplary embodiment of the present disclosure, the voice interaction apparatus further includes:
and the second distance acquisition module is used for acquiring the distance between each interactive object for sending the voice message to the virtual character within the visual field range of the virtual character and the virtual character when the sight center point of the virtual character is judged not to fall on the interactive object for sending the voice message to the virtual character, so as to adjust the playing volume of the voice message sent to the virtual character by each interactive object.
In an exemplary embodiment of the present disclosure, the voice interaction apparatus further includes:
the second sight line detection module is used for detecting whether the sight line central point of the virtual character still falls on the attention target or not after the voice sent by the attention target to the virtual character is played;
and a circulating playing control module, used for replaying the voice message sent to the virtual character by the attention target when it is detected that the sight center point of the virtual character still falls on the attention target.
In an exemplary embodiment of the present disclosure, the voice interaction apparatus further includes:
and the identification setting module is used for setting a prompt identification for each interactive object which sends the voice message to the virtual character within the visual field range of the virtual character.
In an exemplary embodiment of the present disclosure, the voice interaction apparatus further includes:
the recording trigger module is used for detecting whether a recording trigger operation is received or not and entering a recording preparation mode when the recording trigger operation is detected to be received;
the third sight line detection module is used for detecting whether a sight line central point of the virtual character falls on the interactive object or not in the recording preparation mode;
and the recording module is used for taking the interactive object as a voice receiver and starting recording the voice message to be sent when the fact that the sight center point of the virtual character falls on the interactive object is detected.
In an exemplary embodiment of the present disclosure, the voice interaction apparatus further includes:
a fourth sight line detection module, used for detecting whether the sight center point of the virtual character still falls on the voice receiver after the voice message to be sent is recorded;
and an information sending module, used for sending the voice message to be sent to the voice receiver when it is detected that the sight center point of the virtual character still falls on the voice receiver.
In an exemplary embodiment of the present disclosure, the voice interaction apparatus further includes:
and a position control module, used for keeping the voice receiver within the visual field range of the virtual character before the voice message to be sent is recorded.
According to a third aspect of the present disclosure, a virtual reality system is provided, which includes the voice interaction apparatus.
In the voice interaction method provided by the embodiments of the present disclosure, the interactive object that has sent a voice message to the virtual character and on which the sight center point of the virtual character falls is taken as the attention target, while the other interactive objects within the visual field that have sent voice messages to the virtual character are taken as non-attention targets; the playing volume of the voice message sent by the attention target is increased, and the playing volume of the voice messages sent by the non-attention targets is reduced. The playing volume of voice messages is thus adjusted automatically according to the sight center point of the virtual character controlled by the user, with no need for traditional physical function keys or virtual controls, which effectively simplifies operation. Meanwhile, on the one hand, reducing the playing volume of the non-attention targets' voice messages effectively filters out the voice messages the user is not paying attention to; on the other hand, combining the user's visual focus with playback volume control simulates a real chat scene more faithfully, deepens the user's immersion in the game, and can greatly improve the user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty.
FIG. 1 schematically illustrates a flow chart of a method of voice interaction in an exemplary embodiment of the present disclosure;
FIG. 2 is a schematic diagram illustrating a scene in which a center point of a virtual character's line of sight falls on an interactive object in an exemplary embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating a scenario in which a center point of a virtual character's gaze does not fall on an interactive object in an exemplary embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating a scenario where a virtual character triggers a recording function in an exemplary embodiment of the disclosure;
FIG. 5 schematically illustrates a simulated block diagram of a voice interaction device in an exemplary embodiment of the present disclosure;
fig. 6 schematically illustrates a simulated block diagram of a volume adjustment device of a voice interaction device in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The present exemplary embodiment first provides a voice interaction method that may be applied to a virtual reality system. The virtual reality system may consist of an optical structure and a display system, where the display system is connected to an external virtual reality engine to receive the display content that engine processes and then presents a virtual reality scene to the user through the optical structure; alternatively, it may comprise only an optical structure, with the display system and virtual reality engine provided by an external device such as a smartphone. That is, the virtual reality system to which the voice interaction method of this exemplary embodiment is applied is not particularly limited. For example, the voice interaction method can be applied to shooting, puzzle-solving, role-playing, or social game applications in a virtual reality system. Referring to fig. 1, the voice interaction method may include the following steps:
s1, detecting whether a sight center point of a virtual character in a virtual reality scene falls on an interactive object which sends a voice message to the virtual character;
s2, when the sight center point of the virtual character is judged to be located on an interactive object for sending the voice message to the virtual character, the interactive object is used as an attention target, and other interactive objects for sending the voice message to the virtual character in the visual field range of the virtual character are used as non-attention targets; and
S3, increasing the playing volume of the voice message sent to the virtual character by the attention target, and reducing the playing volume of the voice message sent to the virtual character by the non-attention target.
In the voice interaction method provided by this exemplary embodiment, the playing volume of voice messages is automatically adjusted according to the sight center point of the virtual character controlled by the user, without using traditional physical function keys or virtual controls, so operation is effectively simplified. Meanwhile, on the one hand, reducing the playing volume of the non-attention targets' voice messages effectively filters out the voice messages the user is not paying attention to; on the other hand, combining the user's visual focus with playback volume control simulates a real chat scene more faithfully and deepens the user's immersion in the game, so the user experience can be greatly improved.
Hereinafter, each step of the voice interaction method in the present exemplary embodiment will be described in more detail with reference to fig. 2 to 5.
In step S1, it is detected whether a center point of a line of sight of a virtual character in a virtual reality scene falls on an interactive object that transmits a voice message to the virtual character.
In this exemplary embodiment, the user may be immersed in the virtual reality environment with an auxiliary device such as a head-mounted display, VR glasses, or one or more displays mounted on one or more surfaces at a distance from the user. In the virtual reality environment, the user can change the viewing angle and the scene within the visual field by turning the head or similar actions. The user typically takes a first-person perspective in the virtual environment: the user's specific operations are mapped onto a virtual character, and the virtual character can select a target with a cursor or light beam displayed at the sight center point, thereby interacting with interactive objects in the virtual reality environment.
In the coordinate system of the virtual reality environment, when the sight center point of the virtual character is judged to coincide with the coordinates of an interactive object within the field of view, or to lie within a certain range of those coordinates, the sight center point of the virtual character is judged to fall on that interactive object. In other exemplary embodiments of the present disclosure, the dwell time may also be taken into account to avoid erroneous judgments; this also falls within the protection scope of the present disclosure.
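As a minimal sketch of such a hit test, assuming 2D coordinates, a hit radius, and a dwell-time threshold (none of which the disclosure fixes; all names are illustrative):

```python
import math

def gaze_hits(gaze_point, obj_pos, radius=1.0, dwell_s=0.0, min_dwell_s=0.3):
    """Return True when the sight center point lies within `radius` of the
    interactive object's coordinates AND has dwelled there at least
    `min_dwell_s` seconds (the dwell check guards against the erroneous
    judgments mentioned in the text). Radius and threshold are assumed
    defaults, not values from the disclosure."""
    dx = gaze_point[0] - obj_pos[0]
    dy = gaze_point[1] - obj_pos[1]
    within_range = math.hypot(dx, dy) <= radius
    return within_range and dwell_s >= min_dwell_s
```

A 3D variant would add a z term to the distance, or test a ray from the camera against the object's bounding volume.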
In order to facilitate the user to determine the interactive object for sending the voice message to the virtual character, in this exemplary embodiment, a prompt identifier may be further set for each interactive object for sending the voice message to the virtual character within the visual field of the virtual character. Referring to the scenes shown in fig. 2 and fig. 3, within the visual field of the virtual character, for the interactive objects sending voice messages to the virtual character, such as the interactive objects player a, player B, etc., in the figure, a voice message prompt identifier is provided, which can play a role of reminding the user. The prompt identifier may display, for example: the duration of the voice message, whether the information has been read, etc.
In step S2, when it is determined that the center point of the line of sight of the virtual character falls on an interactive object that transmits a voice message to the virtual character, the interactive object is set as an attention target and other interactive objects that transmit a voice message to the virtual character within the visual field of the virtual character are set as non-attention targets.
For example, referring to the scenario shown in fig. 2, a plurality of interactive objects exist within the virtual character's field of view in the virtual reality environment, and each interactive object sends a voice message to the virtual character. At this moment, the cursor at the sight center point of the virtual character falls on interactive object player A, who can then be taken as the attention target; at the same time, players B and C within the virtual character's field of view are set as non-attention targets.
In addition, in this exemplary embodiment, the voice interaction method may further include the following steps: firstly, judging whether an interactive object for sending a voice message to the virtual character exists in the visual field range of the virtual character; and secondly, when judging that an interactive object for sending the voice message to the virtual character exists in the visual field range of the virtual character, automatically playing the voice message sent to the virtual character by the interactive object.
For example, referring to the scenarios shown in fig. 2 and fig. 3, a plurality of interactive objects exist in the current visual field of the virtual character; when an interactive object is detected sending a voice message to the virtual character, for example when player A and player B in the figures each send voice messages, those voice messages are automatically played. By automatically playing the voice messages sent by interactive objects within the virtual character's visual field, the user can receive and play voice messages without operating physical function keys or virtual controls, which simplifies operation during gameplay; it also matches the way people communicate instantly in a real environment and simulates a real chat scene more faithfully. On its own, however, this does not allow voice messages to be filtered, and when a large number of voice messages are received, playback may be delayed or may disturb the progress of the game.
Based on the above, in step S3, the playback volume of the voice message sent by the attention target to the virtual character is increased, and the playback volume of the voice message sent by the non-attention target to the virtual character is decreased.
For example, the user may set the playing volume of the attention target's voice message to a certain value in advance, or set it to the maximum playing volume among all voice messages in the virtual character's current visual field; meanwhile, the user may also set the playing volume of the non-attention targets' voice messages to a certain value in advance, or set it to the minimum volume among all sounds in the virtual character's current visual field. Once the attention target and non-attention targets within the virtual character's visual field have been determined, the currently configured playing volumes are applied.
Specifically, referring to the scenario shown in fig. 2, a plurality of interactive objects exist in the current visual field of the virtual character; player A is currently judged to be the attention target, and interactive objects such as player B and player C are non-attention targets. At this point, the playing volume of the voice message sent to the virtual character by player A is automatically increased to a predetermined volume value, and the playing volume of the voice messages sent by non-attention targets such as player B is reduced to a predetermined value. The user can thereby select among, and filter, the voice messages sent by interactive objects; at the same time, this matches the communication habits of real chat scenes, deepens immersion in the virtual reality environment, and improves the user experience.
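The volume assignment of steps S2 and S3 can be sketched as follows; the particular volume values are illustrative assumptions, since the disclosure leaves them to user configuration:

```python
def adjust_volumes(objects_sending_voice, focused_id,
                   focus_volume=1.0, background_volume=0.2):
    """Given the ids of interactive objects currently sending voice
    messages and the id the sight center point falls on (None if it
    falls on no sender), return a playback-volume map: the attention
    target is raised to `focus_volume`, all non-attention targets are
    lowered to `background_volume`. Both defaults are assumptions."""
    volumes = {}
    for obj_id in objects_sending_voice:
        if obj_id == focused_id:
            volumes[obj_id] = focus_volume       # attention target
        else:
            volumes[obj_id] = background_volume  # non-attention target
    return volumes
```

In the fig. 2 scenario this would yield, e.g., `adjust_volumes(["A", "B", "C"], "A")` with player A at full volume and players B and C attenuated.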
In this exemplary embodiment, the above-mentioned reducing the playing volume of the voice message sent by the non-attention target to the virtual character may include:
acquiring coordinates of the attention target and the non-attention targets in the virtual reality scene and calculating, according to those coordinates, the distance between each non-attention target and the attention target (namely, the distance between the non-attention target and the sight center point of the virtual character); and obtaining a volume attenuation coefficient according to the distance between the non-attention target and the attention target, and reducing the playing volume of the voice message sent by the non-attention target to the virtual character according to the volume attenuation coefficient.
In this exemplary embodiment, the volume attenuation coefficient may be calculated according to the following formula:
K=a*S*S+b
wherein K is the volume attenuation coefficient, S is the distance between the non-attention target and the attention target, a is a constant greater than 0, and b is a constant greater than or equal to 0.
For example, if a is 3 and b is 20, then K = 3*S*S + 20; that is, the voice of a non-attention target at a distance of 1 unit is attenuated by 23 dB relative to the maximum voice volume.
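The formula and the worked example can be checked directly in code; this sketch assumes, as the example suggests, that K is applied as a reduction in decibels from the maximum voice volume:

```python
def attenuation_coefficient(distance, a=3.0, b=20.0):
    """Volume attenuation coefficient K = a*S*S + b (a > 0, b >= 0)."""
    if a <= 0 or b < 0:
        raise ValueError("require a > 0 and b >= 0")
    return a * distance * distance + b

def attenuated_volume_db(max_volume_db, distance, a=3.0, b=20.0):
    """Playing volume of a non-attention target, K dB below the maximum."""
    return max_volume_db - attenuation_coefficient(distance, a, b)
```

With a = 3 and b = 20, a non-attention target 1 unit from the attention target is attenuated by 3*1*1 + 20 = 23 dB.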
In addition, in order to optimize the voice playing in other situations, in other exemplary embodiments of the present disclosure, the voice interaction method described above may further include the following steps:
S4, when it is determined that the sight center point of the virtual character does not fall on any interactive object that sends a voice message to the virtual character, adjusting the playing volume of the voice message sent by each such interactive object to the virtual character according to the distance between that interactive object and the virtual character within the visual field range of the virtual character.
Referring to the scenario shown in fig. 3, a plurality of interactive objects that have sent voice messages, such as player A and player B, exist in the current visual field of the virtual character, but the sight center point of the virtual character does not fall on any of them; it falls instead on the sky, the ground, or some other object in the current visual field. That is, only non-attention objects exist in the current visual field and no attention target exists, so the playing volume of each voice message is adjusted according to the distance between the virtual character and the corresponding non-attention object. For example, if, in the coordinate system of the virtual reality system, the distance between the coordinates of the virtual character and those of player A is smaller than the distance between the coordinates of the virtual character and those of player B, the playing volume of player A's voice message can be controlled to be larger than that of player B's.
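Step S4 can be sketched as follows; the linear falloff and its coefficient are assumptions made for illustration, since the disclosure only requires that nearer senders play louder:

```python
import math

def volumes_by_distance(avatar_pos, senders, max_volume=1.0, falloff=0.1):
    """Step S4: when no attention target exists, scale each sender's
    playing volume by its distance from the virtual character.

    `senders` maps a sender name to its (x, y, z) coordinates in the
    virtual reality coordinate system; volumes are clamped at zero.
    """
    return {
        name: max(0.0, max_volume - falloff * math.dist(avatar_pos, pos))
        for name, pos in senders.items()
    }
```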
In other exemplary embodiments of the present disclosure, the voice interaction method described above may further include:
S5, after the voice message sent by the attention target to the virtual character has been played, detecting whether the sight center point of the virtual character still falls on the attention target; and when it is detected that the sight center point of the virtual character still falls on the attention target, playing again the voice message sent by the attention target to the virtual character.
More specifically, referring to the scenario shown in fig. 2, when the sight center point of the virtual character falls on player A, player A is taken as the attention target and the other interactive objects within the visual field of the virtual character are taken as non-attention targets; the playing volume of the voice message of the attention target, player A, is then increased to a preset value, and the playing volume of the voice messages of the non-attention targets within the visual field of the virtual character is reduced to a preset value. After the voice message of player A has been played, the coordinates of the sight center point of the virtual character are detected: when the sight center point still falls on player A, the voice message sent by player A to the virtual character is controlled to play again; when it is detected that the sight center point of the virtual character has left player A, step S1 is executed to perform a new detection. In this way, the user can screen voice messages and listen to them repeatedly, avoiding missing a voice message sent by the attention target to the virtual character.
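The replay decision of step S5 reduces to a single gaze check once playback ends; the callback names below are illustrative assumptions, not part of the disclosure:

```python
def after_playback(gaze_target, attention_target, replay, redetect):
    """Step S5: when playback of the attention target's voice message
    ends, replay it if the sight center point still rests on the
    attention target; otherwise return to detection step S1."""
    if gaze_target is attention_target:
        replay()
        return "replayed"
    redetect()
    return "redetected"
```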
In order to implement the recording and voice information sending functions of the virtual character, in this exemplary embodiment, the voice interaction method may further include steps S21 to S23:
and S21, detecting whether a recording trigger operation is received or not, and entering a recording preparation mode when the recording trigger operation is detected to be received.
For example, the recording trigger operation may be implemented by setting a recording trigger area and determining that the recording operation is triggered when the sight center point of the virtual character is detected to fall within that area. Alternatively, the user may preset a recording trigger action that is detected through auxiliary equipment, for example a preset action performed several times in succession; when that action is detected, the recording operation is determined to be triggered. The recording trigger operation may also be triggered by an external device, and so on; this exemplary embodiment places no particular limitation on this.
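The trigger-area variant can be sketched as a simple point-in-region test; the rectangular region is an illustrative assumption, since the disclosure does not fix the shape of the trigger area:

```python
def gaze_triggers_recording(gaze_point, trigger_area):
    """Return True when the sight center point falls within the recording
    trigger area, given here as a screen-space rectangle
    (x_min, y_min, x_max, y_max)."""
    x, y = gaze_point
    x_min, y_min, x_max, y_max = trigger_area
    return x_min <= x <= x_max and y_min <= y <= y_max
```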
S22, in the recording preparation mode, detecting whether the sight center point of the virtual character falls on an interactive object. This detection step is similar to step S1 described above and is therefore not repeated here.
S23, when it is detected that the sight center point of the virtual character falls on an interactive object, taking that interactive object as the voice receiver and starting to record the voice message to be sent.
Referring to fig. 4, after it is detected that a recording trigger operation has been received, the recording preparation mode is entered. In the recording preparation mode, when it is detected that the sight center point of the virtual character falls on an interactive object, here the player 123, the player 123 is taken as the voice receiver and recording of the voice message to be sent is started at the same time. Because the receiver of the voice message is selected using the sight center point of the virtual character, the operation is completed without operating a physical key or a virtual control, which effectively simplifies operation. This also conforms to the way people communicate in a real environment, can simulate a real chat scene more faithfully, and deepens the user's immersion in the virtual environment.
In order to facilitate the sending of the voice message, in this exemplary embodiment, the voice interaction method described above may further include:
s24, after the voice message to be sent is recorded, detecting whether the sight center point of the virtual character still falls on the voice receiver; and when detecting that the sight center point of the virtual character still falls on the voice receiver, sending the voice message to be sent to the voice receiver.
Specifically, referring to the scenario shown in fig. 4, after the voice message to be sent has been recorded, it is detected whether the sight center point of the virtual character still stays on the player 123. When it is detected that the sight center point still stays on the player 123, the recorded voice message to be sent is sent to the player 123; when it is detected that the sight center point of the virtual character is no longer on the player 123, the sending of the voice message to be sent is cancelled.
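The send-or-cancel decision of step S24 can be sketched in the same style; the `send` and `cancel` callbacks are assumptions standing in for the system's actual message transport:

```python
def finish_recording(gaze_target, receiver, send, cancel):
    """Step S24: after recording ends, send the voice message if the
    sight center point still rests on the chosen receiver; otherwise
    cancel sending."""
    if gaze_target is receiver:
        send(receiver)
        return True
    cancel()
    return False
```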
After the voice receiver has been determined, the sending of the voice message to be sent is decided by detecting whether the sight center point of the virtual character stays on the voice receiver. On the one hand, this reduces the user's manual operation of physical buttons or virtual controls; on the other hand, it makes the user's progress through the game in the virtual environment smoother and improves the user experience.
Of course, in other exemplary embodiments of the present disclosure, other control actions may be used to control the sending of the voice message to be sent, for example nodding to confirm sending a voice message, shaking the head to cancel sending, and the like; these also fall within the protection scope of the present disclosure. Meanwhile, a prompt identifier may be set when the virtual character sends a voice message to an interactive object within its visual field range.
In addition, in this exemplary embodiment, the voice interaction method described above may further include:
and before the voice message to be sent is recorded, keeping the voice receiver located within the visual field range of the virtual character.
During the game, an interactive object may move. When the virtual character is triggered to send a voice message to the interactive object, or when the interactive object sends a voice message to the virtual character, the interactive object is controlled to remain within the visual field of the virtual character until the virtual character leaves first. In this way, the virtual character does not need to move along with the interactive object during voice interaction, which makes sending and receiving voice information convenient and keeps the user's progress through the game smooth.
It is to be noted that the above-mentioned figures are only schematic illustrations of the processes involved in the method according to an exemplary embodiment of the invention, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Further, referring to fig. 5, in an embodiment of the present example, a voice interaction apparatus is further provided, which includes a first sight line detection module 101, a target setting module 102, and a volume adjusting module 103. Wherein:
the first gaze detection module 101 may be configured to detect whether a gaze center point of a virtual character falls on an interactive object that sends a voice message to the virtual character within a virtual display scene.
The target setting module 102 may be configured to, when detecting that a center point of a line of sight of the virtual character falls on an interactive object that sends a voice message to the virtual character, take the interactive object as a target of attention; and taking other interactive objects which send voice messages to the virtual character in the visual field range of the virtual character as non-attention targets.
The volume adjusting module 103 may be configured to increase the playing volume of the voice message sent by the attention target to the virtual character, and decrease the playing volume of the voice message sent by the non-attention target to the virtual character.
In the present exemplary embodiment, as shown with reference to fig. 6, the volume adjustment module described above includes: a first distance acquisition unit 1021, and a volume attenuation control unit 1022. Wherein:
the first distance acquiring unit 1021 may be configured to acquire coordinates of the attention target and the non-attention target in the virtual reality scene and calculate a distance between the non-attention target and the attention target.
The volume attenuation control unit 1022 may be configured to obtain a volume attenuation coefficient according to a distance between the non-attention target and the attention target, and reduce the playing volume of the voice message sent by the non-attention target to the virtual character according to the volume attenuation coefficient.
In other exemplary embodiments of the present disclosure, the volume attenuation coefficient is calculated according to the following formula:
K=a*S*S+b
wherein K is the volume attenuation coefficient, S is the distance between the non-attention target and the attention target, a is a constant greater than 0, and b is a constant greater than or equal to 0.
In this exemplary embodiment, the voice interaction apparatus described above may further include an information receiving module and an information playing control module. Wherein:
the information receiving module can be used for receiving voice messages sent by each interactive object to the virtual character in the visual field range of the virtual character.
The information playing control module may be configured to play the voice message sent by the interactive object to the virtual character when it is determined that the interactive object sending the voice message to the virtual character exists within the visual field of the virtual character.
In this exemplary embodiment, the voice interaction apparatus further includes a second distance acquisition module.
The second distance obtaining module may be configured to, when it is determined that the sight center point of the virtual character does not fall on an interactive object that sends a voice message to the virtual character, obtain a distance between each interactive object that sends a voice message to the virtual character within the visual field range of the virtual character and the virtual character, and accordingly adjust a play volume of the voice message sent by each interactive object to the virtual character.
In other exemplary embodiments of the present disclosure, the voice interaction apparatus further includes a second sight line detection module and a loop playing control module. Wherein:
the second sight line detection module may be configured to detect whether the sight line center point of the virtual character still falls on the attention target after the voice sent by the attention target to the virtual character is played.
The loop playing control module can be used for replaying the voice message sent by the attention target to the virtual character when the fact that the sight line center point of the virtual character still falls on the attention target is detected.
In this exemplary embodiment, the above-mentioned voice interaction apparatus further includes a position control module.
The position control module may be configured to keep the voice receiver located within the visual field of the virtual character before the voice message to be sent is recorded.
In this exemplary embodiment, the above-mentioned voice interaction apparatus further includes an identifier setting module.
The identifier setting module can be used for setting a prompt identifier for each interactive object which sends the voice message to the virtual character within the visual field range of the virtual character.
In other exemplary embodiments of the present disclosure, the voice interaction apparatus described above may further include a recording trigger module, a third sight line detection module, and a recording module.
The recording trigger module may be configured to detect whether a recording trigger operation is received, and enter a recording preparation mode when the recording trigger operation is detected to be received.
The third sight line detection module may be configured to detect whether a sight line center point of the virtual character falls on one of the interactive objects in the recording preparation mode.
The recording module may be configured to, when it is detected that the sight center point of the virtual character falls on one of the interactive objects, take the interactive object as a voice receiver and start recording a voice message to be sent.
In other exemplary embodiments of the present disclosure, the voice interaction apparatus described above may further include a fourth sight line detection module and an information sending module. Wherein:
the fourth sight line detection module may be configured to detect whether the sight line center point of the virtual character still falls on the voice receiver after the voice message to be sent is recorded.
The information sending module may be configured to send the voice message to be sent to the voice receiver when it is detected that the sight center point of the virtual character still falls on the voice receiver.
The specific details of each unit of the voice interaction apparatus and of the virtual reality system have already been described in detail in the corresponding voice interaction method, and are therefore not repeated here.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to embodiments of the present disclosure, the features and functionality of two or more modules or units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided so as to be embodied by a plurality of modules or units.
Further, the present exemplary embodiment also provides a virtual reality system, which includes at least the voice interaction apparatus of the above exemplary embodiments. Because the voice interaction apparatus can adjust, according to the position of the sight center point of the virtual character, the playing volume of the voice information sent to the virtual character by each interactive object within the visual field range of the virtual character in the virtual reality environment, and simplifies the control operations of receiving, playing, recording, and sending voice messages, the virtual reality system of this embodiment lets the user operate more simply in a virtual reality game, simulates a real scene more faithfully, deepens immersion, and improves the user experience.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
Claims (21)
1. A method of voice interaction, comprising:
detecting whether a sight center point of a virtual character in a virtual reality scene falls on an interactive object which sends a voice message to the virtual character;
when the sight center point of the virtual character is judged to be located on an interactive object which sends a voice message to the virtual character, the interactive object is used as an attention target, and other interactive objects which send the voice message to the virtual character in the visual field range of the virtual character are used as non-attention targets;
increasing the playing volume of the voice message sent by the attention target to the virtual character, and reducing the playing volume of the voice message sent by the non-attention target to the virtual character;
and responding to the trigger operation to enter a recording preparation mode so as to determine a voice receiver according to the sight center point of the virtual character and record a voice message to be sent.
2. The voice interaction method of claim 1, further comprising:
judging whether an interactive object for sending a voice message to the virtual character exists in the visual field range of the virtual character;
and when judging that an interactive object for sending the voice message to the virtual character exists in the visual field range of the virtual character, automatically playing the voice message sent to the virtual character by the interactive object.
3. The voice interaction method according to claim 1 or 2, wherein reducing the playback volume of the voice message sent by the non-attention target to the virtual character comprises:
acquiring coordinates of the attention target and a non-attention target in the virtual reality scene and calculating the distance between the non-attention target and the attention target according to the coordinates;
and obtaining a volume attenuation coefficient according to the distance between the non-attention target and the attention target, and reducing the playing volume of the voice message sent to the virtual character by the non-attention target according to the volume attenuation coefficient.
4. The voice interaction method of claim 3, wherein the volume attenuation factor is calculated according to the following formula:
K=a*S*S+b
wherein K is the volume attenuation coefficient, S is the distance between the non-attention target and the attention target, a is a constant greater than 0, and b is a constant greater than or equal to 0.
5. The voice interaction method of claim 1, further comprising:
and when the sight center point of the virtual character is judged not to fall on the interactive object for sending the voice message to the virtual character, adjusting the playing volume of the voice message sent to the virtual character by each interactive object according to the distance between each interactive object for sending the voice message to the virtual character in the visual field range of the virtual character and the virtual character.
6. The voice interaction method of claim 1, further comprising:
after the voice sent to the virtual character by the attention target is played, detecting whether the sight center point of the virtual character still falls on the attention target;
and when the fact that the sight center point of the virtual character still falls on the attention target is detected, the voice message sent to the virtual character by the attention target is replayed.
7. The voice interaction method of claim 1, further comprising:
and setting a prompt identifier for each interactive object sending the voice message to the virtual character within the visual field range of the virtual character.
8. The voice interaction method of claim 1, wherein entering a recording preparation mode in response to the trigger operation to determine a voice receiver according to the center point of line of sight of the virtual character and record a voice message to be sent comprises:
detecting whether a recording trigger operation is received or not, and entering a recording preparation mode when the recording trigger operation is detected to be received;
detecting whether a sight center point of the virtual character falls on the interactive object or not in the recording preparation mode;
and when the sight center point of the virtual character is detected to fall on one interactive object, taking the interactive object as a voice receiver and starting to record a voice message to be sent.
9. The voice interaction method of claim 8, further comprising:
after the voice message to be sent is recorded, detecting whether the sight center point of the virtual character still falls on the voice receiver;
and when the fact that the sight center point of the virtual character still falls on the voice receiver is detected, the voice message to be sent is sent to the voice receiver.
10. The voice interaction method according to claim 8 or 9, further comprising:
and before the voice message to be sent is recorded, keeping the voice receiver located within the visual field range of the virtual character.
11. A voice interaction apparatus, comprising:
a first sight line detection module, configured to detect whether a sight line center point of a virtual character falls on an interactive object that sends a voice message to the virtual character within a virtual reality scene;
the target setting module is used for taking an interactive object as a concerned target when detecting that the sight center point of the virtual character falls on the interactive object which sends the voice message to the virtual character; taking other interactive objects which send voice messages to the virtual character within the visual field range of the virtual character as non-attention targets;
the volume adjusting module is used for increasing the playing volume of the voice message sent by the concerned target to the virtual role and reducing the playing volume of the voice message sent by the non-concerned target to the virtual role;
and the recording preparation module is used for responding to the triggering operation to enter a recording preparation mode so as to determine a voice receiver according to the sight center point of the virtual role and record the voice message to be sent.
12. The voice interaction device of claim 11, further comprising:
the information receiving module is used for receiving voice messages sent to the virtual character by each interactive object in the visual field range of the virtual character;
and the information playing control module is used for playing the voice message sent to the virtual character by the interactive object when judging that the interactive object for sending the voice message to the virtual character exists in the visual field range of the virtual character.
13. The voice interaction device of claim 11 or 12, wherein the volume adjustment module comprises:
a first distance acquisition unit, configured to acquire coordinates of the attention target and a non-attention target in the virtual reality scene and calculate a distance between the non-attention target and the attention target according to the coordinates;
and the volume attenuation control unit is used for obtaining a volume attenuation coefficient according to the distance between the non-attention target and the attention target, and reducing the playing volume of the voice message sent to the virtual character by the non-attention target according to the volume attenuation coefficient.
14. The voice interaction device of claim 13, wherein the volume attenuation factor is calculated according to the following formula:
K=a*S*S+b
and K is the volume attenuation coefficient, S is the distance between the non-concerned target and the concerned target, a is a constant and a is more than 0, and b is a constant and b is more than or equal to 0.
15. The voice interaction device of claim 11, further comprising:
and the second distance acquisition module is used for acquiring the distance between each interactive object for sending the voice message to the virtual character within the visual field range of the virtual character and the virtual character when the sight center point of the virtual character is judged not to fall on the interactive object for sending the voice message to the virtual character, so as to adjust the playing volume of the voice message sent to the virtual character by each interactive object.
16. The voice interaction device of claim 11, further comprising:
the second sight line detection module is used for detecting whether the sight line central point of the virtual character still falls on the attention target or not after the voice sent by the attention target to the virtual character is played;
and the circulating playing control module is used for replaying the voice message sent to the virtual role by the attention target when the fact that the sight center point of the virtual role still falls on the attention target is detected.
17. The voice interaction device of claim 11, further comprising:
and the identification setting module is used for setting a prompt identification for each interactive object which sends the voice message to the virtual character within the visual field range of the virtual character.
18. The voice interaction device of claim 11, wherein the recording preparation module further comprises:
the recording trigger module is used for detecting whether a recording trigger operation is received or not and entering a recording preparation mode when the recording trigger operation is detected to be received;
the third sight line detection module is used for detecting whether a sight line central point of the virtual character falls on the interactive object or not in the recording preparation mode;
and the recording module is used for taking the interactive object as a voice receiver and starting recording the voice message to be sent when the fact that the sight center point of the virtual character falls on the interactive object is detected.
19. The voice interaction device of claim 18, further comprising:
the fourth sight line detection module is used for detecting whether the sight line central point of the virtual role still falls on the voice receiver after the voice message to be sent is recorded;
and the information sending module is used for sending the voice message to be sent to the voice receiver when the fact that the sight center point of the virtual role still falls on the voice receiver is detected.
20. The voice interaction device of claim 18 or 19, further comprising:
and the position control module is used for keeping the voice receiver positioned in the visual field range of the virtual role before the voice message to be sent is recorded.
21. A virtual reality system, comprising a voice interaction device according to any one of claims 11 to 20.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611027164.9A CN106774830B (en) | 2016-11-16 | 2016-11-16 | Virtual reality system, voice interaction method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106774830A CN106774830A (en) | 2017-05-31 |
CN106774830B true CN106774830B (en) | 2020-04-14 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101669090A (en) * | 2007-04-26 | 2010-03-10 | Ford Global Technologies, LLC | Emotive advisory system and method |
CN102209225A (en) * | 2010-03-30 | 2011-10-05 | Huawei Device Co., Ltd. | Method and device for realizing video communication |
CN103995685A (en) * | 2013-02-15 | 2014-08-20 | Seiko Epson Corporation | Information processing device and control method for information processing device |
CN105325014A (en) * | 2013-05-02 | 2016-02-10 | Microsoft Technology Licensing, LLC | Sound field adaptation based upon user tracking |
CN105487657A (en) * | 2015-11-24 | 2016-04-13 | Xiaomi Inc. | Sound loudness determination method and apparatus |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106774830B (en) | Virtual reality system, voice interaction method and device | |
US10642569B2 (en) | Methods and devices for identifying object in virtual reality communication, and virtual reality equipment | |
CN113082712B (en) | Virtual character control method, device, computer equipment and storage medium | |
US20180300037A1 (en) | Information processing device, information processing method, and program | |
CN111045511B (en) | Gesture-based control method and terminal equipment | |
US9776088B2 (en) | Apparatus and method of user interaction | |
KR20100021387A (en) | Apparatus and method to perform processing a sound in a virtual reality system | |
JP7247260B2 (en) | DATA PROCESSING PROGRAM, DATA PROCESSING METHOD, AND DATA PROCESSING APPARATUS | |
US20170235462A1 (en) | Interaction control method and electronic device for virtual reality | |
CN113398590B (en) | Sound processing method, device, computer equipment and storage medium | |
US11270087B2 (en) | Object scanning method based on mobile terminal and mobile terminal | |
CN109529340B (en) | Virtual object control method and device, electronic equipment and storage medium | |
CN114449162B (en) | Method, device, computer equipment and storage medium for playing panoramic video | |
CN108462729A (en) | Realize method and apparatus, terminal device and the server of terminal device interaction | |
CN112774185A (en) | Virtual card control method, device and equipment in card virtual scene | |
CN115225926B (en) | Game live broadcast picture processing method, device, computer equipment and storage medium | |
US20240211128A1 (en) | Game interface interaction method, system, and computer readable storage medium | |
CN115888094A (en) | Game control method, device, terminal equipment and storage medium | |
JP2023547721A (en) | Screen display methods, devices, equipment, and programs in virtual scenes | |
KR102205901B1 (en) | Method for providing augmented reality, and the computing device | |
WO2020243953A1 (en) | Control method for remote control movable platform, device and computer-readable storage medium | |
CN109542618A (en) | Control method of electronic device and device | |
EP4270155A1 (en) | Virtual content | |
US20240177435A1 (en) | Virtual interaction methods, devices, and storage media | |
CN114968440A (en) | Instant messaging message processing method and device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |