CN111475022A - Method for processing interactive voice data in a multi-person VR scene

Method for processing interactive voice data in a multi-person VR scene

Info

Publication number
CN111475022A
Authority
CN
China
Prior art keywords
voice
terminal
player
online
scene
Prior art date
Legal status
Pending
Application number
CN202010260564.4A
Other languages
Chinese (zh)
Inventor
邢维振
尹桑
Current Assignee
Shanghai Weier Network Technology Co ltd
Original Assignee
Shanghai Weier Network Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Weier Network Technology Co., Ltd.
Priority to CN202010260564.4A
Priority to PCT/CN2020/088827 (WO2021196337A1)
Publication of CN111475022A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 17/00 Speaker identification or verification techniques
    • G10L 17/22 Interactive procedures; Man-machine interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Information Transfer Between Computers (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to the technical field of virtual reality and discloses a method for processing interactive voice data in a multi-person VR scene. The method comprises: receiving voice data from a voice sending terminal; searching for voice receiving terminals located around the voice sending terminal according to the current player positions of all online terminals in the multi-player VR scene, wherein the online terminals include the voice sending terminal; and forwarding the voice data to the found voice receiving terminals. In this way, which players receive the voice is determined from the spatial position relationship among players in the multi-player VR scene, so that only the online terminals around the voice sending terminal receive the voice data it sends. While VR realism is preserved, the voice data received by other, non-surrounding online terminals is effectively reduced, the problem that voice content cannot be distinguished because multiple parties are mixed at the same time is avoided, and the player's multi-player VR scene experience is greatly improved.

Description

Method for processing interactive voice data in multi-person VR scene
Technical Field
The invention belongs to the technical field of virtual reality, and particularly relates to a method for processing interactive voice data in a multi-person VR scene.
Background
With the rapid development of communication and computer technology, VR (Virtual Reality) technology has taken off. Virtual reality is a computer simulation technique for creating and experiencing a virtual world: a computer generates a simulated environment and fuses multi-source information into an interactive, three-dimensional dynamic view with simulated entity behaviour, immersing the user in that environment. At present, VR technology is widely applied in scenes such as film, virtual reality games and painting, and the most convenient way to realize it is to pair a smartphone with VR glasses and a headset to obtain a virtual audio-visual effect. This mode of experience has a cost advantage and provides good personal immersion, but it is limited to a single-person VR experience and cannot deliver the corresponding experience in scenes such as multi-player VR games or multi-player VR conferences.
In existing multi-person VR scenes, equipment limitations mean that players cannot communicate by typing on a keyboard as conveniently as on a computer, so real-time voice is the mainstream means of communication. However, when real-time voice is used in a multi-person VR scene, every terminal receives the voice of all players at once and plays the mixed audio, so the voice is unclear and hard to make out.
Disclosure of Invention
To solve the problem in existing multi-person VR scenes that voice is unclear and hard to recognize because the voice of many players is received and mixed simultaneously, the invention aims to provide a method and an apparatus for processing interactive voice data in a multi-person VR scene, as well as a computer device, a terminal device and a computer storage medium.
In a first aspect, the present invention provides a method for processing interactive voice data in a multi-person VR scenario, which is suitable for being executed on a server side, and includes:
receiving voice data from a voice sending terminal;
searching voice receiving terminals located around the voice sending terminal according to the corresponding current player positions of all online terminals in a multi-player VR scene, wherein all online terminals comprise the voice sending terminal;
and forwarding the voice data to the found voice receiving terminal.
With the invention, which players receive the voice is determined by the spatial position relationship among players in the multi-player VR scene, so that only the online terminals located around the voice sending terminal receive the voice data it sends. While VR realism is preserved, the voice data received by other, non-surrounding online terminals is effectively reduced, the problem that voice content cannot be distinguished because multiple parties are mixed at the same time is avoided, and the player's multi-player VR scene experience is greatly improved. In addition, the amount of voice data transmitted is greatly reduced, which relieves network pressure, lowers the probability of voice data loss during transmission, and avoids the choppy, delayed and poor-quality voice caused by excessive data loss.
In one possible design, for an online terminal other than the voice sending terminal, whether it is located around the voice sending terminal is determined as follows: judging, according to the current player position of the online terminal, whether the online terminal is located in a first sector area centered on the current player position of the voice sending terminal with a first distance as radius; if so, the online terminal is judged to be a first-type voice receiving terminal located around the voice sending terminal, wherein the sector angle of the first sector area is the current player view angle of the voice sending terminal in the multi-player VR scene. With this design, if the online terminal is located in the first sector area, the corresponding online player is in front of the voice-emitting player in the multi-player VR scene; using that online terminal as a voice receiving terminal for the voice data closely matches the real scene and improves VR realism.
In one possible design, if the online terminal is not located in the first sector area, the method further includes: judging, according to the current player position of the online terminal, whether the online terminal is located in a second sector area centered on the current player position of the voice sending terminal with a second distance as radius; if so, the online terminal is judged to be a second-type voice receiving terminal located around the voice sending terminal, wherein the second distance is smaller than the first distance and the sector angle of the second sector area is the current non-player view angle of the voice sending terminal in the multi-player VR scene. With this design, if the online terminal is located in the second sector area, the corresponding online player is behind the voice-emitting player in the multi-player VR scene; using that online terminal as a voice receiving terminal for the voice data also closely matches the real scene and further improves VR realism.
In one possible design, for an online terminal other than the voice sending terminal, whether it is located around the voice sending terminal is determined as follows: judging, according to the current player position of the voice sending terminal, whether the voice sending terminal is located in a third sector area centered on the current player position of the online terminal with a third distance as radius; if so, the online terminal is judged to be a third-type voice receiving terminal located around the voice sending terminal, wherein the sector angle of the third sector area is the current player view angle of the online terminal in the multi-player VR scene. With this design, the online terminal can select the target it wants to listen to, or highlight the voice data of a target it is interested in, according to the receiver's view angle, which improves the listening freedom of the online player.
In one possible design, if the voice sending terminal is not located in the third sector area, the method further includes: judging, according to the current player position of the voice sending terminal, whether the voice sending terminal is located in a fourth sector area centered on the current player position of the online terminal with a fourth distance as radius; if so, the online terminal is judged to be a fourth-type voice receiving terminal located around the voice sending terminal, wherein the fourth distance is smaller than the third distance and the sector angle of the fourth sector area is the current non-player view angle of the online terminal in the multi-player VR scene.
In one possible design, for a found second-type or fourth-type voice receiving terminal, attenuation processing is performed on the voice data before the voice data is forwarded. This design further improves the realism of the virtual reality VR.
In a second aspect, the present invention provides a device for processing interactive voice data in a multi-person VR scene, comprising a voice data receiving unit, a receiving terminal searching unit and a voice data forwarding unit which are communicatively connected in sequence;
the voice data receiving unit is used for receiving voice data from a voice sending terminal;
the receiving terminal searching unit is used for searching voice receiving terminals located around the voice sending terminal according to the corresponding current player positions of all online terminals in a multi-player VR scene, wherein all online terminals comprise the voice sending terminal;
and the voice data forwarding unit is used for forwarding the voice data to the found voice receiving terminal.
In a third aspect, the present invention provides a computer device, comprising a memory, a processor and a transceiver, which are communicatively connected in sequence, wherein the memory is used for storing a computer program, the transceiver is used for transceiving data, and the processor is used for reading the computer program and executing the method as described in the first aspect or any one of the possible designs of the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium having stored thereon instructions which, when executed on a computer, perform the method as set forth in the first aspect or any one of the possible designs of the first aspect.
In a fifth aspect, the present invention provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method as described above in the first aspect or any one of the possible designs of the first aspect.
In a sixth aspect, the present invention provides a method for processing interactive voice data in a multi-person VR scenario, which is suitable for being executed on a terminal side, and includes:
receiving voice data forwarded by the server and coming from the voice sending terminal;
and judging, according to the current player positions of the local online terminal and the voice sending terminal in the multi-player VR scene, whether the local online terminal is a voice receiving terminal located around the voice sending terminal; if so, outputting and presenting the voice data, and otherwise not presenting the voice data.
With the invention, which players receive the voice is determined by the spatial position relationship among players in the multi-player VR scene, so that only the online terminals located around the voice sending terminal present the voice data it sends. While VR realism is preserved, unnecessary presentation of voice data by the local online terminal is effectively reduced, the problem that voice content cannot be distinguished because multiple parties are mixed at the same time is avoided, and the player's multi-player VR scene experience is greatly improved.
In one possible design, whether the local online terminal is located around the voice sending terminal is determined as follows: judging, according to the current player position of the local online terminal, whether the local online terminal is located in a first sector area centered on the current player position of the voice sending terminal with a first distance as radius; if so, the local online terminal is judged to be a first-type voice receiving terminal located around the voice sending terminal, wherein the sector angle of the first sector area is the current player view angle of the voice sending terminal in the multi-player VR scene. With this design, if the local online terminal is located in the first sector area, the local online player is in front of the voice-emitting player in the multi-player VR scene; using the local online terminal as a voice receiving terminal for the voice data closely matches the real scene and improves VR realism.
In one possible design, if the local online terminal is not located in the first sector area, the method further includes: judging, according to the current player position of the local online terminal, whether the local online terminal is located in a second sector area centered on the current player position of the voice sending terminal with a second distance as radius; if so, the local online terminal is judged to be a second-type voice receiving terminal located around the voice sending terminal, wherein the second distance is smaller than the first distance and the sector angle of the second sector area is the current non-player view angle of the voice sending terminal in the multi-player VR scene. With this design, if the local online terminal is located in the second sector area, the local online player is behind the voice-emitting player in the multi-player VR scene; using the local online terminal as a voice receiving terminal for the voice data also closely matches the real scene and further improves VR realism.
In one possible design, whether the local online terminal is located around the voice sending terminal is determined as follows: judging, according to the current player position of the voice sending terminal, whether the voice sending terminal is located in a third sector area centered on the current player position of the local online terminal with a third distance as radius; if so, the local online terminal is judged to be a third-type voice receiving terminal located around the voice sending terminal, wherein the sector angle of the third sector area is the current player view angle of the local online terminal in the multi-player VR scene. With this design, the local online terminal can select the target it wants to listen to, or highlight the voice data of a target it is interested in, according to the receiver's view angle, which improves the listening freedom of the local online player.
In one possible design, if the voice sending terminal is not located in the third sector area, the method further includes: judging, according to the current player position of the voice sending terminal, whether the voice sending terminal is located in a fourth sector area centered on the current player position of the local online terminal with a fourth distance as radius; if so, the local online terminal is judged to be a fourth-type voice receiving terminal located around the voice sending terminal, wherein the fourth distance is smaller than the third distance and the sector angle of the fourth sector area is the current non-player view angle of the local online terminal in the multi-player VR scene.
In a seventh aspect, the present invention provides an apparatus for processing interactive voice data in a multi-user VR scenario, including a receiving unit and a presenting unit, which are communicatively connected;
the receiving unit is used for receiving the voice data forwarded by the server and coming from the voice sending terminal;
and the presenting unit is used for judging, according to the current player positions of the local online terminal and the voice sending terminal in the multi-player VR scene, whether the local online terminal is a voice receiving terminal located around the voice sending terminal, outputting and presenting the voice data if it is, and not presenting the voice data if it is not.
In an eighth aspect, the present invention provides a terminal device, comprising a memory, a processor and a transceiver, which are communicatively connected in sequence, wherein the memory is used for storing a computer program, the transceiver is used for transceiving data, and the processor is used for reading the computer program and executing the method as designed in any one of the sixth aspect or the sixth aspect.
In a ninth aspect, the present invention provides a computer readable storage medium having stored thereon instructions which, when run on a computer, perform the method as set forth in any one of the possible designs of the sixth aspect or the sixth aspect above.
In a tenth aspect, the present invention provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method as described in any one of the possible designs of the sixth aspect or the sixth aspect above.
Drawings
To more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of a first method for processing interactive voice data in a multi-person VR scene according to the present invention.
FIG. 2 is an exemplary diagram of a voice-emitting player and other online players in a multi-player VR scenario as provided by the present invention.
Fig. 3 is a schematic structural diagram of a first apparatus for processing interactive voice data in a multi-person VR scene according to the present invention.
Fig. 4 is a schematic structural diagram of a computer device provided by the present invention.
Fig. 5 is a flowchart illustrating a second method for processing interactive voice data in a multi-person VR scene according to the present invention.
Fig. 6 is a schematic structural diagram of a second apparatus for processing interactive voice data in a multi-person VR scene according to the present invention.
Fig. 7 is a schematic structural diagram of a terminal device provided by the present invention.
Detailed Description
The invention is further described with reference to the following figures and specific embodiments. It should be noted that the description of the embodiments is provided to help understanding of the present invention, but the present invention is not limited thereto. Specific structural and functional details disclosed herein are merely illustrative of example embodiments of the invention. This invention may, however, be embodied in many alternate forms and should not be construed as limited to the embodiments set forth herein.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments of the present invention.
It should be understood that the term "and/or" as it may appear herein merely describes an association relationship between associated objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, B exists alone, or A and B exist at the same time. The term "/and" as it may appear herein describes another association relationship, meaning that two relationships may exist; for example, A /and B may mean: A exists alone, or A and B exist together. In addition, the character "/" herein generally means that the associated objects before and after it are in an "or" relationship.
It will be understood that when an element is referred to herein as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Conversely, if an element is referred to herein as being "directly connected" or "directly coupled" to another element, no intervening elements are present. Other words used to describe the relationship between elements should be interpreted in a similar manner (e.g., "between" versus "directly between", "adjacent" versus "directly adjacent", etc.).
It is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, and/or groups thereof.
It should also be noted that, in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed substantially concurrently, or the figures may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
It should be understood that specific details are provided in the following description to facilitate a thorough understanding of example embodiments. However, it will be understood by those of ordinary skill in the art that the example embodiments may be practiced without these specific details. For example, systems may be shown in block diagrams in order not to obscure the examples in unnecessary detail. In other instances, well-known processes, structures and techniques may be shown without unnecessary detail in order to avoid obscuring example embodiments.
Example one
As shown in fig. 1, the method for processing interactive voice data in a multi-person VR scene provided in this embodiment is suitable for being executed on a server side, and may include, but is not limited to, the following steps S101 to S103.
S101, receiving voice data from a voice sending terminal.
In step S101, the voice sending terminal is an electronic device held by the voice-emitting player for participating in the multi-player VR scene experience, and may include, but is not limited to, a smartphone or an all-in-one VR headset. The voice data is obtained by the voice sending terminal collecting the real-time voice of the voice-emitting player (for example, through a built-in sound pickup) and is then transmitted over the Internet to the server that maintains the multi-player VR scene experience.
S102, searching voice receiving terminals located around the voice sending terminal according to the corresponding current player positions of all online terminals in a multi-player VR scene, wherein all online terminals comprise the voice sending terminal.
In step S102, an online terminal is an electronic device held by a player for participating in the multi-player VR scene experience; like the voice sending terminal, it may include, but is not limited to, a smartphone or an all-in-one VR headset. Because the server is an existing VR server that maintains the multi-player VR scene experience, it holds the current player positions of all online terminals in that scene. From these current player positions, the relative distances between the voice-emitting player and the other online players in the multi-player VR scene can be determined. Since in a real scene a voice can only be heard by the other players around the speaker, the online terminals of those nearby players can then be identified from the relative distances; that is, the voice receiving terminals around the voice sending terminal are found, and only the found voice receiving terminals will receive the voice data.
And S103, forwarding the voice data to the found voice receiving terminal.
In step S103, the forwarding is again carried out over the Internet. In addition, if no voice receiving terminal located around the voice sending terminal is found, no other online player is around the voice-emitting player in the multi-player VR scene; the online terminals of those players do not need to obtain and present the voice data, so forwarding of the voice data is terminated.
Therefore, based on the server-side processing of steps S101 to S103, which players receive the voice is determined from the spatial position relationship among players in the multi-player VR scene, so that only the online terminals located around the voice sending terminal receive the voice data it sends. While VR realism is preserved, the voice data received by other, non-surrounding online terminals is effectively reduced, the problem that voice content cannot be distinguished because multiple parties are mixed at the same time is avoided, and the player's multi-player VR scene experience is greatly improved. In addition, the amount of voice data transmitted is greatly reduced, which relieves network pressure, lowers the probability of voice data loss during transmission, and avoids the choppy, delayed and poor-quality voice caused by excessive data loss.
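As a minimal illustration of steps S101 to S103, the following Python sketch shows one possible server-side flow. All names (VoiceServer, find_receivers, the 2-D position format) are assumptions made for illustration and are not prescribed by the patent; the sector tests of the later embodiments would plug into the is_around check.

```python
# Minimal sketch of the server-side flow in steps S101-S103 (illustrative only).
from dataclasses import dataclass

@dataclass
class OnlineTerminal:
    terminal_id: str
    position: tuple      # current player position (x, y) in the VR scene
    view_angle: float    # current player view direction, in degrees

class VoiceServer:
    def __init__(self):
        self.online_terminals = {}   # terminal_id -> OnlineTerminal

    def on_voice_data(self, sender_id: str, voice_data: bytes):
        """S101: receive voice data from the voice sending terminal."""
        sender = self.online_terminals[sender_id]
        # S102: search for voice receiving terminals located around the sender.
        receivers = self.find_receivers(sender)
        # S103: forward only to the found receivers; if none, forwarding ends here.
        for terminal in receivers:
            self.forward(terminal, voice_data)

    def find_receivers(self, sender: OnlineTerminal):
        """Return online terminals whose players count as 'around' the sender."""
        return [t for t in self.online_terminals.values()
                if t.terminal_id != sender.terminal_id
                and self.is_around(sender, t)]

    def is_around(self, sender, other) -> bool:
        raise NotImplementedError  # see the sector tests sketched below

    def forward(self, terminal, voice_data: bytes):
        pass  # transmit the voice data to the terminal over the Internet
```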
Example two
In this embodiment, on the basis of the first embodiment, a specific way of finding the voice receiving terminals located around the voice sending terminal is provided. That is, in step S102, for an online terminal other than the voice sending terminal, whether it is located around the voice sending terminal is determined as follows: judging, according to the current player position of the online terminal, whether the online terminal is located in a first sector area centered on the current player position of the voice sending terminal with a first distance as radius; if so, the online terminal is judged to be a first-type voice receiving terminal located around the voice sending terminal, wherein the sector angle of the first sector area is the current player view angle of the voice sending terminal in the multi-player VR scene.
In the above manner, the current player view angle of the voice sending terminal is the current view angle of the voice-emitting player in the multi-player VR scene, such as the angle α in Fig. 2; its parameters are transmitted to the server in real time by the voice sending terminal during the multi-player VR scene experience.
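A minimal geometric sketch of the first-sector test is given below. The 2-D coordinates, angle conventions and helper names are assumptions for illustration, not values or interfaces given by the patent.

```python
import math

def in_sector(center, facing_deg, sector_angle_deg, radius, point) -> bool:
    """True if `point` lies in the sector centered at `center`, opening
    `sector_angle_deg` degrees around the facing direction, within `radius`.
    Positions are assumed to be 2-D (x, y) tuples."""
    dx, dy = point[0] - center[0], point[1] - center[1]
    if math.hypot(dx, dy) > radius:
        return False
    bearing = math.degrees(math.atan2(dy, dx)) % 360.0
    # smallest angular difference between the bearing and the facing direction
    diff = abs((bearing - facing_deg + 180.0) % 360.0 - 180.0)
    return diff <= sector_angle_deg / 2.0

def is_first_type_receiver(sender_pos, sender_view_deg, sender_view_angle_deg,
                           first_distance, online_pos) -> bool:
    """First-sector test of embodiment two: the sector is centered on the voice
    sending terminal's current player position, its radius is the first distance,
    and its sector angle is the sender's current player view angle."""
    return in_sector(sender_pos, sender_view_deg, sender_view_angle_deg,
                     first_distance, online_pos)
```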
Considering that in a real scene other online players located behind the voice-emitting player can also hear that player, and to further improve VR realism, if the online terminal is not located in the first sector area the method further includes: judging, according to the current player position of the online terminal, whether the online terminal is located in a second sector area centered on the current player position of the voice sending terminal with a second distance as radius; if so, the online terminal is judged to be a second-type voice receiving terminal located around the voice sending terminal, wherein the second distance is smaller than the first distance and the sector angle of the second sector area is the current non-player view angle of the voice sending terminal in the multi-player VR scene. The current non-player view angle of the voice sending terminal is the view angle opposite to the current view angle of the voice-emitting player in the multi-player VR scene, such as the angle β in Fig. 2, and its parameters can be derived from the current player view angle of the voice sending terminal. Because a person's voice propagates mainly forwards, the propagation distance towards the rear is clearly shorter than towards the front; the second distance therefore needs to be set smaller than the first distance in order to preserve the realism of the virtual reality VR. Thus, if the online terminal is located in the second sector area, the corresponding online player is behind the voice-emitting player in the multi-player VR scene, and using that online terminal as a voice receiving terminal for the voice data also closely matches the real scene and further improves VR realism.
Considering that in a real scene the propagation path of sound behind the voice-emitting player involves complex effects such as diffraction, reflection and diffusion, which cause noticeable signal fading, added noise and similar phenomena, the method further includes, to further improve the realism of the virtual reality VR: for a found second-type voice receiving terminal, performing attenuation processing on the voice data before forwarding it. The attenuation processing can be done in an existing way, which may include, but is not limited to, reducing the volume and/or inserting noise.
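A minimal sketch of one possible attenuation step follows, assuming 16-bit PCM samples; the gain value and noise amplitude are illustrative choices, not values specified by the patent.

```python
import random

def attenuate_pcm16(samples, gain=0.4, noise_amplitude=50):
    """Illustrative attenuation for second/fourth-type receivers: scale the
    volume down and add a small amount of noise. `samples` is a list of
    signed 16-bit PCM sample values."""
    out = []
    for s in samples:
        s = int(s * gain)                                        # reduce volume
        s += random.randint(-noise_amplitude, noise_amplitude)   # insert noise
        out.append(max(-32768, min(32767, s)))                   # clamp to int16 range
    return out
```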
For example, as shown in Fig. 2, a multi-player VR scene contains a voice-emitting player A corresponding to the voice sending terminal and online players B, C, D, E, F and G corresponding to the other online terminals. The current player view angle α of the voice-emitting player A is 36 degrees and the current non-player view angle β is 324 degrees (i.e. 360 - 36 degrees). The only online player in the first sector area is player B, so the online terminal of player B is a first-type voice receiving terminal and receives and presents the complete voice data; the only online player in the second sector area is player C, so the online terminal of player C is a second-type voice receiving terminal and receives and presents the attenuated voice data. The other online players D, E, F and G are in neither the first nor the second sector area because they are too far away in the multi-player VR scene, so their online terminals do not receive the voice data. This reduces the amount of voice data transmitted, relieves network pressure, and avoids the problem that voice content cannot be distinguished because multiple parties are mixed at the same time.
EXAMPLE III
In this embodiment, on the basis of the first embodiment, another way of finding the voice receiving terminals located around the voice sending terminal, different from the second embodiment, is provided. That is, in step S102, for an online terminal other than the voice sending terminal, whether it is located around the voice sending terminal is determined as follows: judging, according to the current player position of the voice sending terminal, whether the voice sending terminal is located in a third sector area centered on the current player position of the online terminal with a third distance as radius; if so, the online terminal is judged to be a third-type voice receiving terminal located around the voice sending terminal, wherein the sector angle of the third sector area is the current player view angle of the online terminal in the multi-player VR scene.
In the above manner, the current player view angle of the online terminal is the current view angle of the online player in the multi-player VR scene, and its parameters are likewise transmitted to the server in real time by the online terminal during the multi-player VR scene experience. Although deciding whether the online terminal is around the voice sending terminal on the basis of the online terminal's own player view angle does not fully match the way sound propagates, so the VR realism is lower than in the second embodiment, it allows the online terminal to select the target it wants to listen to, or to highlight the voice data of a target it is interested in, according to the receiver's view angle, which improves the listening freedom of the online player.
Similarly to the second embodiment, if the voice sending terminal is not located in the third sector area, the method further includes: judging, according to the current player position of the voice sending terminal, whether the voice sending terminal is located in a fourth sector area centered on the current player position of the online terminal with a fourth distance as radius; if so, the online terminal is judged to be a fourth-type voice receiving terminal located around the voice sending terminal, wherein the fourth distance is smaller than the third distance and the sector angle of the fourth sector area is the current non-player view angle of the online terminal in the multi-player VR scene. The current non-player view angle of the online terminal is the view angle opposite to the current view angle of the online player in the multi-player VR scene, and its parameters can likewise be derived from the online terminal's current player view angle. In addition, for a found fourth-type voice receiving terminal, attenuation processing may also be performed on the voice data before it is forwarded.
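Reusing the hypothetical in_sector helper from the earlier sketch, the receiver-perspective tests of this embodiment can be illustrated as follows; the names, the 2-D position format and the way the non-player view angle is derived (opposite direction, remaining angular span) are assumptions for illustration.

```python
def classify_by_receiver_view(online_pos, online_view_deg, online_view_angle_deg,
                              third_distance, fourth_distance, sender_pos):
    """Embodiment three: the sectors are centered on the online terminal's player
    position, and it is the voice sending terminal that must fall inside them.
    Returns 'third', 'fourth' (attenuated) or None (not a receiver)."""
    if in_sector(online_pos, online_view_deg, online_view_angle_deg,
                 third_distance, sender_pos):
        return "third"
    # the non-player view angle faces the opposite direction
    behind_deg = (online_view_deg + 180.0) % 360.0
    behind_angle = 360.0 - online_view_angle_deg
    if in_sector(online_pos, behind_deg, behind_angle,
                 fourth_distance, sender_pos):
        return "fourth"
    return None
```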
Example four
As shown in fig. 3, the present embodiment provides a hardware device for implementing the method for processing interactive voice data in a multi-person VR scene according to any one of the first to third embodiments, including a voice data receiving unit, a receiving terminal searching unit, and a voice data forwarding unit, which are sequentially connected by communication; the voice data receiving unit is used for receiving voice data from a voice sending terminal; the receiving terminal searching unit is used for searching voice receiving terminals located around the voice sending terminal according to the corresponding current player positions of all online terminals in a multi-player VR scene, wherein all online terminals comprise the voice sending terminal; and the voice data forwarding unit is used for forwarding the voice data to the found voice receiving terminal.
The working process, working details and technical effects of the foregoing apparatus provided in this embodiment may refer to the method described in any one of the first to third embodiments, and are not described herein again.
EXAMPLE five
As shown in Fig. 4, this embodiment provides a computer device for executing the method for processing interactive voice data in a multi-person VR scene according to any one of the first to third embodiments. The computer device comprises a memory, a processor and a transceiver which are communicatively connected in sequence, wherein the memory is used for storing a computer program, the transceiver is used for transceiving data, and the processor is used for reading the computer program and executing the method for processing interactive voice data in a multi-person VR scene according to any one of the first to third embodiments. For example, the memory may include, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a flash memory, a first-in first-out memory (FIFO) and/or a first-in last-out memory (FILO); the transceiver may include, but is not limited to, a WiFi (wireless fidelity) transceiver, a Bluetooth transceiver, a GPRS (General Packet Radio Service) transceiver and/or a ZigBee (IEEE 802.15.4-based low-power local area network protocol) transceiver; and the computer device may further include other functional modules as required.
For the working process, the working details, and the technical effects of the computer device provided in this embodiment, reference may be made to the method described in any one of the first to third embodiments, which is not described herein again.
EXAMPLE six
This embodiment provides a computer-readable storage medium storing instructions for the method for processing interactive voice data in a multi-person VR scene according to any one of the first to third embodiments; when the instructions are run on a computer, the method according to any one of the first to third embodiments is performed. The computer-readable storage medium is a carrier for storing data and may include, but is not limited to, a floppy disk, an optical disc, a hard disk, a flash memory, a USB flash drive and/or a memory stick, and the computer may be a general-purpose computer, a special-purpose computer, a computer network or another programmable device.
For the working process, the working details, and the technical effects of the computer-readable storage medium provided in this embodiment, reference may be made to the method described in any one of the first to third embodiments, which is not described herein again.
EXAMPLE seven
The present embodiment provides a computer program product comprising instructions that, when executed on a computer, cause the computer to perform the method for processing interactive voice data in a multi-person VR scenario as described in any one of the first to third embodiments. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable devices.
Example eight
As shown in fig. 5, the method for processing interactive voice data in a multi-person VR scene provided in this embodiment is suitable for being executed on a terminal side, and may include, but is not limited to, the following steps S201 to S202.
S201, receiving voice data forwarded by the server and coming from the voice sending terminal.
In step S201, the voice sending terminal is an electronic device held by the voice-emitting player for participating in the multi-player VR scene experience, and may include, but is not limited to, a smartphone or an all-in-one VR headset. The voice data is obtained by the voice sending terminal collecting the real-time voice of the voice-emitting player (for example, through a built-in sound pickup), transmitted over the Internet to the server that maintains the multi-player VR scene experience, and finally forwarded by the server over the Internet to the local online terminal. The local online terminal is an electronic device held by the local player for participating in the multi-player VR scene experience and, like the voice sending terminal, may include, but is not limited to, a smartphone or an all-in-one VR headset.
S202, judging, according to the current player positions of the local online terminal and the voice sending terminal in the multi-player VR scene, whether the local online terminal is a voice receiving terminal located around the voice sending terminal; if so, outputting and presenting the voice data, and otherwise not presenting the voice data.
In step S202, because the server is an existing VR server that maintains the multi-player VR scene experience and holds the current player positions of all online terminals in that scene, the local online terminal can easily obtain from the server its own current player position and the current player position of the voice sending terminal in the multi-player VR scene. From these two positions the local online terminal can determine the relative distance between the voice-emitting player and the local online player in the multi-player VR scene. Since in a real scene a voice can only be heard by the other players around the speaker, this relative distance determines whether the local online terminal should present the voice data in real time after receiving it; that is, it is judged whether the local online terminal is a voice receiving terminal located around the voice sending terminal, and the voice data is output and presented if it is, and not presented otherwise. The specific presentation manner may be, but is not limited to, playing the sound through a loudspeaker.
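A minimal terminal-side sketch of steps S201 to S202 is shown below, reusing the hypothetical in_sector and attenuate_pcm16 helpers from the earlier sketches and the sender-perspective sectors of embodiment nine; the parameter names and the play callback are assumptions for illustration.

```python
def handle_forwarded_voice(local_pos, local_view_deg,
                           sender_pos, sender_view_deg, sender_view_angle_deg,
                           first_distance, second_distance, voice_samples, play):
    """Illustrative terminal-side handling: decide whether the local online
    terminal is a voice receiving terminal and present the voice accordingly.
    `play` is a callback that outputs audio on the local device."""
    if in_sector(sender_pos, sender_view_deg, sender_view_angle_deg,
                 first_distance, local_pos):
        play(voice_samples)                      # first-type receiver: present as-is
    elif in_sector(sender_pos, (sender_view_deg + 180.0) % 360.0,
                   360.0 - sender_view_angle_deg, second_distance, local_pos):
        play(attenuate_pcm16(voice_samples))     # second-type receiver: attenuate first
    else:
        pass                                     # not around the sender: do not present
```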
Therefore, based on the terminal-side processing of steps S201 to S202, which players receive the voice is determined from the spatial position relationship among players in the multi-player VR scene, so that only the online terminals located around the voice sending terminal present the voice data it sends. While VR realism is preserved, unnecessary presentation of voice data by the local online terminal is effectively reduced, the problem that voice content cannot be distinguished because multiple parties are mixed at the same time is avoided, and the player's multi-player VR scene experience is greatly improved.
Example nine
In this embodiment, on the basis of the eighth embodiment, a specific way of judging whether the local online terminal is a voice receiving terminal located around the voice sending terminal is provided. That is, in step S202, whether the local online terminal is located around the voice sending terminal is determined as follows: judging, according to the current player position of the local online terminal, whether the local online terminal is located in a first sector area centered on the current player position of the voice sending terminal with a first distance as radius; if so, the local online terminal is judged to be a first-type voice receiving terminal located around the voice sending terminal, wherein the sector angle of the first sector area is the current player view angle of the voice sending terminal in the multi-player VR scene.
Similarly to the second embodiment, the current player view angle of the voice sending terminal is the current view angle of the voice-emitting player in the multi-player VR scene; its parameters are transmitted to the server in real time by the voice sending terminal during the multi-player VR scene experience and then forwarded by the server over the Internet to the local online terminal. Thus, if the local online terminal is located in the first sector area, the local online player is in front of the voice-emitting player in the multi-player VR scene, and using the local online terminal as a voice receiving terminal for the voice data closely matches the real scene and improves VR realism.
Considering that in a real scene other online players located behind the voice-emitting player can also hear that player, and to further improve VR realism, if the local online terminal is not located in the first sector area the method further includes: judging, according to the current player position of the local online terminal, whether the local online terminal is located in a second sector area centered on the current player position of the voice sending terminal with a second distance as radius; if so, the local online terminal is judged to be a second-type voice receiving terminal located around the voice sending terminal, wherein the second distance is smaller than the first distance and the sector angle of the second sector area is the current non-player view angle of the voice sending terminal in the multi-player VR scene. The current non-player view angle of the voice sending terminal is the view angle opposite to the current view angle of the voice-emitting player in the multi-player VR scene, and its parameters can be derived from the current player view angle of the voice sending terminal. Because a person's voice propagates mainly forwards, the propagation distance towards the rear is clearly shorter than towards the front; the second distance therefore needs to be set smaller than the first distance in order to preserve the realism of the virtual reality VR. Thus, if the local online terminal is located in the second sector area, the local online player is behind the voice-emitting player in the multi-player VR scene, and using the local online terminal as a voice receiving terminal for the voice data also closely matches the real scene and further improves VR realism.
Considering that in a real scene the propagation path of sound behind the voice-emitting player involves complex effects such as diffraction, reflection and diffusion, which cause noticeable signal fading, added noise and similar phenomena, the method further includes, to further improve the realism of the virtual reality VR: performing attenuation processing on the voice data before presenting it. The attenuation processing can be done in an existing way, which may include, but is not limited to, reducing the volume and/or inserting noise.
Example ten
In this embodiment, on the basis of the eighth embodiment, another way of judging whether the local online terminal is a voice receiving terminal located around the voice sending terminal, different from the ninth embodiment, is provided. That is, in step S202, whether the local online terminal is located around the voice sending terminal is determined as follows: judging, according to the current player position of the voice sending terminal, whether the voice sending terminal is located in a third sector area centered on the current player position of the local online terminal with a third distance as radius; if so, the local online terminal is judged to be a third-type voice receiving terminal located around the voice sending terminal, wherein the sector angle of the third sector area is the current player view angle of the local online terminal in the multi-player VR scene.
Similarly to the third embodiment, the current player view angle of the local online terminal is the current view angle of the local online player in the multi-player VR scene, and its parameters are generated locally. Although deciding whether the local online terminal is around the voice sending terminal on the basis of its own player view angle does not fully match the way sound propagates, so the VR realism is lower than in the ninth embodiment, it allows the local online terminal to select the target it wants to listen to, or to highlight the voice data of a target it is interested in, according to the receiver's view angle, which improves the listening freedom of the local online player.
Similarly to the ninth embodiment, if the voice sending terminal is not located in the third sector area, the method further includes: judging, according to the current player position of the voice sending terminal, whether the voice sending terminal is located in a fourth sector area centered on the current player position of the local online terminal with a fourth distance as radius; if so, the local online terminal is judged to be a fourth-type voice receiving terminal located around the voice sending terminal, wherein the fourth distance is smaller than the third distance and the sector angle of the fourth sector area is the current non-player view angle of the local online terminal in the multi-player VR scene. The current non-player view angle of the local online terminal is the view angle opposite to the current view angle of the local online player in the multi-player VR scene, and its parameters can likewise be derived from the local online terminal's current player view angle. In addition, attenuation processing may also be performed on the voice data before it is presented.
EXAMPLE eleven
As shown in Fig. 6, this embodiment provides a hardware apparatus for implementing the method for processing interactive voice data in a multi-person VR scene according to any one of the eighth to tenth embodiments, comprising a receiving unit and a presenting unit that are communicatively connected; the receiving unit is used for receiving the voice data forwarded by the server and coming from the voice sending terminal; and the presenting unit is used for judging, according to the current player positions of the local online terminal and the voice sending terminal in the multi-player VR scene, whether the local online terminal is a voice receiving terminal located around the voice sending terminal, outputting and presenting the voice data if it is, and not presenting the voice data if it is not.
The working process, working details and technical effects of the foregoing apparatus provided in this embodiment may refer to the method described in any one of the eighth to tenth embodiments, which are not described herein again.
Example twelve
As shown in Fig. 7, this embodiment provides a terminal device for performing the method for processing interactive voice data in a multi-person VR scene according to any one of the eighth to tenth embodiments. The terminal device comprises a memory, a processor and a transceiver which are communicatively connected in sequence, wherein the memory is used for storing a computer program, the transceiver is used for transceiving data, and the processor is used for reading the computer program and performing the method for processing interactive voice data in a multi-person VR scene according to any one of the eighth to tenth embodiments. For example, the memory may include, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a flash memory, a first-in first-out memory (FIFO) and/or a first-in last-out memory (FILO); the transceiver may include, but is not limited to, a WiFi (wireless fidelity) transceiver, a Bluetooth transceiver, a GPRS (General Packet Radio Service) transceiver and/or a ZigBee (IEEE 802.15.4-based low-power local area network protocol) transceiver; and the terminal device may further include other functional modules as required.
For the working process, the working details, and the technical effects of the terminal device provided in this embodiment, reference may be made to the method described in any one of the eighth to tenth embodiments, which is not described herein again.
EXAMPLE thirteen
This embodiment provides a computer-readable storage medium storing instructions for the method for processing interactive voice data in a multi-person VR scene according to any one of the eighth to tenth embodiments; when the instructions are run on a computer, the method according to any one of the eighth to tenth embodiments is performed. The computer-readable storage medium is a carrier for storing data and may include, but is not limited to, a floppy disk, an optical disc, a hard disk, a flash memory, a USB flash drive and/or a memory stick, and the computer may be a general-purpose computer, a special-purpose computer, a computer network or another programmable device.
For the working process, working details, and technical effects of the computer-readable storage medium provided in this embodiment, reference may be made to the method described in any one of the eighth to tenth embodiments; details are not repeated here.
Example fourteen
This embodiment provides a computer program product comprising instructions that, when executed on a computer, cause the computer to perform the method for processing interactive voice data in a multi-person VR scene according to any one of the eighth to tenth embodiments. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device.
The embodiments described above are merely illustrative. The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment. Those of ordinary skill in the art can understand and implement this without inventive effort.
The above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the embodiments may still be modified, or some of their technical features may be replaced by equivalents, and such modifications or replacements do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Finally, it should be noted that the present invention is not limited to the above alternative embodiments, and anyone may derive products in various other forms in light of the present invention. The above detailed description should not be taken as limiting the scope of the invention, which is defined by the claims; the description may be used to interpret the claims.
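For illustration only, the following sketch summarizes the server-side receiver search and attenuation that the foregoing embodiments describe and that claims 1 to 4 below recite. It is not the patented implementation: it assumes a flat two-dimensional scene, implements only the first-type/second-type branch, treats the non-viewing-angle sector as the complement of the viewing-angle sector centered behind the player, assumes 16-bit PCM voice payloads, and uses hypothetical names such as forward_voice, in_sector, and attenuate.

```python
import array
import math
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class PlayerPose:
    x: float        # current player position in the scene plane
    y: float
    facing: float   # view direction, in radians
    fov: float      # current viewing angle, in radians

def in_sector(center: PlayerPose, radius: float, half_angle: float,
              px: float, py: float, forward: bool = True) -> bool:
    """Sector test: (px, py) is within `radius` of `center` and within
    `half_angle` of the facing direction (forward) or of its opposite (rear)."""
    dx, dy = px - center.x, py - center.y
    if math.hypot(dx, dy) > radius:
        return False
    axis = center.facing if forward else center.facing + math.pi
    diff = (math.atan2(dy, dx) - axis + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= half_angle

def attenuate(voice: bytes, factor: float) -> bytes:
    """Placeholder attenuation: scale an even-length 16-bit PCM buffer."""
    samples = array.array('h', voice)
    for i, s in enumerate(samples):
        samples[i] = int(s * factor)
    return samples.tobytes()

def forward_voice(sender_id: str, voice: bytes, poses: Dict[str, PlayerPose],
                  send: Callable[[str, bytes], None],
                  first_distance: float, second_distance: float,
                  attenuation: float = 0.5) -> None:
    """Find receivers around the sender and forward the voice data,
    attenuating it for second-type receivers."""
    sender = poses[sender_id]
    for terminal_id, pose in poses.items():
        if terminal_id == sender_id:
            continue  # all online terminals include the sender; skip it
        # first-type receiver: inside the sender's viewing-angle sector
        if in_sector(sender, first_distance, sender.fov / 2, pose.x, pose.y):
            send(terminal_id, voice)
        # second-type receiver: inside the smaller non-viewing-angle sector
        elif in_sector(sender, second_distance, (2 * math.pi - sender.fov) / 2,
                       pose.x, pose.y, forward=False):
            send(terminal_id, attenuate(voice, attenuation))
```

A production server would obtain the poses from the scene-synchronization service and could delegate attenuation to the client's audio engine; the byte-level scaling here only makes the data flow of claim 4 visible.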

Claims (10)

1. A method for processing interactive voice data in a multi-person VR scene, adapted to be executed on a server side, comprising:
receiving voice data from a voice sending terminal;
searching voice receiving terminals located around the voice sending terminal according to the corresponding current player positions of all online terminals in a multi-player VR scene, wherein all online terminals comprise the voice sending terminal;
and forwarding the voice data to the found voice receiving terminal.
2. The method of processing interactive voice data in a multi-person VR scene of claim 1, wherein, for an online terminal that is not the voice sending terminal, whether it is located around the voice sending terminal is determined as follows:
judging, according to the current player position of the online terminal, whether the online terminal is located in a first sector area centered on the current player position of the voice sending terminal and having a first distance as its radius, and if so, judging that the online terminal is a first type of voice receiving terminal located around the voice sending terminal, wherein the sector angle of the first sector area is the viewing angle of the corresponding current player of the voice sending terminal in the multi-player VR scene;
or judging, according to the current player position of the voice sending terminal, whether the voice sending terminal is located in a third sector area centered on the current player position of the online terminal and having a third distance as its radius, and if so, judging that the online terminal is a third type of voice receiving terminal located around the voice sending terminal, wherein the sector angle of the third sector area is the viewing angle of the corresponding current player of the online terminal in the multi-player VR scene.
3. The method of processing interactive voice data in a multi-person VR scene of claim 2, wherein:
if the online terminal is not located in the first sector area, the method further comprises: judging, according to the current player position of the online terminal, whether the online terminal is located in a second sector area centered on the current player position of the voice sending terminal and having a second distance as its radius, and if so, judging that the online terminal is a second type of voice receiving terminal located around the voice sending terminal, wherein the second distance is smaller than the first distance, and the sector angle of the second sector area is the non-viewing angle of the corresponding current player of the voice sending terminal in the multi-player VR scene;
or, if the voice sending terminal is not located in the third sector area, the method further comprises: judging, according to the current player position of the voice sending terminal, whether the voice sending terminal is located in a fourth sector area centered on the current player position of the online terminal and having a fourth distance as its radius, and if so, judging that the online terminal is a fourth type of voice receiving terminal located around the voice sending terminal, wherein the fourth distance is smaller than the third distance, and the sector angle of the fourth sector area is the non-viewing angle of the corresponding current player of the online terminal in the multi-player VR scene.
4. The method of processing interactive voice data in a multi-person VR scene of claim 3, wherein: for the found second type of voice receiving terminal or fourth type of voice receiving terminal, the voice data is subjected to attenuation processing before being forwarded.
5. A device for processing interactive voice data in a multi-person VR scene, comprising a voice data receiving unit, a receiving terminal searching unit and a voice data forwarding unit that are communicatively connected in sequence, wherein:
the voice data receiving unit is used for receiving voice data from a voice sending terminal;
the receiving terminal searching unit is used for searching voice receiving terminals located around the voice sending terminal according to the corresponding current player positions of all online terminals in a multi-player VR scene, wherein all online terminals comprise the voice sending terminal;
and the voice data forwarding unit is used for forwarding the voice data to the found voice receiving terminal.
6. A computer device, characterized in that: the computer device comprises a memory, a processor and a transceiver that are communicatively connected in sequence, wherein the memory is used for storing a computer program, the transceiver is used for transmitting and receiving data, and the processor is used for reading the computer program and executing the method according to any one of claims 1-4.
7. A computer-readable storage medium, characterized in that: instructions are stored on the computer-readable storage medium, and when the instructions are executed on a computer, the method according to any one of claims 1-4 is performed.
8. A method for processing interactive voice data in a multi-person VR scene, adapted to be executed on a terminal side, comprising:
receiving voice data from a voice sending terminal, forwarded by a server;
and judging, according to the current player positions of the local online terminal and of the voice sending terminal in a multi-player VR scene, whether the local online terminal is a voice receiving terminal located around the voice sending terminal; if so, outputting and playing the voice data, and otherwise not playing the voice data.
9. The method of processing interactive voice data in a multi-person VR scene of claim 8, wherein whether the local online terminal is located around the voice sending terminal is determined as follows:
judging, according to the current player position of the local online terminal, whether the local online terminal is located in a first sector area centered on the current player position of the voice sending terminal and having a first distance as its radius, and if so, judging that the local online terminal is a first type of voice receiving terminal located around the voice sending terminal, wherein the sector angle of the first sector area is the viewing angle of the corresponding current player of the voice sending terminal in the multi-player VR scene;
or judging, according to the current player position of the voice sending terminal, whether the voice sending terminal is located in a third sector area centered on the current player position of the local online terminal and having a third distance as its radius, and if so, judging that the local online terminal is a third type of voice receiving terminal located around the voice sending terminal, wherein the sector angle of the third sector area is the viewing angle of the corresponding current player of the local online terminal in the multi-player VR scene.
10. The method of processing interactive voice data in a multi-person VR scene of claim 9, wherein:
if the local online terminal is not located in the first sector area, the method further comprises: judging, according to the current player position of the local online terminal, whether the local online terminal is located in a second sector area centered on the current player position of the voice sending terminal and having a second distance as its radius, and if so, judging that the local online terminal is a second type of voice receiving terminal located around the voice sending terminal, wherein the second distance is smaller than the first distance, and the sector angle of the second sector area is the non-viewing angle of the corresponding current player of the voice sending terminal in the multi-player VR scene;
or, if the voice sending terminal is not located in the third sector area, the method further comprises: judging, according to the current player position of the voice sending terminal, whether the voice sending terminal is located in a fourth sector area centered on the current player position of the local online terminal and having a fourth distance as its radius, and if so, judging that the local online terminal is a fourth type of voice receiving terminal located around the voice sending terminal, wherein the fourth distance is smaller than the third distance, and the sector angle of the fourth sector area is the non-viewing angle of the corresponding current player of the local online terminal in the multi-player VR scene.
CN202010260564.4A 2020-04-03 2020-04-03 Method for processing interactive voice data in multi-person VR scene Pending CN111475022A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010260564.4A CN111475022A (en) 2020-04-03 2020-04-03 Method for processing interactive voice data in multi-person VR scene
PCT/CN2020/088827 WO2021196337A1 (en) 2020-04-03 2020-05-06 Method for processing interactive voice data in multi-person vr scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010260564.4A CN111475022A (en) 2020-04-03 2020-04-03 Method for processing interactive voice data in multi-person VR scene

Publications (1)

Publication Number Publication Date
CN111475022A true CN111475022A (en) 2020-07-31

Family

ID=71749820

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010260564.4A Pending CN111475022A (en) 2020-04-03 2020-04-03 Method for processing interactive voice data in multi-person VR scene

Country Status (2)

Country Link
CN (1) CN111475022A (en)
WO (1) WO2021196337A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112036388A (en) * 2020-11-06 2020-12-04 华东交通大学 Multi-user experience control method and device based on VR equipment and readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105357295A (en) * 2015-10-30 2016-02-24 小米科技有限责任公司 Voice interaction method, device and system
CN106774830A (en) * 2016-11-16 2017-05-31 网易(杭州)网络有限公司 Virtual reality system, voice interactive method and device
US20180101990A1 (en) * 2016-10-07 2018-04-12 Htc Corporation System and method for providing simulated environment
CN109729109A (en) * 2017-10-27 2019-05-07 腾讯科技(深圳)有限公司 Transmission method and device, storage medium, the electronic device of voice

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106331977B (en) * 2016-08-22 2018-06-12 北京时代拓灵科技有限公司 A kind of virtual reality panorama acoustic processing method of network K songs
CN107103801B (en) * 2017-04-26 2020-09-18 北京大生在线科技有限公司 Remote three-dimensional scene interactive teaching system and control method
CN107066102A (en) * 2017-05-09 2017-08-18 北京奇艺世纪科技有限公司 Support the method and device of multiple VR users viewing simultaneously
CN107562201B (en) * 2017-09-08 2020-07-07 网易(杭州)网络有限公司 Directional interaction method and device, electronic equipment and storage medium
US10602302B1 (en) * 2019-02-06 2020-03-24 Philip Scott Lyren Displaying a location of binaural sound outside a field of view

Also Published As

Publication number Publication date
WO2021196337A1 (en) 2021-10-07

Similar Documents

Publication Publication Date Title
CN110730374B (en) Animation object display method and device, electronic equipment and storage medium
JP4854736B2 (en) Immersive audio communication
US10705790B2 (en) Application of geometric acoustics for immersive virtual reality (VR)
US9525958B2 (en) Multidimensional virtual learning system and method
CN108597530A (en) Sound reproducing method and device, storage medium and electronic device
JP2008547290A5 (en)
CN112337102B (en) Live broadcast data processing method and device, electronic equipment and storage medium
CN107301028B (en) Audio data processing method and device based on multi-person remote call
CN109104586A (en) Special efficacy adding method, device, video call device and storage medium
CN114191823B (en) Multi-view game live broadcast method and device and electronic equipment
CN108270750A (en) CDN switching methods, client and server
CN111475022A (en) Method for processing interactive voice data in multi-person VR scene
CN106685800A (en) Information prompting method and device
CN108369597A (en) Method, system and medium for indicating the group of viewers of video based on context
CN115705839A (en) Voice playing method and device, computer equipment and storage medium
CN105429981A (en) Game voice transmission method, terminal, voice service module and game system
US20220272478A1 (en) Virtual environment audio stream delivery
CN112162638B (en) Information processing method and server in Virtual Reality (VR) viewing
CN214409911U (en) Augmented reality AR experiences device
CN114979934A (en) Sound effect generation method and device, readable medium and electronic equipment
CN116407838A (en) Display control method, display control system and storage medium in game
CN115531878A (en) Game voice playing method and device, storage medium and electronic equipment
CN117201856A (en) Information interaction method, device, electronic equipment and storage medium
CN115767120A (en) Interaction method, device, electronic equipment and computer readable medium
CN117224954A (en) Game processing method, game processing device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200731