CN109086029B - Audio playing method and VR equipment - Google Patents

Audio playing method and VR equipment

Info

Publication number
CN109086029B
CN109086029B (granted publication of application CN201810862521.6A)
Authority
CN
China
Prior art keywords
user
virtual scene
loudspeaker
relative position
sound source
Prior art date
Legal status
Active
Application number
CN201810862521.6A
Other languages
Chinese (zh)
Other versions
CN109086029A (en)
Inventor
赵献静
陈登基
黄安成
Current Assignee
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd
Priority claimed from application CN201810862521.6A
Publication of CN109086029A
Application granted
Publication of CN109086029B


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 — Sound input; Sound output
    • G06F 3/165 — Management of the audio stream, e.g. setting of volume, audio stream path
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/012 — Head tracking input arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Stereophonic System (AREA)

Abstract

Embodiments of the invention provide an audio playing method and a VR device. The method obtains orientation information of a user in a virtual scene provided by the VR device; acquires the position of the user in the virtual scene as the user position; calculates the relative position between the user and a preset sound source in the virtual scene from the user position and the position of the preset sound source; calculates the included angle between the user orientation and the preset sound source from the orientation information and the relative position; and controls each loudspeaker to play audio generated by the VR device according to the relative position and the included angle. With the scheme provided by the embodiments of the invention, the VR device can give the user an auditory experience of being personally on the scene.

Description

Audio playing method and VR equipment
Technical Field
The invention relates to the technical field of virtual reality, in particular to an audio playing method and VR equipment.
Background
A VR device based on VR (Virtual Reality) technology can generate a virtual scene in 3D (three-dimensional) space and simulate senses such as vision, hearing, and touch, enabling the user to observe things in the 3D virtual scene freely, as if personally on the scene. If the user moves while using the VR device, the user's position in the virtual scene changes accordingly; in this case, the VR device performs complex calculations to present the 3D image corresponding to the user's current position in the virtual scene, so that the user retains a sense of presence. That is, an important feature of VR technology is immersion: the user should feel as if inside the virtual scene while using the VR device.
However, existing VR devices focus on the visual experience and neglect the auditory one. For example, the prior art provides a VR helmet with a spatial-audio function: after the helmet generates spatial audio, it must play that audio through external earphones for the user to perceive the spatial effect. Although playing the helmet's audio through external earphones keeps the helmet's structure simple, the user has to wear earphones while using the helmet, so the audio is transmitted directly into the user's ears and can hardly deliver a genuine spatial-audio effect. A VR helmet used with earphones therefore struggles to give the user an auditory sense of being personally on the scene.
Disclosure of Invention
An object of the embodiments of the present invention is to provide an audio playing method and a VR device, so that the VR device can give the user an auditory experience of being personally on the scene.
In order to achieve the above object, an embodiment of the present invention discloses an audio playing method applied to a VR device having a plurality of speakers, where each speaker is located at a different position of the VR device, the method including:
obtaining orientation information of a user in a virtual scene provided by the VR device;
acquiring the position of a user in the virtual scene as the user position;
calculating the relative position from the position of a preset sound source in the virtual scene to the user position by using the user position and the position of the preset sound source in the virtual scene;
calculating an included angle between the user orientation and the preset sound source in the virtual scene by using the orientation information and the relative position;
and controlling each loudspeaker to play audio generated by the VR equipment according to the relative position and the included angle.
Further, calculating the included angle between the user orientation and the preset sound source in the virtual scene by using the orientation information and the relative position includes:
obtaining, according to the orientation information, the orientation direction of the user in the virtual scene;
calculating the included angle between the user orientation and the preset sound source in the virtual scene from the orientation direction and the relative position according to the following expression;
the expression is:
θ = arccos( (a₁b₁ + a₂b₂ + a₃b₃) / ( √(a₁² + a₂² + a₃²) · √(b₁² + b₂² + b₃²) ) )
where a is the orientation direction of the user in the virtual scene; a₁, a₂, a₃ are the components of the orientation direction along the x-, y-, and z-axes of a three-dimensional Cartesian coordinate system; b is the relative position from the preset sound source position to the user position; b₁, b₂, b₃ are the differences between the preset sound source position and the user position along the x-, y-, and z-axes of that coordinate system; and θ is the included angle.
Further, controlling each speaker to play audio generated by the VR device according to the relative position and the included angle includes:
determining the loudspeakers located within a preset range of the preset sound source position as target loudspeakers for playing the audio generated by the VR device;
adjusting the volume of each target loudspeaker to a first volume based on the mapping relation between the relative position and loudspeaker volume;
adjusting the first volume of each target loudspeaker to a second volume based on the mapping relation between the included angle and loudspeaker volume;
and controlling the target loudspeaker to play the audio at the adjusted second volume.
Further, the speakers are evenly disposed at different locations of the VR device in a manner that surrounds a head of a user.
A VR device, comprising: a housing, a plurality of loudspeakers, a loudspeaker controller, a locator, and an information collector; the loudspeakers are positioned at different preset positions of the housing, and the loudspeaker controller, the locator, and the information collector are positioned inside the housing;
the locator acquires the position of the user in the virtual scene as the user position and sends the user position to the loudspeaker controller;
the information collector collects orientation information of the user in the virtual scene provided by the VR device and sends the collected information to the loudspeaker controller;
the loudspeaker controller receives the user position sent by the locator and the orientation information sent by the information collector; calculates the relative position from the position of a preset sound source in the virtual scene to the user position by using the user position and the position of the preset sound source; calculates the included angle between the user orientation and the preset sound source in the virtual scene by using the orientation information and the relative position; and controls each loudspeaker to play audio generated by the VR device according to the relative position and the included angle.
Further, calculating the included angle between the user orientation and the preset sound source in the virtual scene by using the orientation information and the relative position includes:
obtaining, according to the orientation information, the orientation direction of the user in the virtual scene;
calculating the included angle between the user orientation and the preset sound source in the virtual scene from the orientation direction and the relative position according to the following expression;
the expression is:
θ = arccos( (a₁b₁ + a₂b₂ + a₃b₃) / ( √(a₁² + a₂² + a₃²) · √(b₁² + b₂² + b₃²) ) )
where a is the orientation direction; a₁, a₂, a₃ are the components of the orientation direction along the x-, y-, and z-axes of a Cartesian coordinate system whose origin is the user's initial position in the virtual scene; b is the relative position from the preset sound source position to the user position; b₁, b₂, b₃ are the differences between the preset sound source position and the user position along the x-, y-, and z-axes of that coordinate system; and θ is the included angle.
Further, controlling each speaker to play audio generated by the VR device according to the relative position and the included angle includes:
determining the loudspeakers located within a preset range of the preset sound source position as target loudspeakers for playing the audio generated by the VR device;
adjusting the volume of each target loudspeaker to a first volume based on the mapping relation between the relative position and loudspeaker volume;
adjusting the first volume of each target loudspeaker to a second volume based on the mapping relation between the included angle and loudspeaker volume;
and controlling the target loudspeaker to play the audio at the adjusted second volume.
Further, the speakers are evenly disposed at different locations of the VR device in a manner that surrounds a head of a user.
Further, the information collector is a gyroscope.
In yet another aspect of the present invention, there is also provided a computer-readable storage medium having stored therein instructions, which when run on a computer, cause the computer to execute any of the above-described audio playing methods.
In another aspect of the present invention, the present invention also provides a computer program product containing instructions, which when run on a computer, causes the computer to execute any of the audio playing methods described above.
Embodiments of the present invention provide an audio playing method and a VR device, applied to a VR device having a plurality of speakers located at different positions on the device. The VR device obtains orientation information of a user in a virtual scene provided by the VR device; acquires the position of the user in the virtual scene as the user position; calculates the relative position from the position of a preset sound source in the virtual scene to the user position by using the user position and the position of the preset sound source; calculates the included angle between the user orientation and the preset sound source by using the orientation information and the relative position; and controls the speakers to play audio generated by the VR device according to the relative position and the included angle. Compared with the prior art, which plays audio through external earphones, the scheme controls speakers at different positions of the VR device using the calculated relative position and included angle; this can deliver a genuine spatial-audio effect and give the user an auditory sense of being personally on the scene.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
Fig. 1 is a flowchart of an audio playing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of speakers arranged in different directions for playing audio according to an embodiment of the present invention;
fig. 3 is a flowchart of a second audio playing method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a VR device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of a first audio playing method provided by an embodiment of the present invention, applied to a VR device having multiple speakers, where each speaker is located at a different preset position of the VR device, and the method includes the following steps:
S101, obtaining orientation information of a user in a virtual scene provided by the VR device;
The VR device may be, for example, VR glasses or a VR helmet.
Here, the orientation information can be understood as the azimuth information determined by the user's gaze in a Cartesian coordinate system, together with the angular velocity and acceleration about each coordinate axis, while the user is in the virtual scene provided by the VR device. In particular, the orientation information includes an orientation direction, which can be understood as the vector from the origin along the direction of the user's gaze in that Cartesian coordinate system.
S102, acquiring the position of the user in the virtual scene as the user position;
In the virtual scene provided by the VR device, if the scene change perceived by the user is to stay consistent with the user's actions, without perceptible delay, the VR device must sense changes in the user's actions in time. The user's position and moving direction in the virtual scene space must therefore be determined in real time, so that the user's movement stays consistent with the visually perceived scene. On this basis, the audio played for the preset sound source, oriented relative to the user position, keeps the virtual scene the user perceives visually consistent with the one perceived aurally, which enhances the realism of the virtual scene.
It should be noted that S101 may be executed first and then S102 may be executed, or S102 may be executed first and then S101 may be executed; the execution sequence of S101 and S102 is not limited in the embodiment of the present invention.
S103, calculating the relative position from the preset sound source position in the virtual scene to the user position by using the user position and the preset sound source position in the virtual scene;
This step can be understood as: when the preset sound source meets its trigger condition, the position of the preset sound source in the virtual scene is acquired.
Since the position of the preset sound source in the virtual scene and the user position obtained in S102 can each be regarded as a point, the relative position from the preset sound source position to the user position can be calculated as the componentwise difference between the coordinates of the two points. It should be noted that the relative position is a vector whose direction runs from the preset sound source position to the user position.
Through this coordinate difference, the relative position from the preset sound source position to the user position in the virtual scene can be calculated quickly and accurately.
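As a sketch of this coordinate-difference step (in Python, with hypothetical coordinates — the patent does not prescribe an implementation):

```python
def relative_position(source_pos, user_pos):
    """Vector from the preset sound source position to the user position,
    computed as the componentwise coordinate difference."""
    return tuple(u - s for u, s in zip(user_pos, source_pos))

# Hypothetical positions in the virtual scene's Cartesian coordinate system.
source = (2.0, 1.0, 2.0)
user = (2.0, 1.0, 2.5)
print(relative_position(source, user))  # (0.0, 0.0, 0.5)
```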
S104, calculating the included angle between the user orientation and the preset sound source in the virtual scene by using the orientation information and the relative position;
one implementation of implementing S104 may specifically include the following steps a-B:
step A, obtaining the orientation direction of a user in the virtual scene according to the orientation information;
based on the description of the orientation information in S101, the orientation direction belongs to one data information of the acquired orientation information, and the orientation direction is obtained by: and selecting a point from the visual line of the user under a Cartesian coordinate system by using the initial position origin of the user, and taking the relative position from the origin to the point, namely the coordinate difference as the orientation direction of the user in the virtual scene.
Step B, calculating an included angle between the user and the preset pronunciation source in the virtual scene according to the following expression by using the orientation direction and the relative position;
the above expression is:
θ = arccos( (a₁b₁ + a₂b₂ + a₃b₃) / ( √(a₁² + a₂² + a₃²) · √(b₁² + b₂² + b₃²) ) )
where a is the orientation direction of the user in the virtual scene; a₁, a₂, a₃ are the components of the orientation direction along the x-, y-, and z-axes of a three-dimensional Cartesian coordinate system; b is the relative position from the preset sound source position to the user position; b₁, b₂, b₃ are the differences between the preset sound source position and the user position along the x-, y-, and z-axes of that coordinate system; and θ is the included angle.
It should be noted that the origin of the three-dimensional Cartesian coordinate system is the user's initial position in the virtual scene.
This implementation can thus obtain the value of the included angle quickly and accurately from the orientation direction and the relative position.
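The included-angle expression can be sketched in Python as follows (the vector names mirror the definitions above; this is an illustration, not the patent's implementation):

```python
import math

def included_angle(a, b):
    """theta = arccos((a . b) / (|a| |b|)): the angle between the user's
    orientation direction a and the relative-position vector b, in radians."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return math.acos(dot / (norm_a * norm_b))

# User looking along +x while the relative position points along +y:
theta = included_angle((1.0, 0.0, 0.0), (0.0, 2.0, 0.0))
print(math.degrees(theta))  # 90.0
```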
And S105, controlling each loudspeaker to play the audio generated by the VR equipment according to the relative position and the included angle.
Controlling each loudspeaker to play the audio generated by the VR device can be achieved either by switching individual loudspeakers on or off and adjusting the volume of each active loudspeaker, or by directly adjusting the volume at which every loudspeaker plays the audio.
One specific embodiment of S105 includes the following steps C-F:
step C, determining a loudspeaker positioned in a preset range of the position of the preset pronunciation sound source as a target loudspeaker for playing the audio generated by the VR equipment;
in this step, after the position of the preset pronunciation source is determined, the position information of all speakers in the preset range of the preset pronunciation source position is inquired.
For example, a preset pronunciation source position is determined to be (2,1,2) in the virtual scene; taking the position of the preset pronunciation source as (2,1,2) as a central point, inquiring the position information of all the loudspeakers in the range within the radius of 1m, and if the position information of the loudspeakers in the range is inquired to be a loudspeaker A with (-2,1,2.3) and a loudspeaker B with (2,1, 2.2); these queried speaker a and speaker B are the target speakers within the preset range of 1m that need to be determined.
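Step C can be sketched as a simple distance filter (Python; the speaker names and coordinates are hypothetical, following the example above):

```python
import math

def target_speakers(source_pos, speakers, radius=1.0):
    """Return the names of speakers whose positions lie within `radius`
    of the preset sound source position."""
    def dist(p, q):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(p, q)))
    return [name for name, pos in speakers.items()
            if dist(pos, source_pos) <= radius]

# Speakers A and B sit within 1 m of the source at (2, 1, 2); C does not.
speakers = {"A": (2.0, 1.0, 2.3), "B": (2.0, 1.0, 2.2), "C": (5.0, 1.0, 2.0)}
print(target_speakers((2.0, 1.0, 2.0), speakers))  # ['A', 'B']
```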
It should be noted that this step can be understood as selecting, from all the speakers, those located within the preset range of the preset sound source position. The selection may be implemented by switching off the speakers outside the preset range, or by turning their volume down to 0, which is equivalent to switching them off.
Step D, adjusting the volume of each target loudspeaker to a first volume based on the mapping relation between the relative position and loudspeaker volume;
It should be noted that the volume of each speaker ranges from 0 to M, where M is a maximum value determined from practical experience. The volume value is mapped to the relative position: the farther the sound source's relative position, the lower the speaker volume; conversely, the closer the relative position, the higher the volume.
In this step, the volume of each target speaker is determined from the obtained relative position according to the mapping relation between relative position and speaker volume, and the target speaker's volume is adjusted to that determined value.
Step E, adjusting the first volume of each target loudspeaker to a second volume based on the mapping relation between the included angle and loudspeaker volume;
Similarly, the volume of each speaker ranges from 0 to M; the volume value is mapped to the included angle, and the volume of each target speaker is adjusted according to the value of the included angle.
In this step, according to the mapping relation between the included angle and speaker volume and the obtained included angle, the first volume of each target speaker is adjusted to obtain that speaker's second volume.
And F, controlling the target loudspeaker to play the audio at the adjusted second volume.
In this step, the target speakers are controlled to play the audio at the volume adjusted in step E. The resulting sound lets the user perceive the preset sound source as sounding from its relative position and azimuth (the included angle) with respect to the user, as if the user were personally on the scene.
It can be seen that, in this implementation, the target speakers are determined by the preset range around the sound source position; the volume of each target speaker is adjusted to a first volume using the mapping between the relative position and speaker volume, then to a second volume using the mapping between the included angle and speaker volume; and the target speakers are controlled to play the preset sound source's audio at the adjusted second volume, so that the VR device can audibly give the user the experience of being personally on the scene.
For example, suppose the relative position from the preset sound source position to the user position has length 0.5 m, and the three speakers within 1 m of the preset sound source are the first, second, and third speakers; these three are the target speakers, and speakers more than 1 m from the preset sound source can be considered inaudible to the user. If, according to the mapping between relative position and speaker volume, a relative position of 0.5 m corresponds to 10 dB, the volume of each target speaker is adjusted to a first volume of 10 dB. If the user stands at the 0.5 m relative position facing front-right, i.e., the included angle is 30 degrees, then according to the mapping between a 30-degree included angle and the target speakers — first speaker 20 dB, second speaker 30 dB, third speaker 40 dB — the first volume of each target speaker is adjusted from 10 dB to 20 dB, 30 dB, and 40 dB respectively. If instead the user faces front-left at the 0.5 m relative position, i.e., the included angle is −30 degrees, then according to the mapping for a −30-degree included angle — first speaker 40 dB, second speaker 30 dB, third speaker 20 dB — the target speakers are adjusted to 40 dB, 30 dB, and 20 dB respectively.
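The two-stage volume adjustment in this example can be sketched as table lookups (Python; the mapping tables hold the hypothetical decibel values from the worked example, not values prescribed by the patent):

```python
# Hypothetical mapping tables, taken from the worked example above.
DISTANCE_TO_DB = {0.5: 10}          # relative distance (m) -> first volume (dB)
ANGLE_TO_DB = {                     # included angle (deg) -> second volume per speaker (dB)
    30:  {"first": 20, "second": 30, "third": 40},
    -30: {"first": 40, "second": 30, "third": 20},
}

def adjust_volumes(distance, angle, targets):
    """Steps D-F: assign each target speaker a first volume from the distance
    mapping, then override it with the angle mapping's second volume."""
    first = DISTANCE_TO_DB[distance]
    second = ANGLE_TO_DB[angle]
    return {name: second.get(name, first) for name in targets}

print(adjust_volumes(0.5, 30, ["first", "second", "third"]))
# {'first': 20, 'second': 30, 'third': 40}
print(adjust_volumes(0.5, -30, ["first", "second", "third"]))
# {'first': 40, 'second': 30, 'third': 20}
```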
In one implementation provided by the embodiments of the present invention, the speakers are located at different preset positions of the VR device as follows: the speakers are uniformly arranged at different positions of the VR device so as to surround the user's head. Fig. 2 shows a schematic diagram of a user listening to audio played by the speakers on the VR device when they are positioned around the user's head in this way.
Ideally, to let the speakers simulate a real virtual scene, a speaker would be arranged at every angle of a spherical coordinate system centered on the user position. In practice this is impossible, but the speakers can be arranged uniformly at a set angular interval on that sphere. Placed around the user's head at each set angular interval, the speakers can then play a realistic rendering of the sound source in the virtual scene.
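One way to sketch this "uniform angular interval" placement (Python; the 45-degree interval, three elevation rings, and 0.15 m head radius are illustrative assumptions, not values from the patent):

```python
import math

def speaker_positions(interval_deg=45, radius=0.15):
    """Generate speaker positions around the head: rings of azimuths at a
    fixed angular interval, repeated at three elevation angles."""
    positions = []
    steps = 360 // interval_deg
    for elev_step in (-1, 0, 1):        # below ear level, level, above
        elev = math.radians(elev_step * interval_deg)
        for az_step in range(steps):
            az = math.radians(az_step * interval_deg)
            positions.append((radius * math.cos(elev) * math.cos(az),
                              radius * math.cos(elev) * math.sin(az),
                              radius * math.sin(elev)))
    return positions

print(len(speaker_positions()))  # 24 positions: 3 rings x 8 azimuths
```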
Therefore, the audio playing method provided by the embodiment of the present invention calculates the relative position from the preset sound source position and the acquired user position, calculates the included angle from the relative position and the orientation information, and controls each loudspeaker to play the audio generated by the VR device using the calculated relative position and included angle.
Referring to fig. 3, a flowchart of a second audio playing method is provided in an embodiment of the present invention, and is applied to a VR device having multiple speakers, where each speaker is located at a different position of the VR device, and the method includes the following steps:
S201, obtaining orientation information of a user in a virtual scene provided by the VR device;
S202, acquiring the position of the user in the virtual scene as the user position;
S203, calculating the relative position from the preset sound source position in the virtual scene to the user position by using the user position and the preset sound source position in the virtual scene;
S204, calculating the included angle between the user orientation and the preset sound source in the virtual scene by using the orientation information and the relative position;
S201 to S204 are the same as steps S101 to S104 in the embodiment of fig. 1, respectively. The implementation manners mentioned in the embodiment of fig. 1 therefore also apply to the corresponding steps of the embodiment of fig. 3 and achieve the same or similar beneficial effects; they are not repeated here.
S205, determining a loudspeaker located in a preset range of the preset pronunciation sound source position as a target loudspeaker for playing the audio generated by the VR equipment;
s206, based on the mapping relation between the relative position and the loudspeaker volume, adjusting the volume of the target loudspeaker to be used as a first volume;
s207, adjusting the first volume of the target loudspeaker as a second volume based on the mapping relation between the included angle and the loudspeaker volume;
s208, controlling the target loudspeaker to play the audio with the adjusted second volume.
Therefore, in the method provided by this embodiment, the relative position is calculated from the preset sound source position and the acquired user position; the included angle is calculated from the relative position and the orientation information; the target speakers are determined from the preset range around the sound source position; each target speaker's volume is adjusted to a first volume using the mapping between relative position and speaker volume, then to a second volume using the mapping between the included angle and speaker volume; and the target speakers are controlled to play the preset sound source's audio at the determined second volume.
Referring to fig. 4, an embodiment of the present invention provides a schematic structural diagram of a VR device, where the VR device includes: a housing 301, a plurality of speakers 302, a locator 303, an information collector 304, and a speaker controller 305; each speaker is located at a different position of the housing 301, and the speaker controller 305, the locator 303 and the information collector 304 are all located in the housing 301;
the locator 303 acquires the position of the user in the virtual scene as the user position, and sends the user position to the speaker controller 305;
the information collector 304 is configured to collect orientation information of a user in a virtual scene provided by the VR device, and send the collected information to the speaker controller 305;
the speaker controller 305 receives the user position sent by the locator 303 and the orientation information sent by the information collector 304, and calculates a relative position from the preset pronunciation source position to the user position in the virtual scene by using the user position and a preset pronunciation source position in the virtual scene; calculating an included angle between the user and the preset pronunciation sound source in the virtual scene by using the orientation information and the relative position; and controlling each loudspeaker to play audio generated by the VR equipment according to the relative position and the included angle.
The information collector 304 may be a gyroscope, which collects the orientation information of the user in the virtual scene.
The operating principle of the VR device is as follows: on the one hand, the locator 303 obtains the user position in the virtual scene and sends it to the speaker controller 305; on the other hand, the information collector 304 collects the orientation information of the user in the virtual scene provided by the VR device and sends it to the speaker controller 305. The speaker controller 305 receives the user position sent by the locator 303 and the orientation information sent by the information collector 304, calculates the relative position from the preset pronunciation sound source position in the virtual scene to the user position by using the user position and the preset pronunciation sound source position, calculates the included angle between the user and the preset pronunciation sound source by using the orientation information and the relative position, and controls each loudspeaker to play the audio generated by the VR device according to the relative position and the included angle, thereby bringing a realistic spatial audio effect to the user.
Therefore, the VR device provided by this embodiment of the invention calculates the relative position by using the preset pronunciation sound source position and the acquired user position, calculates the included angle by using the relative position and the orientation information, and controls each speaker to play the audio generated by the VR device by using the calculated relative position and included angle. This brings a realistic spatial audio effect to the user and gives the user an immersive auditory experience.
In one implementation, the calculating an angle between the user and the preset pronunciation source in the virtual scene by using the orientation information and the relative position includes:
according to the orientation information, obtaining the orientation direction of the user in the virtual scene;
calculating an included angle between the user orientation and the preset pronunciation sound source in the virtual scene according to the following expression by using the orientation direction and the relative position;
the expression is:
$$\theta = \arccos\left(\frac{a_1 b_1 + a_2 b_2 + a_3 b_3}{\sqrt{a_1^2 + a_2^2 + a_3^2}\,\sqrt{b_1^2 + b_2^2 + b_3^2}}\right)$$
where $a$ is the orientation direction of the user in the virtual scene; $a_1, a_2, a_3$ are the components of the orientation direction along the x-, y-, and z-axes of a three-dimensional Cartesian coordinate system; $b$ is the relative position from the preset pronunciation sound source position to the user position; $b_1, b_2, b_3$ are the differences between the preset pronunciation sound source position and the user position along the x-, y-, and z-axes of the three-dimensional Cartesian coordinate system; and $\theta$ is the included angle.
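As a quick sanity check of the expression, consider a hypothetical worked example (the numbers are illustrative, not from the patent): the user faces along the x-axis and the sound source sits at an equal offset along x and y, so the included angle should be 45 degrees.

```python
import math

a = (1.0, 0.0, 0.0)  # user's orientation direction
b = (1.0, 1.0, 0.0)  # relative position to the preset pronunciation sound source

# theta = arccos((a1*b1 + a2*b2 + a3*b3) / (|a| * |b|))
dot = a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
theta = math.acos(dot / (math.hypot(*a) * math.hypot(*b)))
print(math.degrees(theta))  # 45 degrees, up to floating-point rounding
```

Here the dot product is 1, $|a| = 1$ and $|b| = \sqrt{2}$, so $\theta = \arccos(1/\sqrt{2}) = \pi/4$.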
In one implementation, the controlling the speakers to play the audio generated by the VR device according to the relative position and the included angle includes:
determining a loudspeaker located in a preset range of the preset pronunciation sound source position as a target loudspeaker for playing the audio generated by the VR equipment;
adjusting the volume of the target loudspeaker as a first volume based on the mapping relation between the relative position and the loudspeaker volume;
adjusting the first volume of the target loudspeaker as a second volume based on the mapping relation between the included angle and the loudspeaker volume;
and controlling the target loudspeaker to play the audio at the adjusted second volume.
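The patent leaves the two mapping relations themselves open. As one hedged sketch, a distance-based first mapping and an angle-based second mapping could look like the following; the specific attenuation curves and the 0.3-unit preset range are assumptions for illustration only:

```python
import math

def target_speakers(speaker_positions, source_pos, preset_range=0.3):
    # Keep only speakers mounted within the preset range of the preset
    # pronunciation sound source position (range value is an assumption).
    def dist(p, q):
        return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))
    return [s for s in speaker_positions if dist(s, source_pos) <= preset_range]

def first_volume(rel_pos, base=1.0):
    # Hypothetical relative-position -> volume mapping: the farther the
    # source, the quieter the playback (inverse-distance attenuation).
    return base / (1.0 + math.sqrt(sum(c * c for c in rel_pos)))

def second_volume(vol, theta):
    # Hypothetical included-angle -> volume mapping: loudest when the user
    # faces the source (theta = 0), silent directly behind (theta = pi).
    return vol * 0.5 * (1.0 + math.cos(theta))
```

For example, a source two units away and 90 degrees off-axis would play at `first_volume((2, 0, 0))` ≈ 0.33, halved again by `second_volume` to ≈ 0.17.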
In one implementation, the speakers are uniformly positioned at different locations of the VR device around the user's head.
In one implementation, the information collector may be a gyroscope.
Based on the above description, the VR device may be a VR headset. Taking a VR helmet as an example: the housing is a helmet shell, and the speakers are evenly arranged in the helmet shell. The helmet shell may be provided with a plurality of air vents, which both ventilate the helmet and reduce its weight. For sound insulation, the outer surface of the helmet shell may be provided with a sound-insulating layer to avoid disturbing people nearby. To obtain a better audio effect, the edge of the helmet shell opening may be provided with an air-pressure pad that can be inflated to seal against the user's neck. To let the user immerse more deeply in the virtual scene provided by the VR device, the user often wears the VR helmet while sitting on a VR seat, experiencing the immersion brought by vision, hearing, and touch simultaneously; to cushion the impact brought by the VR seat, a buffer pad, for example a sponge pad, may be provided on the inner surface of the helmet shell at the position corresponding to the top of the user's head.
Because the user's head is enclosed in the helmet for long periods, the temperature inside the helmet keeps rising, which degrades the user experience and can seriously shorten the service life of the electronic components inside the helmet. The following three implementations address this problem.
In a first implementation, the VR helmet may further include a temperature controller fixedly mounted on the inner surface of the helmet shell. The temperature controller measures the temperature inside the shell in real time; when the temperature reaches a preset threshold, the refrigeration function of the temperature controller is started,
and when the temperature falls below the preset threshold, the refrigeration function of the temperature controller is stopped.
It can be seen that this implementation lowers the temperature inside the helmet by providing a temperature controller in the helmet shell, which not only improves the user experience but also relatively extends the service life of the electronic components in the VR helmet.
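The threshold logic of the first implementation can be sketched as follows. This is a Python sketch; the 35 °C threshold is an assumed example value, not specified by the patent:

```python
class TemperatureController:
    """Sketch of the helmet temperature controller: start the refrigeration
    function when the in-shell temperature reaches a preset threshold, and
    stop it once the temperature falls back below the threshold."""

    def __init__(self, threshold_c=35.0):  # threshold value is an assumption
        self.threshold_c = threshold_c
        self.cooling = False

    def on_sample(self, temp_c):
        # Called with each real-time temperature reading from inside the shell.
        self.cooling = temp_c >= self.threshold_c
        return self.cooling
```

For example, `on_sample(36.0)` turns cooling on, and a later `on_sample(34.0)` turns it off again.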
In a second implementation, the VR helmet may further include a temperature controller connected to a system controller and fixedly mounted on the inner surface of the helmet shell.
The temperature controller measures the temperature inside the helmet shell in real time and sends the measured temperature to the system controller; when the temperature obtained by the system controller reaches a preset threshold, the refrigeration function of the temperature controller is started,
and when the temperature obtained by the system controller falls below the preset threshold, the refrigeration function of the temperature controller is stopped.
It can be seen that in this implementation the system controller starts and stops the refrigeration function of the temperature controller, lowering the temperature inside the helmet shell; this not only improves the user experience but also relatively extends the service life of the electronic components in the VR helmet.
In a third implementation, the helmet further includes a fan connected to the system controller and fixedly mounted on the inner surface of the helmet shell.
When the system controller measures that the temperature inside the shell has reached a preset threshold, the fan is started;
when the system controller measures that the temperature inside the shell has fallen below the preset threshold, the fan is stopped.
It can be seen that in this implementation the system controller starts and stops the fan; the fan is both simple in structure and easy to replace, and it likewise lowers the temperature inside the helmet shell, which not only improves the user experience but also relatively extends the service life of the electronic components in the VR helmet.
In another embodiment of the present invention, a computer-readable storage medium is further provided, which stores instructions that, when executed on a computer, cause the computer to execute any of the above-mentioned audio playing methods in the above-mentioned embodiments.
In yet another embodiment of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform any of the above-mentioned audio playing methods of the above-mentioned embodiments.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example from one website, computer, server, or data center to another website, computer, server, or data center via a wired connection (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or a wireless connection (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the apparatus, the electronic device or the storage medium embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and the relevant points can be referred to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (9)

1. An audio playing method applied to a VR device having a plurality of speakers, each speaker being located at a different location of the VR device, the method comprising:
obtaining orientation information of a user in a virtual scene provided by the VR device;
acquiring the position of a user in the virtual scene as the user position;
calculating the relative position from the position of the preset pronunciation sound source in the virtual scene to the position of the user by utilizing the position of the user and the position of the preset pronunciation sound source in the virtual scene;
calculating an included angle between the user orientation and the preset pronunciation sound source in the virtual scene by using the orientation information and the relative position;
determining a loudspeaker located in a preset range of the preset pronunciation sound source position as a target loudspeaker for playing the audio generated by the VR equipment;
controlling a target loudspeaker to play audio generated by the VR device according to the relative position and the included angle, so that the user perceives the preset pronunciation sound source as sounding at the relative position and the included angle.
2. The method of claim 1, wherein said calculating an angle between a user's orientation and said preset source of speech sounds in said virtual scene using said orientation information and said relative position comprises:
according to the orientation information, obtaining the orientation direction of the user in the virtual scene;
calculating an included angle between the user orientation and the preset pronunciation sound source in the virtual scene according to the following expression by using the orientation direction and the relative position;
the expression is:
$$\theta = \arccos\left(\frac{a_1 b_1 + a_2 b_2 + a_3 b_3}{\sqrt{a_1^2 + a_2^2 + a_3^2}\,\sqrt{b_1^2 + b_2^2 + b_3^2}}\right)$$
where $a$ is the orientation direction of a user in the virtual scene; $a_1, a_2, a_3$ are the components of the orientation direction along the x-, y-, and z-axes of a three-dimensional Cartesian coordinate system; $b$ is the relative position from the preset pronunciation sound source position to the user position; $b_1, b_2, b_3$ are the differences between the preset pronunciation sound source position and the user position along the x-, y-, and z-axes of the three-dimensional Cartesian coordinate system; and $\theta$ is the included angle.
3. The method of claim 1, wherein controlling a target speaker to play audio generated by the VR device based on the relative position and the included angle comprises:
adjusting the volume of the target loudspeaker as a first volume based on the mapping relation between the relative position and the loudspeaker volume;
adjusting the first volume of the target loudspeaker as a second volume based on the mapping relation between the included angle and the loudspeaker volume;
and controlling the target loudspeaker to play the audio at the adjusted second volume.
4. The method of any of claims 1-3, wherein the speakers are evenly positioned at different locations of the VR device around a user's head.
5. A VR device, comprising: a housing, a plurality of loudspeakers, a loudspeaker controller, a locator, and an information collector; wherein the loudspeakers are located at different preset positions of the housing, and the loudspeaker controller, the locator, and the information collector are located in the housing;
the locator acquires the position of a user in a virtual scene as the user position and sends the user position to the loudspeaker controller;
the information collector collects orientation information of a user in a virtual scene provided by the VR equipment and sends the collected information to the loudspeaker controller;
the loudspeaker controller receives the user position sent by the locator and the orientation information sent by the information collector, and calculates the relative position from the position of a preset pronunciation sound source in the virtual scene to the user position by using the user position and the position of the preset pronunciation sound source in the virtual scene; calculating an included angle between the user orientation and the preset pronunciation sound source in the virtual scene by using the orientation information and the relative position; determining a loudspeaker located in a preset range of the preset pronunciation sound source position as a target loudspeaker for playing the audio generated by the VR equipment; controlling a target loudspeaker to play audio generated by the VR equipment according to the relative position and the included angle; so that the user feels that the preset pronunciation source relative to the relative position and the included angle is sounding.
6. The apparatus of claim 5, wherein said calculating an angle between a user's orientation and the preset pronunciation source in the virtual scene using the orientation information and relative position comprises:
according to the orientation information, obtaining the orientation direction of the user in the virtual scene;
calculating an included angle between the user orientation and the preset pronunciation sound source in the virtual scene according to the following expression by using the orientation direction and the relative position;
the expression is:
$$\theta = \arccos\left(\frac{a_1 b_1 + a_2 b_2 + a_3 b_3}{\sqrt{a_1^2 + a_2^2 + a_3^2}\,\sqrt{b_1^2 + b_2^2 + b_3^2}}\right)$$
where $a$ is the facing direction; $a_1, a_2, a_3$ are the components of the user's orientation direction along the x-, y-, and z-axes of a Cartesian coordinate system whose origin is the initial position of the user in the virtual scene; $b$ is the relative position from the preset pronunciation sound source position to the user position; $b_1, b_2, b_3$ are the differences between the preset pronunciation sound source position and the user position along the x-, y-, and z-axes of that coordinate system; and $\theta$ is the included angle.
7. The device of claim 5, wherein the controlling a target speaker to play audio generated by the VR device based on the relative position and the angle comprises:
adjusting the volume of the target loudspeaker as a first volume based on the mapping relation between the relative position and the loudspeaker volume;
adjusting the first volume of the target loudspeaker as a second volume based on the mapping relation between the included angle and the loudspeaker volume;
and controlling the target loudspeaker to play the audio at the adjusted second volume.
8. The device of any of claims 5-7, wherein the speakers are evenly positioned at different locations of the VR device around a user's head.
9. The apparatus of claim 8, wherein the information collector is a gyroscope.
CN201810862521.6A 2018-08-01 2018-08-01 Audio playing method and VR equipment Active CN109086029B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810862521.6A CN109086029B (en) 2018-08-01 2018-08-01 Audio playing method and VR equipment


Publications (2)

Publication Number Publication Date
CN109086029A CN109086029A (en) 2018-12-25
CN109086029B true CN109086029B (en) 2021-10-26

Family

ID=64833261

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810862521.6A Active CN109086029B (en) 2018-08-01 2018-08-01 Audio playing method and VR equipment

Country Status (1)

Country Link
CN (1) CN109086029B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110270094A (en) * 2019-07-17 2019-09-24 珠海天燕科技有限公司 A kind of method and device of game sound intermediate frequency control
CN110371051B (en) * 2019-07-22 2021-06-04 广州小鹏汽车科技有限公司 Prompt tone playing method and device for vehicle-mounted entertainment
CN112261337B (en) * 2020-09-29 2023-03-31 上海连尚网络科技有限公司 Method and equipment for playing voice information in multi-person voice
CN112492097B (en) * 2020-11-26 2022-01-11 广州酷狗计算机科技有限公司 Audio playing method, device, terminal and computer readable storage medium
WO2022127747A1 (en) * 2020-12-14 2022-06-23 郑州大学综合设计研究院有限公司 Method and system for real social using virtual scene
CN112882568A (en) * 2021-01-27 2021-06-01 深圳市慧鲤科技有限公司 Audio playing method and device, electronic equipment and storage medium
CN114020235B (en) * 2021-09-29 2022-06-17 北京城市网邻信息技术有限公司 Audio processing method in live-action space, electronic terminal and storage medium
CN117956372A (en) * 2022-10-27 2024-04-30 安克创新科技股份有限公司 Audio processing method, audio playing device and computer readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105607735A (en) * 2015-12-17 2016-05-25 深圳Tcl数字技术有限公司 Output controlling system and method of multimedia equipment
KR101683385B1 (en) * 2015-06-26 2016-12-07 동서대학교산학협력단 360 VR 360 due diligence stereo recording and playback method applied to the VR experience space
CN107168518A (en) * 2017-04-05 2017-09-15 北京小鸟看看科技有限公司 A kind of synchronous method, device and head-mounted display for head-mounted display
CN107360494A (en) * 2017-08-03 2017-11-17 北京微视酷科技有限责任公司 A kind of 3D sound effect treatment methods, device, system and sound system
CN107367839A (en) * 2016-05-11 2017-11-21 宏达国际电子股份有限公司 Wearable electronic installation, virtual reality system and control method


Also Published As

Publication number Publication date
CN109086029A (en) 2018-12-25

Similar Documents

Publication Publication Date Title
CN109086029B (en) Audio playing method and VR equipment
US8160265B2 (en) Method and apparatus for enhancing the generation of three-dimensional sound in headphone devices
CN107168518B (en) Synchronization method and device for head-mounted display and head-mounted display
EP3687190B1 (en) Mapping virtual sound sources to physical speakers in extended reality applications
CN108781341B (en) Sound processing method and sound processing device
KR101673232B1 (en) Apparatus and method for producing vertical direction virtual channel
US10757528B1 (en) Methods and systems for simulating spatially-varying acoustics of an extended reality world
US11356795B2 (en) Spatialized audio relative to a peripheral device
WO2021169689A1 (en) Sound effect optimization method and apparatus, electronic device, and storage medium
JP6111611B2 (en) Audio amplifier
TW201928945A (en) Audio scene processing
US10667073B1 (en) Audio navigation to a point of interest
CN113365191A (en) Music playing method, device, equipment and storage medium
US11102604B2 (en) Apparatus, method, computer program or system for use in rendering audio
CN112612444A (en) Sound source position positioning method, sound source position positioning device, electronic equipment and storage medium
US20210343296A1 (en) Apparatus, Methods and Computer Programs for Controlling Band Limited Audio Objects
JP6756777B2 (en) Information processing device and sound generation method
US20220171593A1 (en) An apparatus, method, computer program or system for indicating audibility of audio content rendered in a virtual space
CN113766397A (en) Sound positioning control method of stereo earphone, stereo earphone and related equipment
JP2017079457A (en) Portable information terminal, information processing apparatus, and program
CN106057207B (en) Remote stereo omnibearing real-time transmission and playing method
WO2023226161A1 (en) Sound source position determination method, device, and storage medium
WO2024134736A1 (en) Head-mounted display device and stereophonic sound control method
WO2023112144A1 (en) Vibration sensing position control apparatus, vibration sensing position control method, and vibration sensing position control program
TW202320556A (en) Audio adjustment based on user electrical signals

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant