CN107632704B - Mixed reality audio control method based on optical positioning and service equipment - Google Patents


Info

Publication number
CN107632704B
CN107632704B (application number CN201710781258.3A)
Authority
CN
China
Prior art keywords
virtual
calibration object
sound source
determining
virtual sound
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710781258.3A
Other languages
Chinese (zh)
Other versions
CN107632704A (en)
Inventor
沈时进 (Shen Shijin)
盛中华 (Sheng Zhonghua)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Leafun Culture Science and Technology Co Ltd
Original Assignee
Guangzhou Leafun Culture Science and Technology Co Ltd
Application filed by Guangzhou Leafun Culture Science and Technology Co Ltd filed Critical Guangzhou Leafun Culture Science and Technology Co Ltd
Priority to CN201710781258.3A priority Critical patent/CN107632704B/en
Publication of CN107632704A publication Critical patent/CN107632704A/en
Application granted granted Critical
Publication of CN107632704B publication Critical patent/CN107632704B/en


Abstract

A mixed reality audio control method and service equipment based on optical positioning are disclosed. The method comprises: acquiring a plurality of shot images, taken by a plurality of high-speed cameras, that contain a calibration object; determining the physical position of the calibration object in real space according to the plurality of shot images; obtaining the virtual position in virtual space to which the physical position maps; calculating the output time delay between the sound channels of the audio playing equipment worn by the MR head display wearer according to the known position of any virtual sound source in the virtual space and the virtual position of the calibration object; and filtering the audio signal corresponding to the virtual sound source with the digital filter corresponding to each sound channel, then outputting the resulting per-channel audio signals to the audio playing equipment according to the inter-channel output time delays. In this way, the matching degree between the audio signal corresponding to the virtual sound source and the user's position can be improved.

Description

Mixed reality audio control method based on optical positioning and service equipment
Technical Field
The invention relates to the technical field of audio control, in particular to a mixed reality audio control method and service equipment based on optical positioning.
Background
Mixed reality (MR) technology is a further development of virtual reality technology that pursues immersion by blurring the boundary between real space and virtual space, enhancing the realism of the user experience. Most current mixed reality technologies improve immersion only visually, yet hearing is also an important factor in immersion. When a user moves, the sound arriving from a given source should change accordingly; however, most audio control schemes used in current mixed reality systems cannot vary the sound emitted by a virtual sound source as the user's position changes, and so cannot let the user perceive the source position and distance corresponding to the current user position. When an objectively existing real sound source and a virtual sound source play simultaneously, such schemes give the user a strong sense of dissonance and degrade the user experience.
Disclosure of Invention
The embodiment of the invention discloses a mixed reality audio control method and service equipment based on optical positioning, which can improve the matching degree between an audio signal corresponding to a virtual sound source and a user position.
The embodiment of the invention discloses a mixed reality audio control method based on optical positioning in a first aspect, which comprises the following steps:
acquiring a plurality of shot images which are shot by a plurality of high-speed cameras and contain calibration objects, wherein the number of the shot images is at least three, and the calibration objects are used for calibrating the heads of MR head display wearers;
determining the physical position of the calibration object in the real space according to the plurality of shot images;
mapping the physical position to a virtual space to obtain a virtual position of the calibration object in the virtual space;
calculating output time delay between each sound channel of the audio playing equipment worn by the MR head display wearer according to the known position of any virtual sound source in the virtual space and the virtual position of the calibration object;
and filtering the audio signals corresponding to the virtual sound source with the digital filters corresponding to the sound channels to obtain the audio signal corresponding to each sound channel, and outputting the audio signal corresponding to each sound channel to the audio playing equipment according to the output time delay between the sound channels.
As an optional implementation manner, in the first aspect of the embodiments of the present invention, the calculating an output delay between channels of an audio playback device worn by an MR headset wearer according to a known position of any virtual sound source in a virtual space and the virtual position of the calibration object includes:
determining the relative distance between any virtual sound source and the calibration object according to the known position of the virtual sound source in the virtual space and the virtual position of the calibration object;
controlling the MR head display to acquire scenes in a visual field range by using a binocular camera simulating the work of human eyes;
judging whether a positioning feature point exists in the scene or not, and if so, determining the visual angle direction of the MR head display wearer according to the known position of the positioning feature point in the real space and the physical position of the calibration object;
determining the relative orientation between the calibration object and the virtual sound source according to the visual angle direction and the known position of the virtual sound source in the virtual space;
and calculating the output time delay between the sound channels of the audio playing equipment worn by the MR head display wearer according to the relative distance and the relative direction.
As an alternative implementation, in the first aspect of the embodiment of the present invention, the determining the physical position of the calibration object in the real space according to the multiple captured images includes:
calculating the relative position between the calibration object and a high-speed camera shooting the shot image according to each shot image in the shot images;
and calculating the physical position of the calibration object in the real space according to the known positions of the high-speed cameras in the real space and the relative position between the calibration object and each high-speed camera.
As an optional implementation manner, in the first aspect of this embodiment of the present invention, the method further includes:
if no positioning feature point exists in the scene, acquiring the angular rate of the MR head display as it rotates;
and determining the visual angle direction of the MR head display wearer according to the angular rate, and executing the step of determining the relative orientation between the calibration object and the virtual sound source according to the visual angle direction and the known position of the virtual sound source in the virtual space.
As an optional implementation manner, in the first aspect of this embodiment of the present invention, after the obtaining the virtual position of the calibration object in the virtual space, the method further includes:
judging whether the virtual position is in a designated area, wherein the designated area takes the virtual sound source as a center and takes a preset minimum distance as a radius;
if yes, directly outputting the audio signal corresponding to the virtual sound source to the audio playing equipment;
if not, executing the step of calculating the output time delay between the sound channels of the audio playing equipment worn by the MR head display wearer according to the known position of any virtual sound source in the virtual space and the virtual position of the calibration object.
A second aspect of an embodiment of the present invention discloses a service device, including:
the device comprises an acquisition unit, a calibration unit and a control unit, wherein the acquisition unit is used for acquiring a plurality of shot images which are shot by a plurality of high-speed cameras and contain calibration objects, the number of the shot images is at least three, and the calibration objects are used for calibrating the head of an MR head display wearer;
the position determining unit is used for determining the physical position of the calibration object in the real space according to the plurality of shot images;
the mapping unit is used for mapping the physical position to a virtual space to obtain the virtual position of the calibration object in the virtual space;
the processing unit is used for calculating output time delay among sound channels of the audio playing equipment worn by the MR head display wearer according to the known position of any virtual sound source in a virtual space and the virtual position of the calibration object;
and the output unit is used for filtering the audio signals corresponding to the virtual sound source by using the digital filters corresponding to the sound channels to obtain the audio signal corresponding to each sound channel, and outputting the audio signal corresponding to each sound channel to the audio playing equipment according to the output time delay between the sound channels.
As an optional implementation manner, in a second aspect of the embodiment of the present invention, the processing unit includes:
the distance determining module is used for determining the relative distance between the virtual sound source and the calibration object according to the known position of any virtual sound source in a virtual space and the virtual position of the calibration object;
the control module is used for controlling the MR head display to acquire scenes in a visual field range by using a binocular camera for simulating the work of human eyes;
the judging module is used for judging whether the scene has the positioning feature points or not;
the visual angle determining module is used for determining the visual angle direction of the MR head display wearer according to the known position of the positioning characteristic point in the real space and the physical position of the calibration object when the judging module judges that the positioning characteristic point exists in the scene;
the direction determining module is used for determining the relative direction between the calibration object and the virtual sound source according to the visual angle direction and the known position of the virtual sound source in the virtual space;
and the processing module is used for calculating the output time delay between each sound channel of the audio playing equipment worn by the MR head display wearer according to the relative distance and the relative direction.
As an optional implementation manner, in a second aspect of the embodiment of the present invention, the position determining unit includes:
the first calculation module is used for calculating the relative position between the calibration object and a high-speed camera for shooting the shot image according to each shot image in the shot images;
and the second calculation module is used for calculating the physical position of the calibration object in the real space according to the known positions of the high-speed cameras in the real space and the relative position between the calibration object and each high-speed camera.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the processing unit further includes:
the acquisition module is used for acquiring the angular rate of the MR head display as it rotates, when the judging module judges that no positioning feature point exists in the scene;
and the visual angle determining module is further used for determining the visual angle direction of the MR head display wearer according to the angular rate, and triggering the orientation determining module to execute the step of determining the relative orientation between the calibration object and the virtual sound source according to the visual angle direction and the known position of the virtual sound source in the virtual space.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the service device further includes:
a judging unit, configured to judge whether the virtual position is within a specified area, where the specified area takes the virtual sound source as a center and a preset minimum distance as a radius;
the output unit is further configured to directly output the audio signal corresponding to the virtual sound source to the audio playing device when the judging unit judges that the virtual position is within the designated area;
the processing unit is specifically configured to, when the determining unit determines that the virtual position is not within the specified area, calculate an output delay between channels of an audio playing device worn by an MR head display wearer according to a known position of any virtual sound source in a virtual space and the virtual position of the calibration object.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
the service equipment can calculate the physical position of the calibration object in a real space according to a plurality of shot images which are shot by a plurality of high-speed cameras and contain the head calibration object of the MR head display wearer, and then can determine the virtual position of the calibration object in a virtual space according to the mapping relation between the real space and the virtual space, so that the output time delay between each sound channel can be calculated according to the known position of any virtual sound source and the virtual position of the calibration object, the audio signals corresponding to the virtual sound sources are filtered by using the digital filters corresponding to the sound channels, the audio signals corresponding to the sound channels are obtained, and the audio signals corresponding to the sound channels are respectively output to the audio playing equipment according to the output time delay between the sound channels. Therefore, in the embodiment of the present invention, the service device may monitor the position of the user in real time, process the audio signal corresponding to the virtual sound source in real time according to the relationship between the position of the user and the position of the virtual sound source, adjust the time difference between the audio signals heard by the ears of the user in real time, and simulate the propagation process of sound emitted from the sound source to the ears of the user in the real space by using the digital filter, so that the audio signal corresponding to the virtual sound source changes along with the change of the position of the user, thereby improving the matching degree between the audio signal corresponding to the virtual sound source and the position of the user.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flowchart of a mixed reality audio control method based on optical positioning according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of another mixed reality audio control method based on optical positioning according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a service device disclosed in the embodiment of the present invention;
fig. 4 is a schematic structural diagram of another service device disclosed in the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The embodiment of the invention discloses a mixed reality audio control method and service equipment based on optical positioning, which can improve the matching degree between an audio signal corresponding to a virtual sound source and a user position. The following are detailed below.
Example one
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating a mixed reality audio control method based on optical positioning according to an embodiment of the present invention. The method described in fig. 1 is suitable for a service device connected to an MR head display; the embodiment of the present invention is not limited in this regard. For example, the service device connected to the MR head display may be a personal computer, a smart phone, a cloud server, and the like, and the embodiment of the present invention is not limited thereto. The operating system of the service device may include, but is not limited to, a Windows operating system, a Linux operating system, an Android operating system, an iOS operating system, and the like. As shown in fig. 1, the mixed reality audio control method based on optical positioning may include the following steps:
101. the service device acquires a plurality of shot images including the calibration object shot by the plurality of high-speed cameras.
In an embodiment of the invention, the number of captured images is at least three, and the calibration object is used for calibrating the head of the MR head display wearer. The service equipment optically positions the calibration object from the images of it captured by the high-speed cameras. Optionally, an active optical positioning mode, in which the calibration object emits light, may be used: for example, the calibration object may be an infrared LED lamp on the user's head, and the service device can distinguish different infrared LED lamps by driving them to emit infrared light at different frequencies, thereby distinguishing the user's head from other parts of the body, or distinguishing multiple users. Alternatively, the calibration object may emit visible light, and the service device may distinguish different calibration objects by visible light of different colors; the embodiment of the present invention is not limited in this regard. A passive optical positioning mode, in which the calibration object does not emit light, may also be used: for example, the calibration object on the user's head may be a printed marker such as a picture or a two-dimensional code, and the service device can distinguish different calibration objects by image recognition; the embodiment of the present invention is not limited in this regard.
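As a sketch of how active markers blinking at different frequencies could be told apart, the following assumes the service device has already extracted a per-frame brightness series for one detected blob. The marker IDs, blink frequencies, and 240 fps camera rate are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Hypothetical marker table and camera frame rate (assumptions, not from the patent).
MARKER_FREQS_HZ = {1: 10.0, 2: 20.0, 3: 40.0}  # marker id -> blink frequency
CAMERA_FPS = 240.0

def identify_marker(brightness: np.ndarray) -> int:
    """Return the marker id whose blink frequency best matches the
    dominant frequency in a blob's brightness time-series."""
    spectrum = np.abs(np.fft.rfft(brightness - brightness.mean()))
    freqs = np.fft.rfftfreq(len(brightness), d=1.0 / CAMERA_FPS)
    dominant = freqs[np.argmax(spectrum)]
    return min(MARKER_FREQS_HZ, key=lambda m: abs(MARKER_FREQS_HZ[m] - dominant))

# Example: one second of a blob blinking as a 20 Hz square wave.
t = np.arange(240) / CAMERA_FPS
blob = 0.5 + 0.5 * np.sign(np.sin(2 * np.pi * 20.0 * t))
marker_id = identify_marker(blob)
```

A real tracker would first segment the blobs per frame and associate them across frames before any per-blob frequency analysis.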
102. The service equipment determines the physical position of the calibration object in the real space according to the plurality of shot images.
As an optional implementation manner, in the embodiment of the present invention, the manner in which the service device determines the physical position of the calibration object in the real space according to the multiple captured images may specifically be:
calculating a relative position between the calibration object and a high-speed camera for capturing the captured image according to each of the plurality of captured images;
and calculating the physical position of the calibration object in the real space according to the known positions of the high-speed cameras in the real space and the relative position between the calibration object and each high-speed camera.
In the embodiment of the invention, when the user's head rotates or is displaced, the absolute position of the virtual sound source does not change, but its position relative to the user's head does. For example, if a guitar is playing in front of the user and the user moves to the right of the guitar, the user should hear the guitar from the left. To enhance the realism of the user's experience in the virtual space, the user's head position therefore needs to be tracked. By performing steps 101 and 102 above, the rotation and displacement of the target (i.e. the user's head) can be tracked by optical positioning, providing position information for the subsequent audio processing.
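The positioning of steps 101 and 102 can be sketched as the least-squares intersection of the cameras' viewing rays. The camera positions and rays below are illustrative; a real system would derive each ray from calibrated camera intrinsics and the marker's pixel coordinates:

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares point closest to all rays (origin + t * direction)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projects onto the plane orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Three cameras at assumed known positions, all sighting the point (1, 2, 3).
target = np.array([1.0, 2.0, 3.0])
cams = [np.array(p, dtype=float) for p in ([0, 0, 0], [5, 0, 0], [0, 5, 0])]
rays = [target - c for c in cams]
estimate = triangulate(cams, rays)
```

With noisy real rays the same solve returns the point minimizing the summed squared distance to all rays, which is why at least three well-separated cameras help.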
103. The service equipment maps the physical position of the calibration object to the virtual space to obtain the virtual position of the calibration object in the virtual space.
In the embodiment of the present invention, the service device may generate a virtual space in advance, with a mapping relationship between the virtual space and physical space, so that an unbounded virtual space can be mapped within a limited physical space. Because the relationship between a virtual sound source in the virtual space and the calibration object (i.e., the user's head) is defined relative to the virtual space, after determining the physical position of the calibration object, the service device needs to convert that physical position into a virtual position according to the mapping relationship between the virtual space and the physical space.
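A minimal sketch of this physical-to-virtual conversion, assuming a simple uniform scale-and-offset mapping; the patent does not specify the mapping's form, and the scale and offset values here are illustrative:

```python
import numpy as np

# Assumed mapping parameters (not from the patent).
SCALE = 2.0                           # 1 m in physical space -> 2 m in virtual space
OFFSET = np.array([10.0, 0.0, 5.0])   # virtual-space origin of the tracked volume

def physical_to_virtual(p_phys: np.ndarray) -> np.ndarray:
    """Map a tracked physical position into virtual-space coordinates."""
    return SCALE * np.asarray(p_phys, dtype=float) + OFFSET

head_virtual = physical_to_virtual([1.0, 1.5, 0.0])
```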
104. The service equipment calculates the output time delay between the sound channels of the audio playing equipment worn by the MR head display wearer according to the known position of any virtual sound source in the virtual space and the virtual position of the calibration object.
As an optional implementation manner, in the embodiment of the present invention, a manner of the service device executing step 104 may specifically be:
determining the relative distance between a virtual sound source and a calibration object according to the known position of any virtual sound source in a virtual space and the virtual position of the calibration object;
controlling an MR head display to acquire scenes in a visual field range by using a binocular camera simulating the work of human eyes;
judging whether a positioning feature point exists in the scene or not, and if so, determining the visual angle direction of the MR head display wearer according to the known position of the positioning feature point in the real space and the physical position of the calibration object;
determining the relative orientation between the calibration object and the virtual sound source according to the visual angle direction and the known position of the virtual sound source in the virtual space;
and calculating the output time delay between the sound channels of the audio playing equipment worn by the MR head display wearer according to the relative distance and the relative direction.
In the embodiment of the invention, the sound arriving at the left ear and the right ear differs in time. For example, if the sound source is on the user's left, the sound reaches the left ear first and then the right ear. The relative distance and relative orientation between the virtual sound source and the user's head are the key factors determining this delay, and the sound channels of the audio playing equipment worn by the MR head display wearer are used to simulate sound arriving at the user's ears from various directions; therefore, the output time delay between the sound channels needs to be calculated.
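One common way to estimate such an inter-channel delay is the classic Woodworth interaural-time-difference model. The sketch below uses textbook head-radius and speed-of-sound values, which are assumptions rather than figures from the patent:

```python
import math

# Standard textbook constants (assumptions, not patent values).
HEAD_RADIUS_M = 0.0875
SPEED_OF_SOUND = 343.0  # m/s

def interaural_delay(azimuth_rad: float) -> float:
    """Extra time (seconds) the sound takes to reach the far ear, for a source
    at the given azimuth (0 = straight ahead, pi/2 = fully to one side)."""
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (azimuth_rad + math.sin(azimuth_rad))

# A source 90 degrees to the left delays the right channel by roughly 0.66 ms.
itd = interaural_delay(math.pi / 2)
```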
105. The service equipment filters the audio signals corresponding to the virtual sound source with the digital filters corresponding to the sound channels to obtain the audio signal corresponding to each sound channel, and outputs the audio signal corresponding to each sound channel to the audio playing equipment according to the output time delay between the sound channels.
In the embodiment of the present invention, the sounds reaching the two ears differ not only in arrival time but also in frequency content and sound level. For example, if the sound source is on the user's left, the level heard by the left ear is higher than that heard by the right ear; as the sound travels to the right ear, the high-frequency components are shadowed by the head while the low-frequency components diffract into the ear. Combining the time difference, the level difference, and the frequency difference, a listener in real life localizes a sound source from the differences between what the two ears hear. For a mixed reality experience to restore this real-life behavior in the virtual space, the frequency, level, and delay of the sound signals delivered to the user's left and right ears must each be processed, so that the user can localize the virtual sound source in the virtual space.
The digital filter can simulate how various factors affect the level and frequency of sound as it propagates from the source to a person's ears. The digital filter corresponding to each sound channel can therefore process the audio signal corresponding to the virtual sound source to obtain the audio signal for that channel, and the per-channel audio signals can simulate sound arriving from each direction in real space. The digital filters may be fixed, so that each channel's filter does not change once set, or adjustable, so that the filter parameters can be tuned to different situations.
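A minimal sketch of per-channel filtering as FIR convolution. The two short impulse responses are placeholders standing in for measured head-related responses, which the patent does not specify:

```python
import numpy as np

def filter_channel(signal: np.ndarray, impulse_response: np.ndarray) -> np.ndarray:
    """Apply one channel's FIR filter and keep the output the same length."""
    return np.convolve(signal, impulse_response)[: len(signal)]

source = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # unit impulse as a test signal
h_left = np.array([0.9, 0.1])                   # near ear: louder, crisper (placeholder)
h_right = np.array([0.4, 0.3, 0.1])             # far ear: quieter, more smeared (placeholder)

left = filter_channel(source, h_left)
right = filter_channel(source, h_right)
```

Feeding a unit impulse through each filter recovers its impulse response, which is a quick way to sanity-check per-channel filters.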
Optionally, after determining the relative distance and relative orientation between the sound source and the calibration object (i.e., the user's head), the service device may adjust the distance- and orientation-related parameters of the digital filter, so as to obtain an audio signal matching that relative distance and orientation.
Further optionally, the service device may also adjust the environment-related parameters of the digital filter according to the virtual position. For example, when the virtual position is determined to be in an open outdoor environment in the virtual space, the service device may adjust the environment-related parameters of the digital filter to match an outdoor environment; when the virtual position is in a closed indoor environment, it may adjust them to match an indoor environment. The influence of environmental factors on sound propagation is thereby also taken into account when processing the audio signal, making the propagation simulated by the digital filter closer to sound propagation in real space and enhancing the realism experienced by the user.
In step 105, the digital filter can process the sound frequency and sound level, and a digital delay circuit can realize the delayed output, so that the sound played by the audio playing equipment is more lifelike to the user, who can then localize the virtual sound source in the virtual space. Generally, a two-channel output can simulate the sound heard by the left and right ears, and more channels can be used for better stereo reproduction; the embodiment of the invention is not limited in this regard.
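The delayed output can be sketched as a per-channel sample shift; the 48 kHz sample rate is an assumption, as the patent mentions a digital delay circuit but gives no figures:

```python
import numpy as np

SAMPLE_RATE = 48000  # assumed output sample rate

def delay_samples(signal: np.ndarray, delay_s: float) -> np.ndarray:
    """Delay a channel by prepending zeros, keeping the buffer length fixed."""
    n = int(round(delay_s * SAMPLE_RATE))
    return np.concatenate([np.zeros(n), signal])[: len(signal)]

sig = np.arange(1.0, 6.0)                    # [1, 2, 3, 4, 5]
near = delay_samples(sig, 0.0)               # near-ear channel: unchanged
far = delay_samples(sig, 2 / SAMPLE_RATE)    # far-ear channel: two samples later
```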
It can be seen that, in the mixed reality audio control method based on optical positioning described in fig. 1, the service equipment can calculate the physical position of the calibration object in real space from a plurality of images, captured by a plurality of high-speed cameras, containing the calibration object on the head of the MR head display wearer, thereby tracking the rotation and displacement of the user's head, monitoring the user's position in real time, and providing position information for subsequent audio signal processing. After obtaining the physical position of the calibration object, the service equipment can obtain its virtual position in the virtual space from the mapping relationship between real space and virtual space. The output time delay between the sound channels can then be calculated from the known position of any virtual sound source and the virtual position of the calibration object; the digital filters corresponding to the sound channels filter the audio signal corresponding to the virtual sound source to obtain the audio signal for each sound channel, and each channel's signal is output to the audio playing equipment according to the inter-channel output time delay. The time difference between the audio signals heard by the user's two ears can thus be adjusted in real time, and the digital filters simulate how sound propagates from the source to the user's ears in real space, so that the audio signal corresponding to the virtual sound source changes as the user's position changes, improving the matching degree between the audio signal corresponding to the virtual sound source and the user's position.
Example two
Referring to fig. 2, fig. 2 is a schematic diagram illustrating another mixed reality audio control method based on optical positioning according to an embodiment of the present invention. As shown in fig. 2, the mixed reality audio control method based on optical positioning may include the steps of:
201. The service device acquires a plurality of shot images including the calibration object shot by the plurality of high-speed cameras.
In an embodiment of the invention, the number of the plurality of captured images is at least three, and the calibration object is used for calibrating the head of the MR head display wearer.
202. The service equipment determines the physical position of the calibration object in the real space according to the plurality of shot images.
As an optional implementation manner, in the embodiment of the present invention, the manner in which the service device determines the physical position of the calibration object in the real space according to the multiple captured images may specifically be:
calculating a relative position between the calibration object and a high-speed camera for capturing the captured image according to each of the plurality of captured images;
and calculating the physical position of the calibration object in the real space according to the known positions of the high-speed cameras in the real space and the relative position between the calibration object and each high-speed camera.
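The two sub-steps above can be illustrated with a simplified 2-D triangulation from two cameras with known positions and bearing angles. This is a stand-in for the patent's multi-image computation; the 2-D reduction, function name, and sample values are assumptions for demonstration only.

```python
# Illustrative 2-D triangulation (NOT the patent's algorithm): each camera
# at a known position reports a bearing angle toward the marker; the marker
# position is the intersection of the two bearing rays.
import math

def triangulate_2d(p1, theta1, p2, theta2):
    """Intersect rays p1 + t*(cos,sin)(theta1) and p2 + s*(cos,sin)(theta2)."""
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    # Solve p1 + t*d1 = p2 + s*d2 for t via Cramer's rule.
    det = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * (-d2[1]) - ry * (-d2[0])) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two cameras at (0, 0) and (4, 0); a marker at (2, 2) is seen at
# bearings of 45 degrees and 135 degrees respectively.
pos = triangulate_2d((0.0, 0.0), math.pi / 4, (4.0, 0.0), 3 * math.pi / 4)
```

With three or more cameras, as the embodiment requires, the same idea extends to 3-D and the extra views provide redundancy against occlusion.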
203. The service equipment maps the physical position of the calibration object to the virtual space to obtain the virtual position of the calibration object in the virtual space.
204. The service equipment judges whether the virtual position of the calibration object is within the designated area; if so, step 205 is executed; if not, steps 206 to 207 are executed.
In an embodiment of the present invention, the designated area takes the virtual sound source as a center, and takes a preset minimum distance as a radius.
205. The service equipment directly outputs the audio signal corresponding to the virtual sound source to the audio playing equipment, and the process is ended.
In the embodiment of the invention, when the user's head is close to any real sound source in the real space, the difference between the sounds reaching the two ears is very small and difficult for the user to distinguish. Correspondingly, in the virtual space, when the virtual position of the calibration object (namely the user's head) is within the designated area centered on any virtual sound source, the service equipment can output the audio signal corresponding to that virtual sound source directly to the audio playing equipment without any processing. This does not affect the user experience, while reducing the computation load of the service equipment, saving computing resources for other operations such as position tracking, and improving the response speed of the service equipment.
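The near-field bypass just described might be sketched as follows. The minimum-distance value and function names are invented for illustration; the patent only specifies that the designated area is a sphere of preset minimum radius around the virtual sound source.

```python
# Illustrative near-field bypass check (values are assumptions).
import math

MIN_DISTANCE = 0.3  # assumed preset minimum distance (radius), in metres

def within_designated_area(head_pos, source_pos, min_distance=MIN_DISTANCE):
    """True if the wearer's virtual position lies inside the sphere centred
    on the virtual sound source; per-channel processing is then skipped."""
    return math.dist(head_pos, source_pos) <= min_distance

near = within_designated_area((0.1, 0.0, 0.0), (0.0, 0.0, 0.0))  # inside
far = within_designated_area((1.0, 0.0, 0.0), (0.0, 0.0, 0.0))   # outside
```

When the check returns true, the unfiltered source signal is forwarded as-is (step 205); otherwise the delay and filtering pipeline of steps 206-207 runs.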
206. The service equipment calculates the output time delay between the sound channels of the audio playing equipment worn by the MR head display wearer according to the known position of any virtual sound source in the virtual space and the virtual position of the calibration object.
As an optional implementation manner, in the embodiment of the present invention, the manner in which the service device executes step 206 may specifically be:
determining the relative distance between a virtual sound source and a calibration object according to the known position of any virtual sound source in a virtual space and the virtual position of the calibration object;
controlling the binocular camera of the MR head display, which simulates the working of human eyes, to acquire the scene within the field of view;
judging whether a positioning feature point exists in the scene, and if so, determining the visual angle direction of the MR head display wearer according to the known position of the positioning feature point in the real space and the physical position of the calibration object;
determining the relative orientation between the calibration object and the virtual sound source according to the visual angle direction and the known position of the virtual sound source in the virtual space;
and calculating the output time delay between the sound channels of the audio playing equipment worn by the MR head display wearer according to the relative distance and the relative orientation.
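The last sub-step above, deriving per-channel delays from the relative distance and orientation, can be illustrated with a simple two-ear path-length model. The patent does not fix a formula, so the head radius, speed of sound, 2-D reduction, and function names here are all assumptions.

```python
# Illustrative per-ear delay model (NOT the patent's formula).
import math

SPEED_OF_SOUND = 343.0  # m/s, at room temperature
HEAD_RADIUS = 0.0875    # assumed half inter-ear distance, in metres

def channel_delays(head_pos, view_angle, source_pos):
    """Return (left_delay_s, right_delay_s) for a 2-D head model: ears sit
    HEAD_RADIUS to either side of the head centre, perpendicular to the
    viewing direction; each delay is path length divided by c."""
    lx, ly = -math.sin(view_angle), math.cos(view_angle)  # 'left' direction
    left_ear = (head_pos[0] + HEAD_RADIUS * lx, head_pos[1] + HEAD_RADIUS * ly)
    right_ear = (head_pos[0] - HEAD_RADIUS * lx, head_pos[1] - HEAD_RADIUS * ly)
    dl = math.dist(left_ear, source_pos) / SPEED_OF_SOUND
    dr = math.dist(right_ear, source_pos) / SPEED_OF_SOUND
    return dl, dr

# Source directly to the wearer's left: the left ear leads.
dl, dr = channel_delays((0.0, 0.0), view_angle=0.0, source_pos=(0.0, 2.0))
```

The per-channel output time delay of step 206 would then be the difference between these per-ear delays, updated as the tracked head pose changes.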
As another optional implementation manner, in the embodiment of the present invention, the manner in which the service device executes step 206 may alternatively be:
if no positioning feature point exists in the scene, acquiring the angular rate of the MR head display as it rotates;
and determining the visual angle direction of the MR head display wearer according to the angular rate, and performing the step of determining the relative orientation between the calibration object and the virtual sound source according to the visual angle direction and the known position of the virtual sound source in the virtual space.
In the embodiment of the present invention, the scene may include a background scene and solid scenery, with the positioning feature points located on the background scene. Due to factors such as occlusion by the solid scenery, the scene acquired by the binocular camera of the MR head display may not contain any positioning feature point; that is, the binocular camera may fail to capture a positioning feature point, in which case the service device cannot use the position information of a positioning feature point to determine the visual angle direction of the MR head display wearer (namely the user). However, the MR head display may have a built-in inertial measurement unit such as a gyroscope or an accelerometer, and the gyroscope can measure the angular rate of the MR head display as it rotates. The service device may take the wearer's visual angle direction just before the MR head display failed to capture a positioning feature point as the starting direction, and combine it with the above angular rate to calculate the wearer's visual angle direction at the time the scene without positioning feature points was acquired. Therefore, the angular rate measured by the inertial measurement unit serves as a supplement and can improve the accuracy with which the service device determines the visual angle direction when the MR head display does not capture a positioning feature point.
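The inertial fallback described above amounts to dead-reckoning the viewing angle from the last vision-derived fix. The sketch below is illustrative only; the sample rate and constant angular rate are invented values.

```python
# Illustrative gyroscope dead-reckoning (sample values are assumptions).
def integrate_view_angle(start_angle, angular_rates, dt):
    """Dead-reckon the viewing angle (radians) from gyroscope angular-rate
    samples (rad/s) taken every `dt` seconds after the last optical fix."""
    angle = start_angle
    for w in angular_rates:
        angle += w * dt   # simple rectangular integration
    return angle

# 0.5 s of rotation at a constant 1.0 rad/s, starting from 0 rad:
angle = integrate_view_angle(0.0, [1.0] * 50, dt=0.01)
```

Gyroscope drift accumulates with this kind of integration, which is why the embodiment treats it as a supplement until a positioning feature point is captured again.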
Optionally, in this embodiment of the present invention, after determining the relative distance between the virtual sound source and the calibration object, the service device may further:
setting the visual angle direction of the MR head display wearer as the direction in which the user directly faces the virtual sound source;
controlling the binocular camera of the MR head display, which simulates the working of human eyes, to acquire the scene within the field of view;
correcting the view direction according to the positioning characteristic points in the scene;
and determining the relative orientation between the calibration object and the virtual sound source according to the corrected visual angle direction and the known position of the virtual sound source in the virtual space.
In the embodiment of the invention, no matter which direction the user's initial visual angle direction points, when the virtual sound source starts to sound, the user's next action always tends to be turning to face the virtual sound source. For example, the virtual sound source may be a Non-Player Character (NPC): after the NPC starts talking with the user, the user's next action is likely to be turning toward the NPC and moving toward it, and during the conversation the user and the NPC generally remain face to face. The service device can therefore directly set the visual angle direction of the MR head display wearer (namely the user) to the direction directly facing the virtual sound source after the virtual sound source starts sounding, and use the positioning feature points in the background scene as a supplementary correction, thereby reducing the computation load of the service device while maintaining the accuracy of determining the visual angle direction.
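This face-the-source prior with feature-point correction might be sketched as follows. The blending gain and function names are assumptions; the patent only describes initialising the viewing direction toward the source and correcting it from the scene.

```python
# Illustrative face-the-source prior + optical correction (assumed names).
import math

def initial_view_angle(head_pos, source_pos):
    """Assume the wearer has turned to face the sounding virtual source."""
    return math.atan2(source_pos[1] - head_pos[1], source_pos[0] - head_pos[0])

def correct_view_angle(predicted, measured, gain=0.5):
    """Blend the prior with a feature-point-derived measurement."""
    return predicted + gain * (measured - predicted)

prior = initial_view_angle((0.0, 0.0), (1.0, 1.0))   # pi/4: facing the source
corrected = correct_view_angle(prior, math.pi / 2)    # nudged toward measurement
```

Starting from the prior means only a small correction is ever needed, which is the source of the computation saving the paragraph describes.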
207. The service equipment utilizes the digital filter corresponding to each sound channel to respectively filter the audio signal corresponding to the virtual sound source to obtain the audio signal corresponding to each sound channel, and respectively outputs the audio signals corresponding to the sound channels to the audio playing equipment according to the output time delays between the sound channels.
In the method described in fig. 2, the service device may track the user's head, monitor the user's position in real time, and process the audio signal corresponding to the virtual sound source in real time according to the position information, thereby improving the matching degree between the audio signal corresponding to the virtual sound source and the user's position. Further, in the method described in fig. 2, when the service device determines that the user's head is within the designated area, it directly outputs the audio signal corresponding to the virtual sound source to the audio playing device, so that the computation load of the service device can be reduced without affecting the user experience, computing resources are freed for other operations such as position tracking, and the response speed of the service device is increased. In addition, in the method described in fig. 2, when the scene acquired by the MR head display does not contain a positioning feature point, the service device uses the angular rate measured by the inertial measurement unit of the MR head display to calculate the visual angle direction of the MR head display wearer, so that the angular rate can serve as a supplement to improve the accuracy of determining the visual angle direction when the MR head display does not capture a positioning feature point.
EXAMPLE III
Referring to fig. 3, fig. 3 shows a service device according to an embodiment of the present invention. As shown in fig. 3, the service device includes:
an acquisition unit 301 configured to acquire a plurality of captured images including a calibration object captured by a plurality of high-speed cameras.
In the embodiment of the invention, the number of the plurality of shot images is at least three, and the calibration object is used for calibrating the head of the MR head display wearer;
a position determining unit 302 configured to determine a physical position of the calibration object in the real space from the plurality of captured images acquired by the acquiring unit 301;
a mapping unit 303, configured to map the physical location determined by the location determining unit 302 to a virtual space, so as to obtain a virtual location of the calibration object in the virtual space;
a processing unit 304, configured to calculate output time delays between channels of an audio playing device worn by the MR headset wearer according to the known position of any virtual sound source in the virtual space and the virtual position of the calibration object determined by the mapping unit 303.
An output unit 305, configured to filter the audio signal corresponding to the virtual sound source with the digital filter corresponding to each sound channel to obtain the audio signal corresponding to each sound channel, and to output the audio signals corresponding to the sound channels to the audio playing device according to the output time delays between the sound channels calculated by the processing unit 304.
The processing unit 304 includes:
a distance determining module 3041, configured to determine a relative distance between a virtual sound source and a calibration object according to a known position of any virtual sound source in a virtual space and the virtual position of the calibration object determined by the mapping unit 303;
the control module 3042 is used for controlling the binocular camera of the MR head display, which simulates the working of human eyes, to acquire the scene within the field of view;
a determining module 3043, configured to judge whether a positioning feature point exists in the scene acquired by the MR head display under the control of the control module 3042;
a view angle determining module 3044, configured to determine, when the determining module 3043 determines that a positioning feature point exists in the scene, the visual angle direction of the MR head display wearer according to the known position of the positioning feature point in the real space and the physical position of the calibration object determined by the position determining unit 302;
an orientation determining module 3045, configured to determine a relative orientation between the calibration object and the virtual sound source according to the viewing angle direction determined by the viewing angle determining module 3044 and the known position of the virtual sound source in the virtual space;
a processing module 3046, configured to calculate an output delay between each sound channel of the audio playing device worn by the MR head display wearer according to the relative distance determined by the distance determining module 3041 and the relative orientation determined by the orientation determining module 3045;
the output unit 305 is specifically configured to filter the audio signal corresponding to the virtual sound source with the digital filter corresponding to each sound channel to obtain the audio signal corresponding to each sound channel, and to output the audio signals corresponding to the sound channels to the audio playing device according to the output time delays between the sound channels calculated by the processing module 3046.
In the embodiment of the present invention, the digital filter of the output unit 305 may simulate the influence of various factors on the level and frequency of sound as it propagates from the sound source to the human ear. The digital filter may be fixed, in which case the filter corresponding to each sound channel does not change once set; alternatively, it may be adjustable, in which case the parameters in the digital filter can be adjusted according to different situations.
Alternatively, the output unit 305 may adjust parameters corresponding to the distance and the orientation in the digital filter according to the relative distance and the relative orientation between the virtual sound source and the calibration object (i.e., the head of the user), respectively.
Further optionally, the output unit 305 may also adjust the parameters corresponding to the environment in the digital filter according to the virtual position determined by the mapping unit 303. For example, when the virtual position is determined to be in an open outdoor environment in the virtual space, the service device may adjust the environment-related parameters of the digital filter to match the outdoor environment; when the virtual position is determined to be in a closed indoor environment in the virtual space, the service device may adjust them to match the indoor environment. In this way, the influence of the environment on sound propagation is also taken into account when processing the audio signal, so that the propagation process simulated by the digital filter is closer to sound propagation in real space, enhancing the realism of the user experience.
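The environment-dependent parameter selection described above can be sketched as a lookup from the virtual position's environment to a filter preset. The preset names, parameter names, and values below are all invented for illustration and are not taken from the patent.

```python
# Illustrative environment-to-filter-parameter lookup (values are assumptions).
FILTER_PRESETS = {
    "outdoor_open": {"reverb_mix": 0.05, "high_cut_hz": 16000},
    "indoor_closed": {"reverb_mix": 0.35, "high_cut_hz": 12000},
}

def filter_params_for(virtual_position, classify_environment):
    """Pick digital-filter parameters from the environment of the wearer's
    virtual position; `classify_environment` maps a position to a preset key."""
    env = classify_environment(virtual_position)
    return FILTER_PRESETS.get(env, FILTER_PRESETS["outdoor_open"])

# Toy classifier: in this sketch, positions with y < 0 count as 'indoors'.
params = filter_params_for(
    (0.0, -1.0, 0.0),
    lambda p: "indoor_closed" if p[1] < 0 else "outdoor_open",
)
```

A real system would classify the environment from the virtual scene's geometry rather than a coordinate test, but the parameter-swapping mechanism is the same.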
It can be seen that, with the service device shown in fig. 3, the physical position of the calibration object in the real space can be calculated from a plurality of shot images, taken by a plurality of high-speed cameras, containing the calibration object on the head of the MR head display wearer, so that the rotation and displacement of the user's head can be tracked, and the virtual position of the calibration object in the virtual space can be obtained according to the mapping relationship between the real space and the virtual space. The output time delay between the sound channels can then be calculated from the known position of any virtual sound source and the virtual position of the calibration object, the audio signal corresponding to the virtual sound source can be filtered by the digital filter corresponding to each sound channel to obtain the audio signal for that channel, and the audio signals can be output to the audio playing device according to the output time delays between the channels. In this way, the time difference between the audio signals heard by the user's two ears can be adjusted in real time, and the digital filters simulate the propagation of sound in real space from the sound source to the user's ears, so that the audio signal corresponding to the virtual sound source changes as the user's position changes, improving the matching degree between the audio signal corresponding to the virtual sound source and the user's position.
Example four
Referring to fig. 4, fig. 4 shows another service device disclosed in the embodiment of the present invention. The service device shown in fig. 4 is obtained by optimizing the service device shown in fig. 3. Compared with the service device shown in fig. 3, the service device shown in fig. 4 may further include:
a judging unit 306 configured to judge whether the virtual position determined by the mapping unit 303 is within the specified area after the mapping unit 303 determines the virtual position of the calibration object in the virtual space;
in the embodiment of the present invention, the designated area uses a virtual sound source as a center, and uses a preset minimum distance as a radius;
the output unit 305 is further configured to directly output an audio signal corresponding to the virtual sound source to the audio playing device when the determining unit 306 determines that the virtual position of the calibration object is within the specified area;
the processing unit 304 is specifically configured to, when the determining unit 306 determines that the virtual position of the calibration object is not within the specified area, calculate an output delay between channels of the audio playing device worn by the MR head display wearer according to a known position of any virtual sound source in the virtual space and the virtual position of the calibration object;
and, the position determination unit 302 includes:
a first calculation module 3021 configured to calculate a relative position between the calibration object and the high-speed camera that captured the captured image, based on each of the plurality of captured images acquired by the acquisition unit 301;
a second calculation module 3022 configured to calculate a physical position of the calibration object in the real space based on the known positions of the plurality of high-speed cameras in the real space and the relative position between the calibration object and each of the high-speed cameras calculated by the first calculation module 3021;
in this embodiment of the present invention, the second calculating module 3022 is further configured to trigger the mapping unit 303 after calculating the physical position of the calibration object in the real space;
the mapping unit 303 is specifically configured to map the physical position calculated by the second calculating module 3022 to a virtual space, so as to obtain a virtual position of the calibration object in the virtual space.
Further, the processing unit 304 may further include:
an obtaining module 3047, configured to obtain the angular rate of the MR head display as it rotates when the determining module 3043 determines that no positioning feature point exists in the scene;
the above-mentioned viewing angle determining module 3044 is further configured to determine a viewing angle direction of the MR head display wearer according to the angular rate acquired by the acquiring module 3047, and trigger the orientation determining module 3045 to perform an operation of determining a relative orientation between the calibration object and the virtual sound source according to the viewing angle direction determined by the viewing angle determining module 3044 and the known position of the virtual sound source in the virtual space.
Optionally, in this embodiment of the present invention, the viewing angle determining module 3044 may further:
the viewing direction of the MR head display wearer is set to be the direction in which the user directly faces the virtual sound source, and the viewing direction is corrected according to the positioning feature point in the scene collected by the control module 3042, and the direction determining module 3045 is triggered to perform the operation of determining the relative direction between the calibration object and the virtual sound source according to the corrected viewing direction and the known position of the virtual sound source in the virtual space.
The service device shown in fig. 4 monitors the user's position in real time and processes the audio signal corresponding to the virtual sound source in real time according to the position information, so as to improve the matching degree between the audio signal corresponding to the virtual sound source and the user's position. Further, with the service device shown in fig. 4, when the user's head is determined to be within the designated area, the audio signal corresponding to the virtual sound source can be output directly to the audio playing device, so that the computation load of the service device is reduced without affecting the user experience, computing resources are freed for other operations such as position tracking, and the response speed of the service device is increased. In addition, the service device shown in fig. 4 can calculate the visual angle direction of the MR head display wearer using the angular rate measured by the inertial measurement unit of the MR head display when the scene acquired by the MR head display does not contain a positioning feature point, so that the angular rate can serve as a supplement to improve the accuracy of determining the visual angle direction when no positioning feature point is captured.
EXAMPLE five
The embodiment of the invention discloses a service device, which comprises:
a memory storing executable program code and a processor coupled to the memory;
wherein the processor calls the executable program code stored in the memory to execute the mixed reality audio control method based on optical positioning shown in fig. 1 or fig. 2.
Furthermore, an embodiment of the present invention discloses a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute the mixed reality audio control method based on optical positioning shown in fig. 1 or fig. 2.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be implemented by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium. The storage medium includes read-only memory (ROM), random access memory (RAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), one-time programmable read-only memory (OTPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM) or other optical disk storage, magnetic disk storage, magnetic tape, or any other medium which can be used to carry or store data and which can be read by a computer.
The mixed reality audio control method and the service device based on optical positioning disclosed by the embodiment of the invention are described in detail, a specific example is applied in the text to explain the principle and the implementation of the invention, and the description of the embodiment is only used for helping to understand the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (8)

1. A mixed reality audio control method based on optical positioning is characterized by comprising the following steps:
acquiring a plurality of shot images which are shot by a plurality of high-speed cameras and contain calibration objects, wherein the number of the shot images is at least three, and the calibration objects are used for calibrating the heads of MR head display wearers;
determining the physical position of the calibration object in the real space according to the plurality of shot images;
mapping the physical position to a virtual space to obtain a virtual position of the calibration object in the virtual space;
calculating output time delay between each sound channel of the audio playing equipment worn by the MR head display wearer according to the known position of any virtual sound source in the virtual space and the virtual position of the calibration object;
respectively filtering the audio signal corresponding to the virtual sound source by using the digital filter corresponding to each sound channel to obtain the audio signal corresponding to each sound channel, and respectively outputting the audio signal corresponding to each sound channel to the audio playing device according to the output time delay between the sound channels;
and calculating output time delay between each sound channel of the audio playing device worn by the MR head display wearer according to the known position of any virtual sound source in the virtual space and the virtual position of the calibration object, wherein the calculation comprises the following steps:
determining the relative distance between any virtual sound source and the calibration object according to the known position of the virtual sound source in the virtual space and the virtual position of the calibration object;
controlling the MR head display to acquire scenes in a visual field range by using a binocular camera simulating the work of human eyes;
judging whether a positioning feature point exists in the scene or not, and if so, determining the visual angle direction of the MR head display wearer according to the known position of the positioning feature point in the real space and the physical position of the calibration object;
determining the relative orientation between the calibration object and the virtual sound source according to the visual angle direction and the known position of the virtual sound source in the virtual space;
and calculating the output time delay between the sound channels of the audio playing equipment worn by the MR head display wearer according to the relative distance and the relative direction.
2. The method of claim 1, wherein the determining the physical location of the landmark in real space from the plurality of captured images comprises:
calculating the relative position between the calibration object and a high-speed camera shooting the shot image according to each shot image in the shot images;
and calculating the physical position of the calibration object in the real space according to the known positions of the high-speed cameras in the real space and the relative position between the calibration object and each high-speed camera.
3. The method of claim 1, further comprising:
if no positioning feature point exists in the scene, acquiring the angular rate of the MR head display as it rotates;
and determining the visual angle direction of the MR head display wearer according to the angular rate, and executing the step of determining the relative orientation between the calibration object and the virtual sound source according to the visual angle direction and the known position of the virtual sound source in the virtual space.
4. The method for mixed reality audio control based on optical positioning according to any one of claims 1-3, wherein after the obtaining the virtual position of the calibration object in the virtual space, the method further comprises:
judging whether the virtual position is in a designated area, wherein the designated area takes the virtual sound source as a center and takes a preset minimum distance as a radius;
if yes, directly outputting the audio signal corresponding to the virtual sound source to the audio playing equipment;
if not, executing the step of calculating, according to the known position of any virtual sound source in the virtual space and the virtual position of the calibration object, the output time delay between the sound channels of the audio playing equipment worn by the MR head display wearer.
5. A service device, comprising:
the device comprises an acquisition unit, a calibration unit and a control unit, wherein the acquisition unit is used for acquiring a plurality of shot images which are shot by a plurality of high-speed cameras and contain calibration objects, the number of the shot images is at least three, and the calibration objects are used for calibrating the head of an MR head display wearer;
the position determining unit is used for determining the physical position of the calibration object in the real space according to the plurality of shot images;
the mapping unit is used for mapping the physical position to a virtual space to obtain the virtual position of the calibration object in the virtual space;
the processing unit is used for calculating output time delay among sound channels of the audio playing equipment worn by the MR head display wearer according to the known position of any virtual sound source in a virtual space and the virtual position of the calibration object;
the output unit is used for filtering the audio signal corresponding to the virtual sound source by using the digital filter corresponding to each sound channel to obtain the audio signal corresponding to each sound channel, and respectively outputting the audio signal corresponding to each sound channel to the audio playing equipment according to the output time delay between the sound channels;
the processing unit includes:
the distance determining module is used for determining the relative distance between the virtual sound source and the calibration object according to the known position of any virtual sound source in a virtual space and the virtual position of the calibration object;
the control module is used for controlling the MR head display to acquire scenes in a visual field range by using a binocular camera for simulating the work of human eyes;
the judging module is used for judging whether the scene has the positioning feature points or not;
the visual angle determining module is used for determining the visual angle direction of the MR head display wearer according to the known position of the positioning characteristic point in the real space and the physical position of the calibration object when the judging module judges that the positioning characteristic point exists in the scene;
the direction determining module is used for determining the relative direction between the calibration object and the virtual sound source according to the visual angle direction and the known position of the virtual sound source in the virtual space;
and the processing module is used for calculating the output time delay between each sound channel of the audio playing equipment worn by the MR head display wearer according to the relative distance and the relative direction.
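The claim does not specify how the per-channel delays are derived from the relative distance and direction. A minimal sketch for a two-channel (left/right) playback device, assuming a Woodworth-style interaural-time-difference model with an illustrative head radius and speed of sound (none of these constants or names come from the patent), could look like this:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 °C
HEAD_RADIUS = 0.0875    # m, an assumed average adult head radius

def channel_output_delays(relative_distance, relative_azimuth):
    """Return (left_delay, right_delay) in seconds for a stereo device.

    relative_distance: distance from the listener (the calibration
                       object's virtual position) to the virtual
                       sound source, in metres.
    relative_azimuth:  direction of the source relative to the visual
                       angle direction, in radians (0 = straight
                       ahead, positive = to the right).
    """
    # Common propagation delay from the source to the head centre.
    base = relative_distance / SPEED_OF_SOUND
    # Woodworth approximation of the interaural time difference
    # for a spherical head: ITD = (a / c) * (theta + sin(theta)).
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (
        relative_azimuth + math.sin(relative_azimuth))
    if relative_azimuth >= 0:
        # Source on the right: the left channel lags.
        return base + itd, base
    # Source on the left: the right channel lags (itd is negative here).
    return base, base - itd
```

Scaling per-channel gain with distance would typically accompany the delay, but only the delay term appears in the claim.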
6. The service device according to claim 5, wherein the position determination unit comprises:
a first calculation module, configured to calculate, for each of the captured images, the relative position between the calibration object and the high-speed camera that captured that image;
and a second calculation module, configured to calculate the physical position of the calibration object in real space according to the known positions of the high-speed cameras in real space and the relative position between the calibration object and each high-speed camera.
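The second calculation module's fusion step can be sketched as follows. This assumes each camera's relative position has already been rotated into the world frame (a real system would apply each camera's extrinsic calibration first) and that the per-camera estimates are simply averaged; the function name and averaging strategy are illustrative, not taken from the patent:

```python
def physical_position(camera_positions, relative_positions):
    """Estimate the calibration object's position in real space.

    camera_positions:   known (x, y, z) positions of the high-speed
                        cameras in world coordinates.
    relative_positions: per-camera (x, y, z) offsets of the object
                        from that camera, expressed in the same
                        world frame.

    Each camera yields one estimate (camera position + offset);
    averaging the estimates suppresses independent measurement noise.
    """
    estimates = [
        tuple(c + r for c, r in zip(cam, rel))
        for cam, rel in zip(camera_positions, relative_positions)
    ]
    n = len(estimates)
    return tuple(sum(axis) / n for axis in zip(*estimates))
```

With two cameras at (0, 0, 0) and (4, 0, 0) both placing the object at world offset leading to (2, 1, 0), the fused estimate is (2.0, 1.0, 0.0).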
7. The service device according to claim 5, wherein the processing unit further comprises:
an acquisition module, configured to acquire, when the judging module judges that no positioning feature points exist in the scene, the angular rate of the MR head-mounted display as it rotates;
and the visual angle determining module is further configured to determine the visual angle direction of the wearer of the MR head-mounted display according to the angular rate, and to trigger the direction determining module to perform the step of determining the relative direction between the calibration object and the virtual sound source according to the visual angle direction and the known position of the virtual sound source in the virtual space.
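The angular-rate fallback amounts to dead reckoning: when no positioning feature points are visible, the last known visual angle direction is propagated forward using the gyroscope reading. A minimal sketch using simple Euler integration (a production tracker would integrate a quaternion and fuse accelerometer data to limit drift; the yaw/pitch parameterisation here is an assumption):

```python
def update_view_direction(yaw, pitch, angular_rate, dt):
    """Dead-reckon the wearer's view direction from the headset IMU.

    yaw, pitch:    current view angles, in radians.
    angular_rate:  (yaw_rate, pitch_rate) read from the gyroscope,
                   in rad/s.
    dt:            time elapsed since the previous update, in seconds.
    """
    yaw_rate, pitch_rate = angular_rate
    # First-order (Euler) integration of the angular rate.
    return yaw + yaw_rate * dt, pitch + pitch_rate * dt
```

Once feature points reappear in the binocular camera's view, the optically determined direction would replace the integrated one, resetting accumulated drift.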
8. The service device according to any one of claims 5 to 7, further comprising:
a judging unit, configured to judge whether the virtual position is within a specified area, the specified area being centered on the virtual sound source with a preset minimum distance as its radius;
wherein the output unit is further configured to output the audio signal corresponding to the virtual sound source directly to the audio playback device when the judging unit judges that the virtual position is within the specified area;
and the processing unit is specifically configured to calculate, when the judging unit judges that the virtual position is not within the specified area, the output delays between the channels of the audio playback device worn by the wearer of the MR head-mounted display according to the known position of any virtual sound source in the virtual space and the virtual position of the calibration object.
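The near-field bypass of claim 8 is a simple dispatch: inside the sphere of radius equal to the preset minimum distance, spatial processing is skipped and the raw signal is output directly. A sketch, with the spatialization pipeline abstracted as a callable (the function and parameter names are illustrative):

```python
import math

def route_audio(virtual_pos, source_pos, min_distance, signal, spatialize):
    """Bypass spatialization when the listener is in the near field.

    virtual_pos:  the calibration object's position in virtual space.
    source_pos:   the virtual sound source's known position.
    min_distance: radius of the specified area around the source.
    signal:       the audio signal corresponding to the virtual source.
    spatialize:   callable applying the per-channel delay/filter
                  pipeline (abstracted here).
    """
    # Inside the specified area: output the signal unmodified.
    if math.dist(virtual_pos, source_pos) <= min_distance:
        return signal
    # Outside it: apply the per-channel delay and filtering pipeline.
    return spatialize(signal, virtual_pos, source_pos)
```

This avoids the delay computation degenerating at very small source distances, where the relative direction becomes numerically unstable.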
CN201710781258.3A 2017-09-01 2017-09-01 Mixed reality audio control method based on optical positioning and service equipment Active CN107632704B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710781258.3A CN107632704B (en) 2017-09-01 2017-09-01 Mixed reality audio control method based on optical positioning and service equipment

Publications (2)

Publication Number Publication Date
CN107632704A CN107632704A (en) 2018-01-26
CN107632704B true CN107632704B (en) 2020-05-15

Family

ID=61099751

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710781258.3A Active CN107632704B (en) 2017-09-01 2017-09-01 Mixed reality audio control method based on optical positioning and service equipment

Country Status (1)

Country Link
CN (1) CN107632704B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA3090281A1 (en) * 2018-02-15 2019-08-22 Magic Leap, Inc. Dual listener positions for mixed reality
CN110196914B (en) 2019-07-29 2019-12-27 上海肇观电子科技有限公司 Method and device for inputting face information into database
CN111915918A (en) * 2020-06-19 2020-11-10 中国计量大学 System and method for calibrating automobile whistling snapshot device on site based on dynamic characteristics
CN113204326B (en) * 2021-05-12 2022-04-08 同济大学 Dynamic sound effect adjusting method and system based on mixed reality space
CN114363794B (en) * 2021-12-27 2023-10-24 北京百度网讯科技有限公司 Audio processing method, device, electronic equipment and computer readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103702264A (en) * 2012-09-27 2014-04-02 英特尔公司 Camera driven audio spatialization
CN104954930A (en) * 2015-06-03 2015-09-30 冠捷显示科技(厦门)有限公司 Method for automatically adjusting sound direction and time delay of audible device and achieving best sound effects
CN106296348A (en) * 2016-08-03 2017-01-04 陈涛 The indoor scene analog systems realized based on virtual reality method and method
CN106354472A (en) * 2016-11-02 2017-01-25 广州幻境科技有限公司 Control method used for sound in virtual reality environment and system thereof
CN106774844A (en) * 2016-11-23 2017-05-31 上海创米科技有限公司 A kind of method and apparatus for virtual positioning
CN106843456A (en) * 2016-08-16 2017-06-13 深圳超多维光电子有限公司 A kind of display methods, device and virtual reality device followed the trail of based on attitude
CN107211216A (en) * 2014-12-19 2017-09-26 诺基亚技术有限公司 Method and apparatus for providing virtual audio reproduction



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant