CN114543844A - Audio playing processing method and device of wireless audio equipment and wireless audio equipment - Google Patents


Info

Publication number
CN114543844A
CN114543844A
Authority
CN
China
Prior art keywords
imu
loudspeaker
uwb
zero offset
audio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210366527.0A
Other languages
Chinese (zh)
Other versions
CN114543844B (en)
Inventor
童伟峰 (Tong Weifeng)
张亮 (Zhang Liang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bestechnic Shanghai Co Ltd
Original Assignee
Bestechnic Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bestechnic (Shanghai) Co., Ltd.
Publication of CN114543844A
Application granted granted Critical
Publication of CN114543844B (granted patent)
Legal status: Active

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 25/00 — Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
    • G01C 25/005 — Initial alignment, calibration or starting-up of inertial devices

Landscapes

  • Engineering & Computer Science (AREA)
  • Manufacturing & Machinery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

Embodiments of the invention provide an audio playback processing method and apparatus for a wireless audio device, and a wireless audio device. The method comprises: detecting the distances between each of a first speaker device and a second speaker device and a fixed UWB device, and establishing a constraint relationship between those distances and the zero offset of the IMU; obtaining a plurality of such constraint relationships through repeated measurements, and deriving the zero offset of the IMU from them; and determining the motion parameters detected by the IMU according to the zero offset, so that the first and second speaker devices adjust and play the audio signals to be played based on the motion parameters. The invention calibrates the zero offset of the IMU by measuring the distance between the wireless audio device and another, stationary UWB device, thereby improving the measurement accuracy of the IMU.

Description

Audio playing processing method and device of wireless audio equipment and wireless audio equipment
Technical Field
The present invention relates to the field of audio processing, and in particular, to an audio playing processing method and apparatus for a wireless audio device, and a wireless audio device.
Background
In a gaming context, such as a virtual-reality game, the position or orientation of a headset relative to a sound source in the game changes as the wearer moves, for example in various directions. Current true wireless stereo earphones nevertheless keep the preset stereo image when the earphones move. For example, if the stereo effect of a gunshot is set to come from the front and the wearer turns 180 degrees, the gunshot source has changed from in front of the wearer to behind them, yet the stereo effect still places the sound in front; the wearer perceives the sound source as suddenly jumping position, which seriously degrades the wearer's sense of realism.
Similarly, when a user wears a wearable device such as smart glasses, the position or orientation of the glasses relative to a sound source in the game changes as the wearer moves. If the audio is not adjusted, the wearer's actual experience is likewise severely affected.
In the prior art, as a wearable device such as earphones or smart glasses moves, its position and orientation can be measured with an inertial measurement unit (IMU) so that the played audio is adjusted and the position and orientation of the sound source perceived by the left and right ears remain unchanged. However, due to manufacturing tolerances, the data measured by the IMU usually contains errors. A typical error is the offset error (also called zero offset): the gyroscope and accelerometer output non-zero data even in the absence of rotation or acceleration. To obtain displacement, the output of the accelerometer or gyroscope must be integrated twice; even a small offset error is amplified by the double integration, and the displacement error accumulates as time advances, degrading the measurement accuracy of the IMU.
Disclosure of Invention
In order to solve the above problems, embodiments of the present invention provide an audio playing processing method and apparatus for a wireless audio device, and a wireless audio device, which calibrate the zero offset of an IMU by measuring the distance between the wireless audio device and another, stationary UWB device, thereby improving the measurement accuracy of the IMU.
In a first aspect, an embodiment of the present invention provides an audio playing processing method for a wireless audio device, where the wireless audio device includes a first speaker device and a second speaker device, and at least one speaker device includes an IMU, and the method includes: detecting distances between the first loudspeaker device and the second loudspeaker device and a fixed UWB device, and establishing a constraint relation between the distances and zero offset of the IMU; obtaining a plurality of constraint relations through a plurality of measurements, and obtaining the zero offset of the IMU from the plurality of constraint relations; and determining the motion parameters detected by the IMU according to the zero offset of the IMU, so that the first loudspeaker device and the second loudspeaker device adjust and play the audio signals to be played based on the motion parameters.
In a second aspect, an embodiment of the present invention provides an audio playing processing apparatus for a wireless audio device, where the wireless audio device includes a first speaker device and a second speaker device, and at least one speaker device includes an IMU, and the audio playing processing apparatus includes: the detection module is used for detecting the distances between the first loudspeaker device and the second loudspeaker device and a fixed UWB device and establishing a constraint relation between the distances and the zero offset of the IMU; the determining module is used for obtaining a plurality of constraint relations through a plurality of times of measurement, and obtaining the zero offset of the IMU from the constraint relations; and the adjusting module is used for determining the motion parameters detected by the IMU according to the zero offset of the IMU, so that the first loudspeaker device and the second loudspeaker device adjust and play the audio signals to be played based on the motion parameters.
In a third aspect, an embodiment of the present invention provides a wireless audio device, including a first speaker device and a second speaker device, where the first speaker device and the second speaker device each include a UWB module; the UWB module is configured to acquire the air transmission time of UWB signals between the first speaker device and the fixed UWB device and between the second speaker device and the fixed UWB device, from which the distance between each speaker device and the fixed UWB device is calculated; the first speaker device and/or the second speaker device includes an IMU (inertial measurement unit) for detecting the motion parameters of that speaker device; the wireless audio device further comprises an audio playing processing module for adjusting the zero offset of the IMU according to the distances, obtained by the UWB modules, between the first speaker device and the fixed UWB device and between the second speaker device and the fixed UWB device, so as to determine the motion parameters detected by the IMU; wherein the audio playing processing module comprises: a detection module for detecting the distances between the first and second speaker devices and a fixed UWB device and establishing a constraint relationship between the distances and the zero offset of the IMU; a determining module for obtaining a plurality of constraint relationships through a plurality of measurements and obtaining the zero offset of the IMU from them; and an adjusting module for determining the motion parameters detected by the IMU according to the zero offset of the IMU, so that the first and second speaker devices adjust and play the audio signals to be played based on the motion parameters.
The embodiment of the invention discloses an audio playing processing method and device of wireless audio equipment and the wireless audio equipment.
Specific embodiments of the present invention are disclosed in detail with reference to the following description and drawings, indicating the manner in which the principles of the invention may be employed. It should be understood that the embodiments of the invention are not so limited in scope. The embodiments of the invention include many variations, modifications and equivalents within the spirit and scope of the appended claims.
Features that are described and/or illustrated with respect to one embodiment may be used in the same way or in a similar way in one or more other embodiments, in combination with or instead of the features of the other embodiments.
It should be emphasized that the term "comprises/comprising" when used herein, is taken to specify the presence of stated features, integers, steps or components but does not preclude the presence or addition of one or more other features, integers, steps or components.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings used in describing the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention; those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a process flow diagram of an audio playing processing method of a wireless audio device according to an embodiment of the present invention;
FIG. 2 is a flowchart of a process by which a wireless audio device adjusts and plays audio based on motion parameters according to an embodiment of the present invention;
fig. 3 is a flowchart of a process for determining in advance the filter list used in step S202 of the embodiment shown in fig. 2, in which positions and/or orientations of the sound source and the filter coefficients corresponding to each position and/or orientation are stored;
fig. 4 is a schematic structural diagram of an audio playback processing apparatus of a wireless audio device according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a wireless audio device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As will be appreciated by one skilled in the art, embodiments of the present invention may be embodied as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
The principles and spirit of the present invention are explained in detail below with reference to several representative embodiments of the invention.
The embodiment of the invention discloses an audio playing processing method for a wireless audio device, which uses the UWB module contained in the speaker devices of a wearable device to measure the distance to another, stationary UWB device and thereby calibrate the zero offset of the IMU, improving the measurement accuracy of the IMU and making the adjustment of sound-source position and orientation more accurate.
Fig. 1 is a processing flow chart of an audio playing processing method of a wireless audio device according to an embodiment of the present invention. As shown in fig. 1, in step S101, distances between the first speaker device and the second speaker device and a fixed UWB device are detected, and a constraint relationship between the distances and a zero offset of the IMU is established. In this embodiment, the first speaker device and/or the second speaker device are mounted with an inertial measurement unit capable of detecting a motion parameter thereof, and the inertial measurement unit includes, but is not limited to, an accelerometer (e.g., three single-axis accelerometers), a gyroscope (e.g., three single-axis gyroscopes), a magnetometer, and the like. The inertial measurement unit may detect a motion parameter of the first loudspeaker device and/or the second loudspeaker device, the motion parameter including, but not limited to, angular velocity, acceleration, displacement, position and orientation, and the motion parameter may be any one or a combination of several of them. For example, an accelerometer is configured to measure acceleration of the wearable device in three-dimensional space, and a gyroscope is configured to measure angular velocity of the wearable device in three-dimensional space. Furthermore, the received audio data is adjusted and played by using the obtained motion parameters, so that the positions and the directions of the sound sources are not changed when the left ear and the right ear listen to the audio. 
That is, by adjusting the stereo effect of the audio signals in near real time in response to the motion of the wearable device, the position of the sound source is rendered more realistically for the wearer; for a stationary sound source, for example, the audio signals of the left and right speakers are adjusted in near real time so that the wearer perceives a stationary source no matter how the wearable device moves (translates, rotates, accelerates, etc.). In some applications, the wireless audio device may be a wireless headset, with one speaker device in each of the left and right earphones; it may also be smart glasses, with one speaker device near each of the wearer's ears. In this embodiment, either one of the left and right speaker devices includes an IMU and each of them has a UWB module, or both speaker devices have their own IMU and UWB modules.
In one embodiment, if only one of the two speaker devices contains an IMU — say the left speaker device includes an IMU and the right one does not — the motion parameters of the right speaker device can be calculated from those measured by the left device's IMU. Since the width of an adult wearer's head is about 19 cm, the distance between the two speaker devices can be treated as known, e.g. 19 cm. The IMU is fixed in the left speaker device; the displacement of the left device is obtained from the acceleration measured by the IMU's accelerometer, and its rotation angle from the angular velocity measured by the IMU's gyroscope. Because the two speaker devices are fixed in or on the two ears, their separation can be measured (e.g. using the UWB modules in the two devices) or assumed known in advance, and the pair can be regarded as a rigid body. The displacement and rotation angle of the right speaker device can therefore be obtained from the motion parameters of the left device and the distance between the two devices, and the right device can then adjust the stereo effect of its audio signal according to the calculated motion parameters.
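The rigid-body reasoning above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name and the 19 cm baseline are assumptions for the example.

```python
import numpy as np

def right_from_left(left_pos, rotation, d0=0.19):
    """Estimate the right earpiece position from the left earpiece's
    position and accumulated rotation, treating the head as a rigid body.

    left_pos : (3,) displacement of the left earpiece since T0, metres
    rotation : (3, 3) rotation matrix accumulated from gyroscope output
    d0       : inter-earpiece distance (~0.19 m for an adult head)
    """
    baseline = np.array([d0, 0.0, 0.0])    # left->right vector at T0 (X axis)
    return left_pos + rotation @ baseline  # rotate the baseline, then translate

# Example: the head turns 90 degrees about the vertical axis with no
# translation; the right earpiece swings from (0.19, 0, 0) to ~(0, 0.19, 0).
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
p_right = right_from_left(np.zeros(3), Rz)
```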
In another embodiment, if the left and right speaker devices each include an IMU, the stereo effect of the audio signal may be adjusted by obtaining respective motion parameters.
In the embodiment of the present invention, the fixed UWB device may be a smartphone, smart wearable device, notebook computer, iPad, or the like containing a UWB module, or a UWB beacon or other wireless device with a UWB module; the fixed UWB device is in a stationary state. The first and second speaker devices each contain a UWB module, so the distance between the fixed UWB device and each speaker device can be measured using a single-sided two-way or double-sided two-way UWB ranging method.
In the embodiment of the present invention, UWB ranging may employ the time-of-flight (TOF) principle, i.e. measuring the time taken by an object, particle, or wave (such as a sound wave or radio wave) to travel a certain distance through a medium. TOF ranging is a two-way ranging technique: it measures the distance between nodes from the round-trip flight time of a signal between two asynchronous transceivers.
In one embodiment, the air transmission time of a UWB signal between the wireless audio device and the fixed UWB device (more precisely, between each of the two UWB-equipped speaker devices and the fixed UWB device) can be obtained with the single-sided two-way ranging technique, for example as follows:
1. the UWB module of the fixed UWB device sends a pulse signal at time Ta1 and receives the response signal returned by the wireless audio device at time Ta2;
2. the UWB module of the wireless audio device receives the pulse signal from the fixed UWB device at time Tb1 and sends its response signal at time Tb2;
3. calculation: the air transmission time T of the UWB signal between the wireless audio device and the fixed UWB device is:
T = ((Ta2 − Ta1) − (Tb2 − Tb1)) / 2
The distance between each speaker device of the wireless audio device and the fixed UWB device is then obtained from the air transmission time T as:
S = C × T
wherein S is the distance between the wireless audio device and the fixed UWB device, and C is the speed of light.
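The single-sided exchange can be sketched as below. The timestamps are illustrative; in a real system Ta1/Ta2 and Tb1/Tb2 come from two different, imperfect clocks.

```python
C = 299_792_458.0  # speed of light, m/s

def ss_twr_distance(ta1, ta2, tb1, tb2):
    """Single-sided two-way ranging.

    ta1, ta2 : pulse-send and response-receive times on the fixed UWB device
    tb1, tb2 : pulse-receive and response-send times on the audio device
    Returns (air transmission time T, distance S = C * T).
    """
    t = ((ta2 - ta1) - (tb2 - tb1)) / 2.0  # round trip minus reply delay
    return t, t * C

# Illustrative timestamps: 3 m separation (~10 ns one way), 100 us reply delay.
tof_true = 3.0 / C
t, s = ss_twr_distance(0.0, 2 * tof_true + 100e-6, 5e-9, 5e-9 + 100e-6)
```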
It should be noted that in this single-sided two-way time-of-flight ranging mode, where the fixed UWB device transmits a UWB signal and the wireless audio device responds, the accuracy depends on the delay between the wireless audio device receiving the message and returning its response: the longer this delay, the lower the accuracy of the system. Clock accuracy also matters: if the clock accuracy deviation of the fixed UWB device is e1 and that of the wireless audio device is e2, the measurement error of the air transmission time is:

Terror = ((Ta2 − Ta1) × e1 − (Tb2 − Tb1) × e2) / 2

Generally, in the single-sided two-way ranging method, if the deviation of the two clocks is 10 ppm and the processing time is 100 µs, the resulting error can reach about 150 cm.
In another embodiment, double-sided two-way ranging may be used to obtain the air transmission time of the UWB signal between the wireless audio device and the fixed UWB device, for example as follows: a pulse signal is sent twice between the fixed UWB device and the wireless audio device, yielding two response signals; taking the clock accuracy deviations of the two devices into account, the air transmission time T of the UWB signal between them is:
T = (troundA × troundB − treplyA × treplyB) / (troundA + troundB + treplyA + treplyB)
wherein:
troundA is the time from when the fixed UWB device sends the first pulse signal until it receives the corresponding response signal; troundB is the corresponding time for the second pulse signal; treplyA is the time from when the wireless audio device receives the first pulse signal until it sends its response signal; treplyB is the corresponding time for the second pulse signal.
In this embodiment, the air transmission time T of the double-sided method is derived from the single-sided formula with the clock accuracy errors of the system added. With troundA, troundB, treplyA and treplyB defined as above, the single-sided two-way ranging principle gives:
2T = troundA(1 + e1) − treplyB(1 + e2)    (formula 1)

2T = troundB(1 + e2) − treplyA(1 + e1)    (formula 2)
Since the time T in formula 1 and formula 2 is the same, setting the two right-hand sides equal gives (troundA + treplyA)(1 + e1) = (troundB + treplyB)(1 + e2), so that:

1 + e2 = (troundA + treplyA)(1 + e1) / (troundB + treplyB)    (formula 3)

and

1 + e1 = (troundB + treplyB)(1 + e2) / (troundA + treplyA)    (formula 4)
adding formula 1 and formula 2 yields:
4T = troundA(1 + e1) − treplyB(1 + e2) + troundB(1 + e2) − treplyA(1 + e1)
   = (troundA − treplyA)(1 + e1) + (troundB − treplyB)(1 + e2)    (formula 5)
Substituting formula 3 into formula 5 gives:

T = (1 + e1)(troundA × troundB − treplyA × treplyB) / (2(troundB + treplyB))    (formula 6)
Similarly, substituting formula 4 into formula 5 gives:

T = (1 + e2)(troundA × troundB − treplyA × treplyB) / (2(troundA + treplyA))    (formula 7)
In some embodiments, when e1 and e2 in formula 6 or formula 7 are set to 0, T can be directly obtained from formula 6 or formula 7.
In other embodiments, formulas 6 and 7 can be combined further. Rewriting formula 6 as 2T(troundB + treplyB) = (1 + e1)(troundA × troundB − treplyA × treplyB), rewriting formula 7 as 2T(troundA + treplyA) = (1 + e2)(troundA × troundB − treplyA × treplyB), and adding the two yields:

2T(troundA + troundB + treplyA + treplyB) = (2 + e1 + e2)(troundA × troundB − treplyA × treplyB)

Therefore, since e1 and e2 are much smaller than 1:

T ≈ (troundA × troundB − treplyA × treplyB) / (troundA + troundB + treplyA + treplyB)
In the double-sided two-way ranging method, the error is still related to the response time of the wireless audio device, but much less strongly: under experimental conditions, and taking some other error sources into account, if the clock deviation of both sides is 10 ppm and the reply time is 100 µs, the error is below 8 cm — a qualitative improvement over the roughly 150 cm of the single-sided two-way method.
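A minimal sketch of the double-sided computation, using the approximation above with the clock-deviation terms e1 and e2 neglected. The synthetic timestamps are illustrative, not from the patent:

```python
C = 299_792_458.0  # speed of light, m/s

def ds_twr_tof(t_round_a, t_reply_a, t_round_b, t_reply_b):
    """Double-sided two-way ranging with clock-deviation terms neglected:
    T = (troundA*troundB - treplyA*treplyB)
        / (troundA + troundB + treplyA + treplyB)."""
    num = t_round_a * t_round_b - t_reply_a * t_reply_b
    den = t_round_a + t_round_b + t_reply_a + t_reply_b
    return num / den

# Synthetic check: a true one-way flight time of 10 ns (about 3 m) with very
# different reply delays on the two exchanges still comes back as ~10 ns,
# which is what makes the double-sided method robust to response time.
T_true = 10e-9
reply_a, reply_b = 100e-6, 120e-6
tof = ds_twr_tof(2 * T_true + reply_a, reply_a,
                 2 * T_true + reply_b, reply_b)
dist = tof * C  # ~3 m
```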
After the air transmission time is obtained with the UWB ranging methods of the above embodiments, the distance between any speaker device of the wireless audio device and the fixed UWB device can be calculated in real time by multiplying that time by the propagation speed (the speed of light).
In the embodiment of the present invention, in step S101, after obtaining the distance between the first speaker device and/or the second speaker device and the fixed UWB device, a constraint relationship between the distance and a zero offset of the IMU may be established. The zero offset of the IMU of the present embodiment mainly includes an acceleration zero offset measured by an accelerometer and/or an angular velocity zero offset measured by a gyroscope.
In the embodiment of the invention, the measurement equations of the accelerometer and the gyroscope are, respectively:

aB = Ta Ka (aS + ba + νa)    (formula 1)

ωB = Tg Kg (ωS + bg + νg)    (formula 2)

where the superscript a denotes the accelerometer and g the gyroscope, B denotes the orthogonal reference coordinate system, S the non-orthogonal sensor coordinate system, T the transformation matrix for axis misalignment, and K the scale factor, which can be treated as constant; aS and ωS are the true values, and b and ν denote the bias (zero offset) and white noise, respectively.
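A hedged sketch of inverting this measurement model once the bias is known; the matrices T and K and the bias values here are illustrative placeholders, not calibrated values:

```python
import numpy as np

def correct_imu(measured, T, K, bias):
    """Invert the model  m = T @ K @ (x_true + b + v): recover an estimate
    of the true value given a calibrated zero offset b (white noise v
    remains, but averages out over time)."""
    return np.linalg.solve(T @ K, measured) - bias

# Round trip with identity misalignment/scale and an assumed accelerometer bias.
true_acc = np.array([0.0, 0.0, 9.81])  # gravity only, device at rest
b = np.array([0.02, -0.01, 0.03])      # illustrative zero offset, m/s^2
T_mat, K_mat = np.eye(3), np.eye(3)
raw = T_mat @ K_mat @ (true_acc + b)   # noise-free measurement
est = correct_imu(raw, T_mat, K_mat, b)  # recovers true_acc
```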
In some embodiments, taking a wireless headset as an example, the zero offset causes the displacement and rotation angle of the earphones obtained from the IMU to accumulate large errors over time. From an initial moment, the position (from acceleration) and the pose or orientation (from angular velocity) of the left and right earphones can be tracked with the IMU, but both are affected by the IMU zero offset. The distance between each earphone and the fixed UWB device can be measured at any time after the initial moment, and each such distance yields an equation — a constraint — on the IMU zero offset: the distance of the left earphone from the fixed UWB device and the distance of the right earphone from the fixed UWB device are each constraints on b in formula (1).
For example, take the left earphone at initial time T0 as the origin of coordinates. Integrating the acceleration of formula (1) gives the velocity of the left earphone, and integrating the velocity gives its displacement since T0; if the displacement at time T1 is S1 = (x1, y1, z1), then the position of the left earphone at T1 is (x1, y1, z1), because the left earphone at the initial time is the origin. If the direction from the left earphone to the right earphone at T0 is taken as the X axis and the distance between the two earphones is d0, the position of the right earphone at T0 is (d0, 0, 0). By formula (2), integrating the angular velocity gives the rotation of the right earphone by time T1 as (theta1, theta2, theta3); since the right earphone rotates about the position of the left earphone at T0 (the origin), the displacement produced by the rotation is Sw = (cos(theta1) × d0, cos(theta2) × d0, cos(theta3) × d0). The two earphones form a rigid body, so the displacement of the right earphone caused by the acceleration of formula (1) at time T1 is also (x1, y1, z1); thus at T1 the position of the right earphone is (d0, 0, 0) + Sw + S1. The distances between the fixed UWB device and the left and right earphones can be measured by the single-sided or double-sided two-way ranging methods of the above embodiments, so at time T1 two equations, i.e. two constraints, are obtained from the two distances. Formula (1) contains the acceleration zero offset and formula (2) the angular-velocity zero offset, so these are two constraints on the acceleration zero offset and the angular-velocity zero offset.
The left earphone at initial time T0 was taken as the origin of coordinates, with the direction from the left earphone to the right earphone at T0 as the X axis; an arbitrary position at time T0 could equally be chosen as the origin. This does not affect the above conclusion, only introducing a fixed change in the transformation matrices T of formulas (1) and (2).
In step S102, a plurality of constraint relationships of the IMU zero offset may be obtained through the plurality of measurements in step S101, and then the IMU zero offset may be obtained from the plurality of constraint relationships by using a least squares method or an interpolation fitting method.
In some embodiments, once the distances at the plurality of times are obtained in step S101, a plurality of equations or constraints about the IMU zero-bias can be obtained, and then the IMU zero-bias can be obtained by a least squares method, an interpolation method, or a fitting method.
In a specific implementation, multiple equations or constraints on the acceleration zero offset and the angular-velocity zero offset can be obtained at multiple times T1, T2, T3, …. Because the zero offset changes little, or is essentially constant, over a short period (several seconds, tens of seconds, or hundreds of seconds), several such constraints (e.g. 3, 4, 5, 6, or 10) can be collected within a short time (one second or a few seconds), and the zero offset over that period can then be obtained by the least squares method or by interpolation/fitting.
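One way to picture the least-squares step — a simplified sketch, not the patent's exact formulation — is that over a short window a constant accelerometer bias b produces a spurious displacement 0.5·b·t², whose projection onto the line of sight to the fixed UWB device appears as a range residual; each ranging instant then contributes one linear constraint on b. The times, directions, and bias below are invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)
b_true = np.array([0.05, -0.03, 0.02])  # unknown accelerometer bias, m/s^2

# Each ranging instant t_k contributes one linear constraint on b:
#   y_k = u_k . (0.5 * t_k**2 * b)
# where u_k is the unit line-of-sight direction to the fixed UWB device
# and y_k is the predicted-minus-measured range residual.
times = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
rows, obs = [], []
for t in times:
    u = rng.normal(size=3)
    u /= np.linalg.norm(u)           # random line-of-sight direction
    rows.append(0.5 * t**2 * u)
    obs.append(rows[-1] @ b_true)    # noise-free residual for the demo
A, y = np.vstack(rows), np.array(obs)

# Five constraints, three unknowns: solve for the bias by least squares.
b_est, *_ = np.linalg.lstsq(A, y, rcond=None)
```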
In step S103, the motion parameters detected by the IMU are determined according to the zero offset of the IMU, so that the first speaker device and the second speaker device adjust and play the audio signal to be played based on the motion parameters.
After the IMU zero offset is obtained, the displacement and/or rotation angle of the two loudspeaker devices may be derived from equations (1) and (2) and the motion parameters (e.g., acceleration and angular velocity) output by the accelerometer and gyroscope. Because the IMU zero offset has been calibrated by the methods of steps S101 and S102, the obtained motion parameters are more accurate; even after a long time (e.g., tens of minutes or several hours), accurate motion parameters of the two speakers can still be obtained.
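A minimal sketch of the bias correction, with hypothetical readings and offsets; the calibrated zero offsets are simply subtracted from the raw IMU readings before integration so that long-run drift stays bounded:

```python
def corrected_motion(raw_accel, raw_gyro, accel_bias, gyro_bias):
    """Subtract the calibrated zero offsets (from steps S101-S102) from
    raw accelerometer and gyroscope readings, component by component."""
    a = [r - b for r, b in zip(raw_accel, accel_bias)]
    w = [r - b for r, b in zip(raw_gyro, gyro_bias)]
    return a, w

# Hypothetical raw readings: a stationary device reading gravity on y,
# plus small constant biases that the calibration has identified.
a, w = corrected_motion(
    raw_accel=[0.05, 9.81, 0.0], raw_gyro=[0.011, 0.0, 0.0],
    accel_bias=[0.05, 0.0, 0.0], gyro_bias=[0.011, 0.0, 0.0])
```

After correction the stationary device reports only gravity and zero angular velocity, so integrating these values no longer accumulates the bias-induced drift.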
In still other embodiments, if the second speaker apparatus does not include an IMU, the first speaker apparatus also transmits the motion parameters to the second speaker apparatus to cause the second speaker apparatus to adjust and play the audio signal to be played based on the motion parameters.
In some embodiments, the first speaker device may transmit its detected motion parameters to the second speaker device, e.g., over a communication connection between the two devices. The audio signals to be played by the first and second loudspeaker devices are then adjusted separately based on the motion parameters, so that the adjusted audio signals simulate the sound changes caused by variations in the position and/or orientation of the respective earpiece relative to the sound source. When the wearer's position and/or orientation changes relative to the sound source, i.e., the position and/or orientation of the earpiece changes relative to the sound source, the IMU on the first loudspeaker device detects the motion parameters indicative of that change. Each loudspeaker device then applies stereo processing to the audio signal to be played based on the motion parameters, so as to reproduce the audio signal the wearer would actually hear after moving relative to the sound source. In this way the stereo image generated by the earphones is adjusted in time to simulate the changed position relative to the sound source.
In specific implementation, as shown in fig. 2, the adjusting and playing of the audio signal to be played by the first speaker device and the second speaker device based on the motion parameter includes the following steps:
Step S201, determining a position and/or an orientation of each of the first loudspeaker device and the second loudspeaker device relative to the sound source based on the motion parameters; each loudspeaker device computes its own position and/or orientation relative to the sound source from the detected or received motion parameters. Step S202, based on the determined position and orientation of each speaker device relative to the sound source, selecting from a predetermined filter list the filter coefficients corresponding to the closest stored position and orientation relative to the sound source; the predetermined filter list stores positions and orientations relative to the sound source together with the corresponding filter coefficients, which can be determined in advance by measurement. Step S203, performing filtering processing on the audio signal to be played by each speaker device using the selected filter coefficients; the filtering makes the processed signal simulate the stereo variation caused by the motion of the earphones relative to the sound source.
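Steps S201 to S203 can be sketched as follows. The filter-list layout ('pos', 'ori', 'coeffs') and all numeric values are hypothetical, and the nearest-entry metric is a simple illustrative choice:

```python
import math

def select_filter(position, orientation, filter_list):
    """Step S202: pick the entry whose stored (position, orientation) is
    closest to the current one, and return its filter coefficients."""
    def dist(entry):
        return math.dist(entry["pos"], position) + abs(entry["ori"] - orientation)
    return min(filter_list, key=dist)["coeffs"]

def fir_filter(signal, coeffs):
    """Step S203: apply the selected coefficients as a simple FIR filter."""
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                acc += c * signal[n - k]
        out.append(acc)
    return out

# Hypothetical predetermined filter list (two entries).
filters = [
    {"pos": (1.0, 0.0, 0.0), "ori": 0.0,  "coeffs": [0.9, 0.1]},
    {"pos": (0.0, 1.0, 0.0), "ori": 90.0, "coeffs": [0.5, 0.5]},
]
# Current pose from step S201 (hypothetical values).
coeffs = select_filter((0.9, 0.1, 0.0), 5.0, filters)
out = fir_filter([1.0] * 8, coeffs)
```

The pose near (1, 0, 0) with a small orientation offset selects the first entry, and the audio block is then convolved with those coefficients.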
In some embodiments, as shown in fig. 3, the filter list of step S202, which stores positions and/or orientations relative to the sound source and the corresponding filter coefficients, may be predetermined by the following measurement procedure: Step S301, head-related transfer functions for different positions and/or orientations relative to the sound source are measured in advance. A Head Related Transfer Function (HRTF) is an audio localization model that can transform sound effects (for example, in a game) so that the user perceives the sound source as coming from different positions and directions. Step S302, the corresponding filter coefficients are determined based on the pre-measured head-related transfer functions of the different positions and/or orientations. Specifically, sound can be emitted from a plurality of fixed sound source positions toward a head model with microphones mounted at the eardrum positions. The sound collected by the microphones is analyzed and calculated to obtain the position-specific sound data, and a filter is designed to simulate it; the corresponding filter coefficients are determined by taking the pre-measured head-related transfer function as the transfer function of the filter. Step S303, the filter coefficients are stored in association with the positions and/or orientations, thereby constructing the filter list. Storing the sound source positions and/or orientations together with the corresponding filter coefficients forms a lookup table that allows the filter coefficients to be determined quickly.
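The construction of steps S301 to S303 can be sketched as follows. The impulse responses and poses below are hypothetical placeholders for real measurements; using the measured impulse response directly as FIR coefficients is one simple way to realize "HRTF as the filter's transfer function":

```python
def build_filter_list(measurements):
    """Steps S302-S303: turn pre-measured head-related impulse responses
    into a lookup table. `measurements` maps (position, orientation) to a
    measured impulse response, stored directly as FIR coefficients."""
    return [{"pos": pos, "ori": ori, "coeffs": list(ir)}
            for (pos, ori), ir in measurements.items()]

# Hypothetical stand-ins for the measurements of step S301:
measured = {
    ((1.0, 0.0, 0.0), 0.0):  (1.0, 0.3, 0.05),
    ((0.0, 1.0, 0.0), 90.0): (0.8, 0.4, 0.1),
}
filter_list = build_filter_list(measured)
```

At run time the table is searched by nearest position and orientation as in step S202, avoiding any per-frame filter design.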
In some embodiments, the filter is a digital filter, and the filter coefficients of the digital filter can be adaptively configured according to the motion parameters (one or more of angular velocity, acceleration, position and orientation) of the headset.
In some embodiments, echoes may also be used to make the sound effect more spatial: the changed sound and echo data can be obtained from the collected sound and its echoes, and the filter designed accordingly.
It can be known from the above embodiments that, in the audio playing processing method of the wireless audio device according to the embodiments of the present invention, the UWB module included in the speaker device of the wearable device is used to measure the distance from another UWB device in a stationary state to calibrate the zero offset of the IMU, so that the measurement accuracy of the IMU is improved, and the adjustment of the sound source position/orientation is more accurate.
Having described the method of an exemplary embodiment of the present invention, an audio playback processing apparatus of a wireless audio device of an exemplary embodiment of the present invention is described next with reference to fig. 4. The implementation of the device can be referred to the implementation of the method, and repeated details are not repeated. The terms "module" and "unit", as used below, may be software and/or hardware that implements a predetermined function. While the modules described in the following embodiments are preferably implemented in software, implementations in hardware, or a combination of software and hardware are also possible and contemplated.
Fig. 4 is a schematic structural diagram of an audio playing processing device of a wireless audio device according to an embodiment of the present invention. As shown in fig. 4, the wireless audio device includes a first speaker device and a second speaker device, and an IMU is included in at least one of the speaker devices. In this embodiment, the audio playback processing apparatus includes: a detection module 401, configured to detect distances between the first speaker device and the second speaker device and a fixed UWB device, and establish a constraint relationship between the distances and a zero offset of the IMU; a determining module 402, configured to obtain a plurality of constraint relationships through multiple measurements, and obtain a zero offset of the IMU from the plurality of constraint relationships; an adjusting module 403, configured to determine, according to the zero offset of the IMU, a motion parameter detected by the IMU, so that the first speaker device and the second speaker device adjust and play the audio signal to be played based on the motion parameter.
An embodiment of the present invention further provides a wireless audio device, as shown in fig. 5, including a first speaker device 51 and a second speaker device 52, where the first speaker device 51 and the second speaker device 52 both include UWB modules 511 and 521. A UWB module 511, located in the first speaker device, configured to acquire an air transmission time of a UWB signal between the first speaker device 51 and the fixed UWB device 53, and calculate a UWB distance 1 between the first speaker device 51 and the fixed UWB device 53 according to the air transmission time; the UWB module 521 is located in the second speaker device 52, and is configured to acquire an air transmission time of the UWB signal between the second speaker device 52 and the fixed UWB device 53, and then calculate the UWB distance 2 between the second speaker device 52 and the fixed UWB device 53 according to the air transmission time.
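The single-sided two-way ranging used by the UWB modules can be sketched as follows. The timing values are hypothetical, and a real implementation must also compensate for clock drift between the two devices (which is what the double-sided variant addresses):

```python
# Single-sided two-way ranging: the initiating device timestamps the
# transmission of a poll and the reception of the anchor's response; the
# anchor reports its internal reply delay. One-way time of flight is
# (round_trip - reply_delay) / 2, and distance = ToF * c.
C = 299_702_547.0  # approximate speed of light in air, m/s

def ss_twr_distance(t_poll_tx, t_resp_rx, reply_delay):
    tof = ((t_resp_rx - t_poll_tx) - reply_delay) / 2.0
    return tof * C

# Hypothetical example: 2 m separation, 1 microsecond anchor reply delay.
tof_one_way = 2.0 / C
d = ss_twr_distance(0.0, 2.0 * tof_one_way + 1e-6, 1e-6)
```

Each earphone computes its own distance to the fixed UWB device this way, yielding the "UWB distance 1" and "UWB distance 2" used for the zero-offset constraints.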
In this embodiment, the first speaker device 51 or/and the second speaker device 52 further include an IMU module 512, configured to detect a motion parameter of the first speaker device or/and the second speaker device. As mentioned in the above embodiments, one of the first and second loudspeaker devices comprises an IMU, or both loudspeaker devices may comprise an IMU.
In one embodiment, as shown in fig. 5, if only one of the loudspeaker devices includes an IMU, for example, the first loudspeaker device 51 includes an IMU module 512 but the second loudspeaker device 52 does not, then the motion parameters of the second loudspeaker device 52 may be calculated from the motion parameters measured by the IMU module of the first loudspeaker device 51. Because the distance between the two speaker devices can be measured (e.g., using the UWB modules in the two speaker devices) or is known in advance (e.g., the width of an adult's head, approximately 19 cm), the two speaker devices can be regarded as one rigid body when worn. Thus, the first loudspeaker device 51 may transmit the detected motion parameters to the second loudspeaker device 52, for example over a communication connection 54 between the two devices, as shown in fig. 5. Accordingly, the second speaker device 52 can obtain its own displacement and rotation angle from the motion parameters transmitted by the first speaker device 51 and the distance between the two speaker devices. Further, the second speaker device 52 may adjust the stereo effect of the audio signal according to the calculated motion parameters.
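A minimal sketch of this rigid-body derivation, assuming (for illustration only) that the head's rotation is a pure yaw about the vertical axis; the angle would come from integrating the left IMU's angular velocity:

```python
import math

def right_from_left(p_left, yaw, d0):
    """Rigid-body assumption: the right earphone sits at a fixed offset d0
    along the head's local x-axis from the left one, so its position is
    p_left + Rz(yaw) @ (d0, 0, 0), written out component-wise."""
    return (p_left[0] + d0 * math.cos(yaw),
            p_left[1] + d0 * math.sin(yaw),
            p_left[2])

d0 = 0.19  # inter-earphone distance, metres (measured or assumed)
# Head turned 90 degrees: the offset swings from the x-axis to the y-axis.
p_right = right_from_left((0.0, 0.0, 0.0), math.pi / 2, d0)
```

This is how the IMU-less second loudspeaker device can recover its own pose from the first device's motion parameters plus the known inter-device distance.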
In this embodiment, the wireless audio device further includes an audio playing processing module 513, configured to adjust the zero offset of the IMU module 512 according to the distances between the first and second speaker devices and the fixed UWB device, so as to determine the motion parameters detected by the IMU. The distance "UWB distance 1" between the first loudspeaker device and the fixed UWB device is measured by the UWB module 511 of the first loudspeaker device 51 itself, while the distance "UWB distance 2" between the second loudspeaker device and the fixed UWB device is transmitted by the second loudspeaker device 52 to the first loudspeaker device 51 over the communication connection 54.
The audio playing processing module 513 in this embodiment may be the audio playing processing module disclosed in the embodiment shown in fig. 4.
In summary, the embodiments of the present invention disclose an audio playing processing method and apparatus for a wireless audio device, and a wireless audio device, in which a UWB module included in the wireless audio device is used to measure a distance from another UWB device in a stationary state to calibrate a zero offset of an IMU, so as to improve the measurement accuracy of the IMU, and to make the adjustment of the position and the orientation of the sound source more accurate.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The principle and the implementation mode of the invention are explained by applying specific embodiments in the invention, and the description of the embodiments is only used for helping to understand the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (11)

1. An audio playing processing method of a wireless audio device, wherein the wireless audio device comprises a first speaker device and a second speaker device, and at least one speaker device comprises an IMU, the method comprising:
detecting distances between the first loudspeaker device and the second loudspeaker device and a fixed UWB device, and establishing a constraint relation between the distances and zero offset of the IMU;
obtaining a plurality of constraint relations through a plurality of measurements, and obtaining the zero offset of the IMU from the plurality of constraint relations;
and determining the motion parameters detected by the IMU according to the zero offset of the IMU, so that the first loudspeaker device and the second loudspeaker device adjust and play the audio signals to be played based on the motion parameters.
2. The audio playback processing method of claim 1, wherein said detecting a distance between said first speaker device and said second speaker device and a fixed UWB device comprises:
acquiring the air transmission time of UWB signals between the first loudspeaker device and the fixed UWB device and between the second loudspeaker device and the fixed UWB device by a single-sided two-way or double-sided two-way ranging method, and calculating the distance between the first loudspeaker device and the fixed UWB device and the distance between the second loudspeaker device and the fixed UWB device according to the air transmission time.
3. The audio playback processing method of claim 1, wherein obtaining a plurality of constraint relationships by a plurality of measurements, and obtaining a zero offset of the IMU from the plurality of constraint relationships comprises:
obtaining a plurality of constraint relations of zero offset of the IMU through a plurality of measurements;
and obtaining the zero offset of the IMU from the plurality of constraint relations by using a least square method or an interpolation fitting method.
4. The audio playing processing method of claim 1, wherein if only the first speaker device includes an IMU, the first speaker device further transmits the motion parameter detected by the IMU to the second speaker device, so that the second speaker device adjusts and plays the audio signal to be played based on the motion parameter.
5. The audio playback processing method according to claim 1, wherein the fixed UWB device includes a smartphone, a smart wearable device, a notebook computer, an iPad, and a UWB beacon.
6. The audio playback processing method of claim 1, wherein the IMU zero offset comprises an accelerometer measured acceleration zero offset and/or a gyroscope measured angular velocity zero offset.
7. The audio playback processing method of claim 1, wherein the motion parameters detected by the IMU include any one or more of angular velocity, acceleration, displacement, position, and orientation.
8. The audio playback processing method according to any one of claims 1 to 7, wherein the first speaker device and the second speaker device adjust and play back the audio signal to be played back based on the motion parameter, and the method includes:
determining a position and/or orientation of each of the first and second loudspeaker devices relative to a sound source based on the motion parameters;
selecting a filter coefficient corresponding to a closest position and/or orientation relative to the sound source in a predetermined filter list based on the determined position and/or orientation of each loudspeaker device relative to the sound source;
performing a filtering process on the audio signals to be played by the respective speaker devices using the selected filter coefficients.
9. The audio playback processing method of claim 8, wherein the filter list is predetermined by:
pre-measuring head related transformation functions for different positions and/or orientations relative to the sound source;
determining the corresponding filter coefficients based on the pre-measured head-related transform functions of the different positions and/or orientations;
storing the filter coefficients in association with the position and/or orientation, thereby constructing the filter list.
10. An audio playing processing apparatus of a wireless audio device, the wireless audio device including a first speaker device and a second speaker device, and at least one speaker device including an IMU therein, the audio playing processing apparatus comprising:
the detection module is used for detecting the distances between the first loudspeaker device and the second loudspeaker device and a fixed UWB device and establishing a constraint relation between the distances and the zero offset of the IMU;
the determining module is used for obtaining a plurality of constraint relations through a plurality of times of measurement, and obtaining the zero offset of the IMU from the constraint relations;
and the adjusting module is used for determining the motion parameters detected by the IMU according to the zero offset of the IMU, so that the first loudspeaker device and the second loudspeaker device adjust and play the audio signals to be played based on the motion parameters.
11. A wireless audio device comprising a first speaker device and a second speaker device, wherein the first speaker device and the second speaker device each comprise a UWB module;
the UWB module is configured to acquire air transmission time of UWB signals among the first loudspeaker device, the second loudspeaker device and the fixed UWB device, and then respectively calculate distances between the first loudspeaker device and the fixed UWB device and between the second loudspeaker device and the fixed UWB device according to the air transmission time;
the first loudspeaker device or/and the second loudspeaker device comprises an IMU (inertial measurement Unit) which is used for detecting the motion parameters of the first loudspeaker device or/and the second loudspeaker device;
the wireless audio equipment further comprises an audio playing processing module, which is used for adjusting the zero offset of the IMU according to the distance between the first loudspeaker equipment and the fixed UWB equipment, which is obtained by the UWB module, and the second loudspeaker equipment and the fixed UWB equipment, so as to determine the motion parameters detected by the IMU;
wherein, the audio playing processing module comprises:
the detection module is used for detecting the distances between the first loudspeaker device and the second loudspeaker device and a fixed UWB device and establishing a constraint relation between the distances and the zero offset of the IMU;
the determining module is used for obtaining a plurality of constraint relations through a plurality of times of measurement, and obtaining the zero offset of the IMU from the constraint relations;
and the adjusting module is used for determining the motion parameters detected by the IMU according to the zero offset of the IMU, so that the first loudspeaker device and the second loudspeaker device adjust and play the audio signals to be played based on the motion parameters.
CN202210366527.0A 2021-04-09 2022-04-08 Audio playing processing method and device of wireless audio equipment and wireless audio equipment Active CN114543844B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110384660 2021-04-09
CN2021103846604 2021-04-09

Publications (2)

Publication Number Publication Date
CN114543844A true CN114543844A (en) 2022-05-27
CN114543844B CN114543844B (en) 2024-05-03

Family

ID=81666011

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210366527.0A Active CN114543844B (en) 2021-04-09 2022-04-08 Audio playing processing method and device of wireless audio equipment and wireless audio equipment

Country Status (1)

Country Link
CN (1) CN114543844B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014150961A1 (en) * 2013-03-15 2014-09-25 Jointvue, Llc Motion tracking system with inertial-based sensing units
US20160139241A1 (en) * 2014-11-19 2016-05-19 Yahoo! Inc. System and method for 3d tracking for ad-hoc cross-device interaction
US20160363992A1 (en) * 2015-06-15 2016-12-15 Harman International Industries, Inc. Passive magentic head tracker
US20170257724A1 (en) * 2016-03-03 2017-09-07 Mach 1, Corp. Applications and format for immersive spatial sound
US20190094955A1 (en) * 2017-09-27 2019-03-28 Apple Inc. Range finding and accessory tracking for head-mounted display systems
US20190113966A1 (en) * 2017-10-17 2019-04-18 Logitech Europe S.A. Input device for ar/vr applications
CN110023884A (en) * 2016-11-25 2019-07-16 森索里克斯股份公司 Wearable motion tracking system
WO2019239365A1 (en) * 2018-06-13 2019-12-19 Purohit Ankit System and method for position and orientation tracking of multiple mobile devices
WO2020087041A1 (en) * 2018-10-26 2020-04-30 Magic Leap, Inc. Mixed reality device tracking
CN111142665A (en) * 2019-12-27 2020-05-12 恒玄科技(上海)股份有限公司 Stereo processing method and system of earphone assembly and earphone assembly
CN111713121A (en) * 2018-02-15 2020-09-25 奇跃公司 Dual listener position for mixed reality
CN112146678A (en) * 2019-06-27 2020-12-29 华为技术有限公司 Method for determining calibration parameters and electronic equipment
US20210067896A1 (en) * 2019-08-27 2021-03-04 Daniel P. Anagnos Head-Tracking Methodology for Headphones and Headsets
CN112612444A (en) * 2020-12-28 2021-04-06 南京紫牛软件科技有限公司 Sound source position positioning method, sound source position positioning device, electronic equipment and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Liu Jinbao, "Indoor Positioning Applications of UWB Sensors," Sensor World, pages 18-20 *

Also Published As

Publication number Publication date
CN114543844B (en) 2024-05-03

Similar Documents

Publication Publication Date Title
US10397728B2 (en) Differential headtracking apparatus
US10939225B2 (en) Calibrating listening devices
US8472653B2 (en) Sound processing apparatus, sound image localized position adjustment method, video processing apparatus, and video processing method
CN105263075B (en) A kind of band aspect sensor earphone and its 3D sound field restoring method
KR102230645B1 (en) Virtual reality, augmented reality and mixed reality systems with spatialized audio
US20150326963A1 (en) Real-time Control Of An Acoustic Environment
US11647352B2 (en) Head to headset rotation transform estimation for head pose tracking in spatial audio applications
US11586280B2 (en) Head motion prediction for spatial audio applications
JP2015076797A (en) Spatial information presentation device, spatial information presentation method, and spatial information presentation computer
EP4214535A2 (en) Methods and systems for determining position and orientation of a device using acoustic beacons
US11140509B2 (en) Head-tracking methodology for headphones and headsets
CN114543844B (en) Audio playing processing method and device of wireless audio equipment and wireless audio equipment
CN109327794B (en) 3D sound effect processing method and related product
US10735885B1 (en) Managing image audio sources in a virtual acoustic environment
US11356788B2 (en) Spatialized audio rendering for head worn audio device in a vehicle
JP2011188444A (en) Head tracking device and control program
US20240089687A1 (en) Spatial audio adjustment for an audio device
CN113810817B (en) Volume control method and device of wireless earphone and wireless earphone
CN115597627A (en) Attitude correction method, device, chip, equipment and storage medium
CN110740415A (en) Sound effect output device, arithmetic device and sound effect control method thereof
WO2023017622A1 (en) Information processing device, information processing method, and program
US20230209298A1 (en) Hearing device
CN116997770A (en) Calibration of magnetometers
CN114710726A (en) Center positioning method and device of intelligent wearable device and storage medium
TW201928654A (en) Audio signal playing device and audio signal processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant