CN116208907A - Spatial audio processing device, apparatus, method and headphone - Google Patents
- Publication number: CN116208907A
- Application number: CN202310182302.4A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H04S7/30 — Control circuits for electronic adaptation of the sound field (H—Electricity; H04—Electric communication technique; H04S—Stereophonic systems; H04S7/00—Indicating arrangements; control arrangements, e.g. balance control)
- H04S7/307 — Frequency adjustment, e.g. tone control
- H04R5/033 — Headphones for stereophonic communication (H04R—Loudspeakers, microphones, gramophone pick-ups or like acoustic electromechanical transducers; H04R5/00—Stereophonic arrangements)
Abstract
Embodiments of the disclosure provide a spatial audio processing apparatus, a device, a method, and a headphone. The apparatus comprises a spatial audio processing module, a spatial sound signal adjustment module, and a spatial sound signal processing module. The spatial audio processing module is configured to acquire an audio source signal and process it with a spatial audio algorithm to obtain a spatial sound signal. The spatial sound signal adjustment module is configured to dynamically adjust the frequency response and amplitude of the spatial sound signal to obtain a first target sound signal. The spatial sound signal processing module is configured to filter the first target sound signal to obtain a second target sound signal. The first target sound signal is a full-band sound signal and the second target sound signal is a high-band sound signal, where a high-band sound signal is a sound signal at or above a preset frequency.
Description
Technical Field
The present disclosure relates to the field of sound signal processing technology, and in particular, but not exclusively, to a spatial audio processing apparatus, device, method, and headphone.
Background
With the ongoing development of 3D immersive sound, more and more games and movies support immersive audio. Currently there are two ways to realize a 3D immersive sound effect. The first arranges multiple loudspeakers at multiple target positions in a space and produces the effect by outputting sound from several directions at once. The second is based on a spatial audio algorithm: an original sound signal is processed by an existing spatial audio algorithm to obtain virtual sound signals for multiple directions, which are then output through two speakers placed at the listener's left and right ears.

However, the first approach requires a large investment of time, labor, and money, since multiple loudspeakers must be installed at multiple target positions in a room. In the second approach, the spatial audio algorithm is limited by the computing power of the chip and by the algorithm's own logic, so the spatial audio effect is often unsatisfactory. To improve it, the industry has focused on refining the spatial audio algorithm or pairing it with a more powerful chip, but progress is difficult because improving the algorithm is hard, research and development costs are high, and powerful chips are expensive.
Disclosure of Invention
The embodiment of the disclosure provides a spatial audio processing device, equipment, a method and a headset.
In a first aspect, embodiments of the present disclosure provide a spatial audio processing apparatus comprising a spatial audio processing module, a spatial sound signal adjustment module, and a spatial sound signal processing module. The spatial audio processing module is configured to acquire an audio source signal and process it with a spatial audio algorithm to obtain a spatial sound signal. The spatial sound signal adjustment module is configured to dynamically adjust the frequency response and amplitude of the spatial sound signal to obtain a first target sound signal. The spatial sound signal processing module is configured to filter the first target sound signal to obtain a second target sound signal. The first target sound signal is a full-band sound signal and the second target sound signal is a high-band sound signal, i.e. a sound signal at or above a preset frequency.
In the embodiments of the disclosure, the spatial sound signal produced by the existing spatial audio algorithm is not output directly. Instead, it is further adjusted and processed so as to output a full-band first target sound signal and a high-band second target sound signal. This post-processing compensates for the limitations of the spatial audio algorithm and of the chip's computing power, giving the user a better 3D immersive sound experience.
In some embodiments, the audio source signal comprises a left audio source signal, the spatial sound signal comprises a left spatial sound signal, the first target sound signal comprises a left first target sound signal, and the second target sound signal comprises a left second target sound signal; the spatial audio processing device further comprises a first left speaker and a second left speaker with different working frequencies, wherein the left first target sound signal is output through the first left speaker, and the left second target sound signal is output through the second left speaker.
In some embodiments, the audio source signal comprises a right audio source signal, the spatial sound signal comprises a right spatial sound signal, the first target sound signal comprises a right first target sound signal, and the second target sound signal comprises a right second target sound signal; the spatial audio processing device further comprises a first right speaker and a second right speaker with different working frequencies, wherein the right first target sound signal is output through the first right speaker, and the right second target sound signal is output through the second right speaker.
In a second aspect, embodiments of the present disclosure provide a spatial audio processing method, the method including: acquiring an audio source signal; processing the audio source signal by using a spatial audio algorithm to obtain a spatial sound signal; dynamically adjusting the frequency response and the amplitude of the spatial sound signal to obtain a first target sound signal; filtering the first target sound signal to obtain a second target sound signal; wherein the first target acoustic signal is a full frequency acoustic signal and the second target acoustic signal is a high frequency acoustic signal; the high-frequency acoustic signal is an acoustic signal of a preset frequency or more.
In a third aspect, embodiments of the present disclosure provide a headphone including the spatial audio processing device of any one of the embodiments above.
In a fourth aspect, embodiments of the present disclosure provide a spatial audio processing apparatus comprising at least two output devices, each of which includes a first processing path, a second processing path, and two speakers. The first processing path dynamically adjusts the frequency response and amplitude of a spatial sound signal to obtain a first target sound signal and outputs it to the speaker connected to the first processing path for playback; the spatial sound signal is obtained by processing an acquired audio source signal with a spatial audio algorithm. The second processing path filters the first target sound signal to obtain a second target sound signal and outputs it to the speaker connected to the second processing path for playback. The first target sound signal is a full-band sound signal and the second target sound signal is a high-band sound signal, i.e. a sound signal at or above a preset frequency. In each output device, the first and second target sound signals occupy different target frequency bands, and the operating band of each speaker covers the target band of the target sound signal it outputs, where the target sound signal is either the first or the second target sound signal.
In the embodiments of the disclosure, the spatial audio processing apparatus includes at least two output devices, each with a first processing path, a second processing path, and two speakers; that is, the apparatus contains two processing paths with a speaker connected to each path. This overcomes the limitations of reproducing an immersive sound effect with a single speaker driven purely by an algorithm, under which some listeners cannot reliably distinguish front from back and many listeners are poorly matched. It therefore improves the tolerance of the spatial audio effect: every user of the apparatus hears a better result than before, so the apparatus suits a wider audience.
Drawings
In the drawings (which are not necessarily drawn to scale), like numerals may describe similar components in different views. Like reference numerals having different letter suffixes may represent different instances of similar components. The drawings illustrate generally, by way of example and not by way of limitation, various embodiments discussed herein.
Fig. 1 is a schematic diagram of a composition structure of a spatial audio processing device according to an embodiment of the disclosure;
Fig. 2 is a schematic diagram of a composition structure of another spatial audio processing device according to an embodiment of the disclosure;
fig. 3 is a schematic implementation flow chart of a spatial audio processing method according to an embodiment of the disclosure;
fig. 4 is a schematic diagram of a composition structure of an earphone according to an embodiment of the disclosure;
fig. 5 is a schematic diagram of frequency response curves of two speakers according to an embodiment of the disclosure;
fig. 6 is a flowchart illustrating an effect confirmation of a spatial audio processing device according to an embodiment of the present disclosure;
fig. 7 is a schematic layout diagram of a 7.1 channel multi-surround speaker according to an embodiment of the disclosure.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the specific embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present disclosure. However, it will be apparent to one skilled in the art that the present disclosure may be practiced without one or more of these details. In other instances, well-known features have not been described in order to avoid obscuring the present disclosure; that is, not all features of an actual implementation are described in detail herein, and well-known functions and constructions are not described in detail.
In the drawings, the size of layers, regions, elements and their relative sizes may be exaggerated for clarity. Like numbers refer to like elements throughout.
It will be understood that when an element or layer is referred to as being "on" … …, "" adjacent to "… …," "connected to" or "coupled to" another element or layer, it can be directly on, adjacent to, connected to or coupled to the other element or layer, or intervening elements or layers may be present. In contrast, when an element is referred to as being "directly on" … …, "" directly adjacent to "… …," "directly connected to" or "directly coupled to" another element or layer, there are no intervening elements or layers present. It will be understood that, although the terms first, second, third, etc. may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are only used to distinguish one element, component, region, layer or section from another element, component, region, layer or section. Thus, a first element, component, region, layer or section discussed below could be termed a second element, component, region, layer or section without departing from the teachings of the present disclosure. When a second element, component, region, layer or section is discussed, it does not necessarily mean that the first element, component, region, layer or section is present in the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of the associated listed items.
Before introducing embodiments of the present disclosure, the head-related transfer function is first explained:
Head-related transfer function (Head Related Transfer Function, HRTF): the transfer function from a sound signal in the free field to the eardrum. It is a function whose independent variables are distance, azimuth angle, elevation angle, frequency, and the individual subject.
Next, the methods used in the related art to improve the accuracy of algorithms that simulate immersive sound are described. There are generally three:
1) Phone radar modeling, scanning, and fitting: the phone scans the ear with an infrared radar and the resulting ear model is compared with preset models. The accuracy is good, but specific phone hardware must be supported, so universality is poor.
2) Photograph-based ear scanning and fitting: a photo of the ear is taken with a phone, uploaded, recognized, and matched against existing models. Because the photo contains no three-dimensional information about the ear, the accuracy is only moderate, and phone support is still required.
3) Head-related transfer function measurement with a loudspeaker and a microphone (mic): the measurement result is computed or matched against a model library. The measurement process places high demands on the environment, so valid data are difficult to obtain.
None of these accuracy-improvement methods addresses the sound-source azimuth errors of a single-speaker virtual immersive sound effect that arise because human ears differ in shape and size. It is therefore desirable to provide a spatial audio processing apparatus, device, method, and headphone that ameliorate this problem.
In view of this, an embodiment of the disclosure provides a spatial audio processing device, referring to fig. 1, the spatial audio processing device includes: a spatial audio processing module 100, a spatial sound signal adjustment module 200, and a spatial sound signal processing module 300; wherein:
The spatial audio processing module 100 is configured to: acquiring an audio source signal; processing the audio source signal by using a spatial audio algorithm to obtain a spatial sound signal;
the spatial sound signal adjustment module 200 is configured to: dynamically adjusting the frequency response and the amplitude of the space sound signal to obtain a first target sound signal;
the spatial sound signal processing module 300 is configured to: filtering the first target sound signal to obtain a second target sound signal;
wherein the first target acoustic signal is a full frequency acoustic signal and the second target acoustic signal is a high frequency acoustic signal; the high-frequency acoustic signal is an acoustic signal of a preset frequency or more.
Here, the audio source signal may include a left audio source signal (music source L), a right audio source signal (music source R), or both.
The audio source signal is processed with the spatial audio algorithm as follows. The left and right audio source signals are upmixed (which is equivalent to adding reverberation) to obtain a virtual multi-channel signal, for example a 7.1-channel signal comprising a left channel, a right channel, a left surround channel, a right surround channel, a left rear surround channel, a right rear surround channel, a center channel, and a low-frequency effects channel; the low-frequency effects channel is not a target of convolution with the head-related transfer function. On output, every channel of the virtual multi-channel signal except the low-frequency effects channel is convolved with the HRTF, i.e. each such channel is processed by the HRTF, yielding the spatial sound signal. Splitting the spatial sound signals into two groups by left and right direction gives a left spatial sound signal and a right spatial sound signal.
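The per-channel HRTF convolution described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the random buffers stand in for real channel audio and measured head-related impulse responses (HRIRs, the time-domain form of the HRTF), and the channel names follow case one below.

```python
import numpy as np

def render_ear_signal(channels, hrirs):
    """Return one ear's spatial sound signal: the sum of each virtual
    channel convolved with that channel's head-related impulse response.

    Assumes every channel buffer has the same length and every HRIR has
    the same length, so all convolutions line up sample for sample.
    """
    names = list(channels)
    n = len(channels[names[0]]) + len(hrirs[names[0]]) - 1
    out = np.zeros(n)
    for name in names:
        out += np.convolve(channels[name], hrirs[name])
    return out

# Toy data: the four left-side channels of case one, with random buffers
# standing in for real channel audio and for measured left-ear HRIRs.
rng = np.random.default_rng(0)
left_channels = {c: rng.standard_normal(256) for c in ("C", "L", "LS", "LRS")}
left_hrirs = {c: rng.standard_normal(64) for c in ("C", "L", "LS", "LRS")}
left_spatial = render_ear_signal(left_channels, left_hrirs)
```

Because convolution is linear, summing the four convolved channels is equivalent to mixing the binaural renderings of the four virtual speaker positions into one ear feed.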
In practice, if the resulting virtual multi-channel signal is a 7.1-channel signal, the left and right spatial sound signals each comprise 4 sound signals; if it is a 5.1-channel signal, they each comprise 3 sound signals.
Based on the above description, the left and right spatial sound signals cover at least the following three cases:
Case one: the left spatial sound signal comprises a center sound signal (C), a left sound signal (L), a left surround sound signal (LS), and a left rear surround sound signal (LRS); the right spatial sound signal comprises a center sound signal (C), a right sound signal (R), a right surround sound signal (RS), and a right rear surround sound signal (RRS).
Case two: the left spatial sound signal comprises a center sound signal, a left sound signal, and a left rear surround sound signal; the right spatial sound signal comprises a center sound signal, a right sound signal, and a right rear surround sound signal.
Case three: the left spatial sound signal comprises a center sound signal, a left sound signal, and a left surround sound signal; the right spatial sound signal comprises a center sound signal, a right sound signal, and a right surround sound signal.
The preset frequency may be determined from the intersection of the frequency response curves of the speaker that plays the high-frequency spatial sound signal and the speaker that plays the low-frequency spatial sound signal.
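One way to derive that preset (crossover) frequency from two measured frequency-response curves is to find where the level difference between them changes sign. The curves below are synthetic stand-ins, not measured speaker data:

```python
import numpy as np

def crossover_frequency(freqs, woofer_db, tweeter_db):
    """Frequency at which the two response curves intersect, found as
    the first sign change of their level difference, with linear
    interpolation between the two samples around the crossing."""
    diff = np.asarray(woofer_db, dtype=float) - np.asarray(tweeter_db, dtype=float)
    crossings = np.where(np.diff(np.sign(diff)) != 0)[0]
    if crossings.size == 0:
        raise ValueError("curves do not intersect")
    i = crossings[0]
    t = diff[i] / (diff[i] - diff[i + 1])  # fraction of the way to the next sample
    return freqs[i] + t * (freqs[i + 1] - freqs[i])

# Synthetic curves: the woofer's response falls with frequency, the
# tweeter's rises, and they cross at 4 kHz by construction.
f = np.linspace(20, 20000, 2000)
woofer = -0.002 * f            # illustrative straight-line roll-off, in dB
tweeter = 0.002 * f - 16.0     # illustrative straight-line roll-on, in dB
fc = crossover_frequency(f, woofer, tweeter)
```

With real measurements the curves are noisy, so in practice one would smooth them (e.g. octave-band averaging) before locating the crossing.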
Dynamic adjustment may mean proportionally compressing the amplitude of the spatial sound signal down to a certain value (so that the output power is not overloaded), preventing damage to the speaker (or loudspeaker) that subsequently plays it.
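The proportional compression just described amounts to applying a single gain to the whole buffer so its peak stays under a ceiling. A minimal sketch (the 0.9 ceiling is an arbitrary example, not a value from the patent):

```python
import numpy as np

def limit_amplitude(signal, ceiling=0.9):
    """Proportionally scale the buffer so its peak does not exceed
    `ceiling`, protecting the downstream speaker from overload.
    Buffers already under the ceiling pass through unchanged, since
    the gain is capped at 1."""
    peak = np.max(np.abs(signal))
    gain = min(1.0, ceiling / peak) if peak > 0 else 1.0
    return signal * gain

loud = np.array([0.5, -2.0, 1.0])
safe = limit_amplitude(loud)   # every sample scaled by the same factor
```

Because every sample is scaled by the same factor, the waveform shape (and hence the spatial cues encoded in it) is preserved, unlike hard clipping.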
In implementation, each module of the spatial audio processing apparatus may be implemented by a processor in a computer device, or alternatively by dedicated logic circuitry. In practice, the processor may be a central processing unit (Central Processing Unit, CPU), a microprocessor unit (Microprocessor Unit, MPU), a digital signal processor (Digital Signal Processor, DSP), a field-programmable gate array (Field Programmable Gate Array, FPGA), etc.
In the embodiments of the disclosure, the apparatus comprises a spatial audio processing module, a spatial sound signal adjustment module, and a spatial sound signal processing module. The spatial audio processing module acquires an audio source signal and processes it with a spatial audio algorithm to obtain a spatial sound signal; the spatial sound signal adjustment module dynamically adjusts the spatial sound signal to obtain a first target sound signal; and the spatial sound signal processing module filters the first target sound signal to obtain a second target sound signal.
Clearly, in this embodiment the spatial sound signal produced by the spatial audio algorithm is not output directly; it is adjusted and processed so as to output a full-band first target sound signal and a high-band second target sound signal. The post-processing compensates for the limitations of the spatial audio algorithm and of the chip's computing power, giving the user a better 3D immersive sound experience.
In some embodiments, the audio source signal comprises a left audio source signal, the spatial sound signal comprises a left spatial sound signal, the first target sound signal comprises a left first target sound signal, and the second target sound signal comprises a left second target sound signal;
the spatial audio processing device further comprises a first left speaker and a second left speaker with different working frequencies, wherein the left first target sound signal is output through the first left speaker, and the left second target sound signal is output through the second left speaker.
The first left speaker may be a low-frequency speaker (woofer), i.e. a large driver, for example a 40-millimeter (mm) speaker. The second left speaker may be a high-frequency speaker (tweeter), i.e. a small driver, for example a 10 mm speaker.
In the embodiments of the disclosure, the spatial audio processing apparatus comprises a first left speaker for outputting the left first target sound signal and a second left speaker for outputting the left second target sound signal. This improves sound effects that demand a sense of direction, such as those of sandbox games, and realizes a spatially immersive sound effect.
In some embodiments, the audio source signal comprises a right audio source signal, the spatial sound signal comprises a right spatial sound signal, the first target sound signal comprises a right first target sound signal, and the second target sound signal comprises a right second target sound signal;
the spatial audio processing device further comprises a first right speaker and a second right speaker with different working frequencies, wherein the right first target sound signal is output through the first right speaker, and the right second target sound signal is output through the second right speaker.
The first right speaker may be a low-frequency speaker (woofer), i.e. a large driver, for example a 40 mm speaker. The second right speaker may be a high-frequency speaker (tweeter), i.e. a small driver, for example a 10 mm speaker.
In practice, the operating frequency of the first left speaker may be the same as or different from that of the first right speaker, and likewise for the second left and second right speakers; embodiments of the present disclosure do not limit this.
In the embodiments of the disclosure, the spatial audio processing apparatus comprises a first right speaker for outputting the right first target sound signal and a second right speaker for outputting the right second target sound signal. This improves sound effects that demand a sense of direction, such as those of sandbox games, and realizes a spatially immersive sound effect.
It is understood that the first and second target sound signals, which lie in different frequency bands, are output by separate speakers. For example, the four speakers in this scheme respectively output the left and right first target sound signals and the left and right second target sound signals in their different frequency bands; the spatial separation between the four speakers gives the user a better immersive listening experience.
In some embodiments, the spatial audio processing device may include a first left speaker and a second left speaker, and a first right speaker and a second right speaker.
The spatial audio processing device in the embodiment of the present disclosure is described below with reference to fig. 2. Referring to fig. 2, the spatial audio processing apparatus includes: a spatial audio processing module 100, a spatial sound signal adjusting module 200, a spatial sound signal processing module 300, a first left speaker 1, a second left speaker 2, a first right speaker 3 and a second right speaker 4. Wherein:
The spatial audio processing module 100 may include a DSP configured to: acquire an audio source signal; and process the audio source signal using a spatial audio algorithm to obtain spatial sound signals comprising a left spatial sound signal (including C, L, LS and LRS) and a right spatial sound signal (including C, R, RS and RRS).
The spatial sound signal adjustment module 200 may include an Equalizer (EQ) and a dynamic range controller (Dynamic Range Control, DRC), wherein:
The EQ is configured to adjust the frequency response of the left spatial sound signal and of the right spatial sound signal, yielding frequency-response-adjusted left and right spatial sound signals.
The DRC is configured to adjust the amplitude of the frequency-response-adjusted left and right spatial sound signals, yielding amplitude-adjusted left and right spatial sound signals.
The spatial sound signal adjustment module 200 may further include a digital-to-analog converter (Digital to Analog Converter, DAC) configured to perform digital-to-analog conversion on the amplitude-adjusted left spatial sound signal to obtain the left first target sound signal, and on the amplitude-adjusted right spatial sound signal to obtain the right first target sound signal.
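An EQ stage of the kind described could, for example, apply peaking biquad filters. The sketch below uses the widely known audio-EQ-cookbook coefficient formulas; the sample rate, center frequency, gain, and Q are arbitrary examples, not the patent's tuning:

```python
import math
import numpy as np

def peaking_eq_coeffs(fs, f0, gain_db, q=1.0):
    """Biquad peaking-EQ coefficients in the common audio-EQ-cookbook
    form, normalized so the leading denominator coefficient is 1."""
    amp = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    cw = math.cos(w0)
    b = np.array([1 + alpha * amp, -2 * cw, 1 - alpha * amp])
    a = np.array([1 + alpha / amp, -2 * cw, 1 - alpha / amp])
    return b / a[0], a / a[0]

def biquad(x, b, a):
    """Apply the biquad in direct form I, sample by sample."""
    y = np.zeros(len(x))
    x1 = x2 = y1 = y2 = 0.0
    for i, xn in enumerate(x):
        y[i] = b[0] * xn + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x1, x2 = xn, x1
        y1, y2 = y[i], y1
    return y

# Boost 1 kHz by 6 dB at a 48 kHz sample rate and run a 1 kHz sine
# through the filter; in steady state the sine comes out about 2x larger.
fs = 48000
b, a = peaking_eq_coeffs(fs, f0=1000, gain_db=6.0, q=1.0)
t = np.arange(4800) / fs
boosted = biquad(np.sin(2 * np.pi * 1000 * t), b, a)
```

A full EQ would cascade several such biquads, one per band being corrected.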
The spatial sound signal processing module 300 may include an RC filter configured to filter the left first target sound signal to obtain the left second target sound signal, and to filter the right first target sound signal to obtain the right second target sound signal.
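The RC filter here extracts the high band, i.e. it acts as a high-pass. A first-order digital approximation of the analog RC high-pass is sketched below; the 4 kHz cutoff is an illustrative assumption, the actual preset frequency being set by the speakers' crossover point:

```python
import math
import numpy as np

def rc_highpass(x, fs, cutoff_hz):
    """Digital approximation of a first-order analog RC high-pass:
    y[n] = alpha * (y[n-1] + x[n] - x[n-1]), where alpha is set by the
    RC time constant of the cutoff and the sample period."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / fs
    alpha = rc / (rc + dt)
    y = np.zeros(len(x))
    for n in range(1, len(x)):
        y[n] = alpha * (y[n - 1] + x[n] - x[n - 1])
    return y

# With a 4 kHz cutoff, a 12 kHz tone passes largely intact while a
# constant (0 Hz) input produces no output at all.
fs = 48000
t = np.arange(4800) / fs
high_band = rc_highpass(np.sin(2 * np.pi * 12000 * t), fs, cutoff_hz=4000)
```

In hardware the same behavior comes from a capacitor in series with the tweeter; the digital form is useful when the split is done before the DAC.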
A first left speaker 1 for outputting a left first target sound signal of a full frequency band;
a second left speaker 2 for outputting a left second target sound signal of a high frequency band;
a first right speaker 3 for outputting a right first target sound signal of a full frequency band;
and a second right speaker 4 for outputting a right second target sound signal of the high frequency band.
The embodiment of the disclosure also provides a spatial audio processing method, referring to fig. 3, the method includes the following steps:
step S301, an audio source signal is acquired;
step S302, processing an audio source signal by using a spatial audio algorithm to obtain a spatial sound signal;
step S303, dynamically adjusting the frequency response and the amplitude of the spatial sound signal to obtain a first target sound signal;
step S304, filtering the first target sound signal to obtain a second target sound signal.
In practice, step S301 and step S302 may be implemented by a spatial audio processing module; step S303 may be implemented by a spatial sound signal adjustment module; step S304 may be implemented by a spatial sound signal processing module.
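Steps S301 to S304 can be sketched as a minimal end-to-end pipeline. This is an illustrative toy, not the patented implementation: constant-power panning stands in for the spatial audio algorithm of step S302, a gain plus hard limiter stands in for the frequency-response and amplitude adjustment of step S303, and a first-order discrete high-pass stands in for the RC filtering of step S304; all function names and parameter values are assumptions.

```python
import math

def apply_spatial_audio(source, pan=0.5):
    """S302 stand-in: constant-power panning into left/right spatial signals."""
    left_gain = math.cos(pan * math.pi / 2)
    right_gain = math.sin(pan * math.pi / 2)
    return ([s * left_gain for s in source],
            [s * right_gain for s in source])

def adjust_freq_response_and_amplitude(signal, eq_gain=1.2, limit=1.0):
    """S303 stand-in: EQ gain followed by hard amplitude limiting."""
    return [max(-limit, min(limit, s * eq_gain)) for s in signal]

def high_pass(signal, alpha=0.9):
    """S304 stand-in: first-order high-pass, y[n] = a*(y[n-1] + x[n] - x[n-1])."""
    out, prev_x, prev_y = [], 0.0, 0.0
    for x in signal:
        y = alpha * (prev_y + x - prev_x)
        out.append(y)
        prev_x, prev_y = x, y
    return out

source = [0.1, 0.5, 0.9, 0.5, 0.1]                       # S301: audio source signal
left, right = apply_spatial_audio(source)                 # S302: spatial sound signals
first_target = adjust_freq_response_and_amplitude(left)   # S303: first target signal
second_target = high_pass(first_target)                   # S304: second target signal
```

A real implementation would run tuned EQ, DRC, and filter coefficients per channel; the structure, however, mirrors steps S301 to S304.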
The description of the method embodiments above is similar to that of the apparatus embodiments above, with similar benefits as the apparatus embodiments. For technical details not disclosed in the method embodiments of the present disclosure, please refer to the description of the apparatus embodiments of the present disclosure for understanding.
The embodiment of the disclosure also provides a headphone, which comprises the spatial audio processing device in any embodiment.
In the embodiment of the disclosure, the 4 speakers respectively output the left and right first target sound signals and the left and right second target sound signals in different frequency bands, and a certain spatial distance exists between the 4 speakers, giving the user a more immersive sound experience.
The disclosed embodiments also provide a spatial audio processing apparatus, which includes at least two output devices, only two of which are shown in fig. 2, namely a first output device 101 and a second output device 102, each of which includes a first processing path, a second processing path, and two speakers; wherein:
the first processing path is used for dynamically adjusting the frequency response and the amplitude of the spatial sound signal to obtain a first target sound signal, and outputting the first target sound signal to a loudspeaker connected with the first processing path for playing; the spatial sound signal is obtained by processing the acquired audio source signal by using a spatial audio algorithm.
For example, the first processing path 11 in the first output device 101 is configured to dynamically adjust the frequency response and the amplitude of the left spatial sound signal 31 obtained by processing the acquired audio source signal 20 by using a spatial audio algorithm, obtain a left first target sound signal, and output the left first target sound signal to the speaker 1 connected to the first processing path 11 for playing.
The first processing path 21 in the second output device 102 is configured to dynamically adjust the frequency response and the amplitude of the right spatial sound signal 32 obtained by processing the acquired audio source signal 20 by using a spatial audio algorithm, obtain a right first target sound signal, and output the right first target sound signal to the speaker 3 connected to the first processing path 21 for playing.
And the second processing path is used for filtering the first target sound signal to obtain a second target sound signal, and outputting the second target sound signal to a loudspeaker connected with the second processing path for playing.
For example, the second processing path 12 in the first output device 101 is configured to filter the first target acoustic signal to obtain a second target acoustic signal, and output the second target acoustic signal to the speaker 2 connected to the second processing path 12 for playing.
The second processing path 22 in the second output device 102 is configured to filter the first target acoustic signal to obtain a second target acoustic signal, and output the second target acoustic signal to the speaker 4 connected to the second processing path 22 for playing.
In the first output device, the left first target sound signal and the left second target sound signal belong to different target frequency bands, and the operating frequency band of the first left speaker includes the target frequency band of the output left first target sound signal, and the operating frequency band of the second left speaker includes the target frequency band of the output left second target sound signal.
In the second output device, the right first target sound signal and the right second target sound signal belong to different target frequency bands, and the operating frequency band of the first right speaker includes the target frequency band of the output right first target sound signal, and the operating frequency band of the second right speaker includes the target frequency band of the output right second target sound signal.
It should be noted that, referring to fig. 2, the audio source signal 20 includes a left audio source signal 201 and a right audio source signal 202.
Speaker 2 and speaker 4 may be high-frequency speakers (tweeters); speaker 1 and speaker 3 may be low-frequency speakers (woofers). The speaker 1 in the first output device may be the same as or different from the speaker 3 in the second output device; the speaker 2 in the first output device may be the same as or different from the speaker 4 in the second output device. In the embodiments of the present disclosure, the size and type of the speakers may be adjusted according to the actual product, including but not limited to moving-iron and planar-diaphragm drivers.
In the embodiment of the disclosure, the spatial audio processing device includes at least two output devices, each including a first processing path, a second processing path and two speakers; that is, the device contains two processing paths with two speakers connected to each. This addresses the limitation of reproducing an immersive sound effect with a single speaker and an algorithm, in particular that some listeners cannot accurately distinguish front from back, so the effect does not suit everyone. The tolerance of the spatial audio effect is thereby improved, every user of the device hears a better effect than before, and the audience is wider.
In some embodiments, the second processing path includes an RC filter; wherein: the RC filter comprises a resistor and a capacitor which are connected in parallel; and the RC filter is used for filtering the first target sound signal to obtain a second target sound signal.
With continued reference to fig. 2, the second processing path 12 in the first output device 101 includes a first processing path 11 and an RC filter 10; the second processing path 22 in the second output means 102 comprises the first processing path 21 and the RC filter 20.
In implementation, the RC filter 10 of the second processing path 12 in the first output device 101 is configured to filter the left first target acoustic signal to obtain a left second target acoustic signal; the RC filter 20 of the second processing path 22 in the second output device 102 is configured to filter the right first target acoustic signal to obtain a right second target acoustic signal.
It should be noted that the capacitance and resistance values in the RC filter may be determined based on the high-frequency range output by speaker 2 and speaker 4. The frequency response curves of speaker 1 (the low-frequency speaker) and speaker 2 (the high-frequency speaker) are shown in fig. 5; the two curves intersect at point A, whose corresponding frequency is X hertz (Hz). If the high-frequency range output by speaker 2 and speaker 4 is above X hertz, the resistance and capacitance values may be determined from the intersection point A of the two frequency response curves.
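The dependence of a first-order RC filter's cutoff on the component values can be sketched numerically. The crossover frequency assumed for point A and the resistor value below are illustrative assumptions, not values from the disclosure.

```python
import math

def rc_highpass_cutoff(r_ohms, c_farads):
    """Cutoff of a first-order RC filter: f_c = 1 / (2 * pi * R * C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

def capacitor_for_cutoff(f_c_hz, r_ohms):
    """Given a chosen resistor, solve f_c = 1 / (2 * pi * R * C) for C."""
    return 1.0 / (2.0 * math.pi * r_ohms * f_c_hz)

crossover_hz = 3000.0  # assumed frequency of point A in fig. 5
r = 1000.0             # assumed resistor value in ohms
c = capacitor_for_cutoff(crossover_hz, r)
# Round trip: the computed capacitance reproduces the crossover frequency.
assert abs(rc_highpass_cutoff(r, c) - crossover_hz) < 1e-6
```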
In implementation, the first target acoustic signal in the first output device and the first target acoustic signal in the second output device belong to different target frequency bands; the second target sound signal in the first output device and the second target sound signal in the second output device belong to different target frequency bands;
Or, a first target sound signal in two paths of target sound signals in the two output devices belongs to a first target frequency band, and a second target sound signal in the other two paths of target sound signals belongs to a second target frequency band.
In practice, referring to both fig. 2 and fig. 4, the spatial audio processing device may be a headset, such as an over-ear gaming headset, with the two speakers of the first output device 101 located at the left ear cup 103 of the headset and the two speakers of the second output device 102 located at the right ear cup 104. That is, the left ear cup 103 includes a first left speaker (woofer L), i.e. speaker 1, and a second left speaker (tweeter L), i.e. speaker 2; the right ear cup 104 includes a first right speaker (woofer R), i.e. speaker 3, and a second right speaker (tweeter R), i.e. speaker 4. As can be seen in fig. 4, the left ear cup 103 contains the two speakers of one output device (speaker 1 and speaker 2), and the right ear cup 104 contains the two speakers of the other output device (speaker 3 and speaker 4).
In this way, the left and right ear cups each contain a speaker responsible for the high-frequency response and a speaker responsible for the low-frequency response, which improves both the low-frequency and high-frequency effects to a certain extent.
In some embodiments, where the device is a headset, the first target acoustic signal comprises a left first target acoustic signal and a right first target acoustic signal, and the second target acoustic signal comprises a left second target acoustic signal and a right second target acoustic signal; the first left loudspeaker is used for playing the left first target sound signal, and the second left loudspeaker is used for playing the left second target sound signal; the first right speaker is used for playing a right first target sound signal, and the second right speaker is used for playing a right second target sound signal.
The first left loudspeaker is positioned at the center of the left earmuff, and the second left loudspeaker is positioned at one side of the face shell in the left earmuff; the first right speaker is positioned in the center of the right earmuff; the second right speaker is located on one side of the face shell in the right ear cup. After the earphone is worn, the directions of central axes of the second left speaker and the second right speaker (namely, the vertical direction of the speakers) are the same as the direction of the auditory canal.
In other words, the low-frequency speaker is located at the center of the ear-cup shell, the high-frequency speaker is located at one side of the shell, and after the headset is worn, the axis of the high-frequency speaker coincides with the direction of the ear canal. That is, the 40 mm low-frequency speaker sits directly to the left/right of the ear, while the 10 mm high-frequency speaker sits to the front-left/front-right of the ear.
In practice, the face shell of the ear cup may be a plastic bracket to which the speaker is secured. The speaker and the plastic bracket can be sealed with glue, ensuring that low frequencies do not leak.
In the embodiment of the disclosure, the earphone's speakers are expanded from a single speaker into a two-unit array. Placing two speakers in different directions relative to the ear resolves the sound-source azimuth errors of single-speaker virtual immersive sound caused by differences in the shape and size of human ears, improving the immersive sound effect without requiring peripheral equipment for calibration and correction, and reducing the test and calibration steps.
In some embodiments, with continued reference to fig. 2, the spatial audio processing apparatus includes two output devices, the spatial audio processing apparatus further comprising: a digital signal processor 30, configured to process a left audio source signal 201 and a right audio source signal 202 in the audio source signal 20 by using a spatial audio algorithm, so as to obtain a left spatial sound signal 31 and a right spatial sound signal 32;
wherein the left spatial sound signal 31 is transmitted to a first one of the two output devices 101 and the right spatial sound signal 32 is transmitted to a second one of the two output devices 102.
In some embodiments, each processing path includes an equalizer, a dynamic range controller, and a digital-to-analog converter in series.
Referring to fig. 2, the processing paths 11 each include an equalizer 111, a dynamic range controller 112, and a digital-to-analog converter 113, which are sequentially connected in series; the processing paths 21 each include an equalizer 211, a dynamic range controller 212, and a digital-to-analog converter 213, in series.
The equalizer is used for performing frequency response adjustment on the spatial sound signal in the processing path to obtain a frequency-response-adjusted spatial sound signal, i.e., adjusting the frequency response of the left and right spatial sound signals to a certain range;
the dynamic range controller is used for performing amplitude adjustment on the frequency-response-adjusted spatial sound signal to obtain an amplitude-adjusted spatial sound signal, i.e., proportionally compressing the amplitude of large digital signals (sound effects) to a certain value so that the output power is not overloaded and the EQ-processed signal does not damage the speaker;
the digital-to-analog converter is used for performing digital-to-analog conversion on the space acoustic signal with the amplitude adjusted to obtain a first target acoustic signal, namely converting the digital signal into an analog signal (analog current or analog voltage), so as to push the loudspeaker to work.
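The DRC stage described above can be sketched as a simple static compressor; the threshold and ratio values below are illustrative assumptions.

```python
def compress(signal, threshold=0.8, ratio=4.0):
    """Proportionally compress amplitude beyond |threshold| by `ratio`,
    so an EQ-boosted signal cannot overload the output stage."""
    out = []
    for x in signal:
        mag = abs(x)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if x >= 0 else -mag)
    return out

eq_boosted = [0.2, 0.9, 1.6, -1.2]
compressed = compress(eq_boosted)
# 1.6 exceeds the 0.8 threshold by 0.8; the excess is reduced to 0.8/4 = 0.2
```

A production DRC would use attack/release smoothing rather than a per-sample static curve, but the amplitude-limiting role is the same.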
In the embodiments of the present disclosure, each processing path is composed of simple devices, which simplifies the structure of the spatial audio processing apparatus.
In implementation, the first processing path 11 and the second processing path 12 in the first one of the two output devices 101 share an equalizer 111, a dynamic range controller 112, and a digital-to-analog converter 113; the equalizer 211, dynamic range controller 212, and digital-to-analog converter 213 are shared by the first processing path 21 and the second processing path 22 in the second one of the two output devices 102. This saves layout area, thereby reducing the volume of the spatial audio processing device.
In the embodiment of the disclosure, the spatial audio processing device is illustrated as an earphone, with speaker 1 in the first output device and speaker 3 in the second output device both being low-frequency speakers, and speaker 2 in the first output device and speaker 4 in the second output device both being high-frequency speakers. A fixed distance L1 exists between the two speakers in an ear cup (the distance between the centers of their diaphragms is L1); the distance between the 10 mm speaker and the ear is L2, and the distance between the 40 mm speaker and the ear is L3. The phase difference at a fixed frequency Hx is then given by formula (1):
Pha = ((L3 − L2) × Hx × 360°) / V    formula (1);
where V = 340 meters per second (m/s); therefore, when the two speakers are at fixed distances from the ear, the phase difference is constant.
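Formula (1) can be checked numerically; the distances below are illustrative assumptions, not dimensions from the disclosure.

```python
SPEED_OF_SOUND = 340.0  # V in m/s, as stated in the text

def phase_difference_deg(l3_m, l2_m, freq_hz):
    """Formula (1): Pha = ((L3 - L2) * Hx * 360 degrees) / V."""
    return ((l3_m - l2_m) * freq_hz * 360.0) / SPEED_OF_SOUND

# Assumed geometry: the 40 mm speaker is 5 mm farther from the ear.
pha = phase_difference_deg(l3_m=0.020, l2_m=0.015, freq_hz=3400.0)
# (0.005 m * 3400 Hz * 360 deg) / (340 m/s) = 18 degrees
```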
At present, because human ears differ in shape and size, the time difference and phase difference produced by the immersive-sound virtualization algorithm differ from the actual values, so some listeners perceive a positional offset. In the embodiment of the disclosure, there are two speakers with a fixed phase difference and fixed directions, so the virtualization algorithm can be corrected according to these known directions (assume the angle directly in front of the head is 0 degrees (°) and angles increase counterclockwise; the 40 mm speaker is at the 90° position, the 10 mm speaker is at angle X with 0° < X < 90°, and the ideal left channel is preset at 30°).
The main cues by which the human ear judges a sound source's direction are the time difference and the loudness difference between the ears, where the time difference is equivalent to a phase difference: at a fixed frequency Hx, the phase is Pha = (L × Hx × 360°)/V, where L is the distance from the sound source to the ear. Therefore, adjusting the amplitude or phase of the left and right source signals in the DSP adjusts the loudness difference and time difference, reducing the azimuth error of a single-speaker virtualization algorithm (the 10 mm tweeters are located to the front-left and front-right of the ears, so those channels need not be virtualized by the algorithm, reducing the front-back azimuth errors caused by ear differences). With delay and phase processing, the sense of direction from the two speakers can be greatly improved, which greatly benefits the immersive sound effect.
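Adjusting the loudness difference and time difference in the DSP amounts to applying a gain and a sample delay to one channel; the values below are illustrative assumptions.

```python
def apply_gain_and_delay(signal, gain, delay_samples):
    """Scale the channel (loudness difference) and shift it later in time
    by a whole number of samples (time/phase difference)."""
    return [0.0] * delay_samples + [s * gain for s in signal]

left = [1.0, 0.5, 0.25]
right = [1.0, 0.5, 0.25]
# Shift the perceived source toward the left: attenuate and delay the right channel.
right_adjusted = apply_gain_and_delay(right, gain=0.7, delay_samples=2)
```

Fractional-sample delays and frequency-dependent phase shifts would use interpolation or all-pass filters; this integer-delay sketch shows only the principle.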
In a practical spatial audio processing device application, referring to fig. 6, the following 5 steps may be taken to determine and debug the device effect:
step S1, respectively testing the transfer functions from the different sound channels to a human head at the center of the sound field, denoted the first head-related transfer function;
The sound sources here may be loudspeakers. When testing the first head-related transfer function, a microphone must be placed at the tympanic membrane or outer ear of a person or artificial head, and a sound signal played at different positions in space, for example using the arrangement of 7 loudspeakers shown in fig. 7. The speaker positions in fig. 7 are denoted by the same reference numerals C, L, R, LS, RS, RLS and RRS as the 7.1-channel signal. The speakers of the respective channels are located on a circle centered on the head; "C", directly in front of the head, indicates the center-channel speaker position. L and R are at angles of 30° and 330°, representing the positions of the left-channel and right-channel speakers, respectively. RLS and RRS are at angles between 120° and 150° and between 210° and 240°, representing the positions of the speakers of the left rear surround channel signal and the right rear surround channel signal, respectively.
Through step S1, the frequency response and phase between the sound signals emitted by the 7 loudspeakers and the person's two ears can be measured; meanwhile, the frequency response and phase of the sound signal received at the microphone can be measured. Dividing the two frequency responses yields the first head-related transfer function.
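The division described above can be sketched per frequency bin; the complex sample values below are illustrative assumptions (in practice they would come from an FFT of each measured response).

```python
def estimate_hrtf(ear_response, source_response, eps=1e-12):
    """Per-bin complex ratio H(f) = Ear(f) / Source(f); eps avoids
    division by zero in empty bins."""
    return [e / (s + eps) for e, s in zip(ear_response, source_response)]

# Assumed complex frequency-domain samples (e.g. FFT bins of each recording).
ear = [1.0 + 0j, 0.5 + 0.5j, 0.25 + 0j]
src = [1.0 + 0j, 1.0 + 0j, 0.5 + 0j]
hrtf = estimate_hrtf(ear, src)
```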
Step S2, respectively testing transfer functions from the speakers (spk) 1 to 4 to the head of a person in the center of the sound field, and recording the transfer functions as a second head related transfer function;
step S2 is to test the second head related transfer function with the loudspeaker number 1-4 playing the sound signal. The number 1 loudspeaker in the number 1-4 loudspeakers is the low-frequency loudspeaker in the left earmuff, the number 2 loudspeaker is the high-frequency loudspeaker in the left earmuff, the number 3 loudspeaker is the low-frequency loudspeaker in the right earmuff, and the number 4 loudspeaker is the high-frequency loudspeaker in the right earmuff.
Step S3, debugging a DSP algorithm according to the difference between the first head related transfer function and the second head related transfer function;
If the first head-related transfer function and the second head-related transfer function agree, the DSP algorithm is effective. If they do not agree, i.e. the frequency response or phase is offset, two parameters of the DSP algorithm, delay and phase, need to be adjusted. Step S3 thus adjusts the parameters of the DSP algorithm according to the first and second head-related transfer functions.
Step S4: with the debugged algorithm enabled, test the transfer functions from speakers No. 1-4 to the head of the person at the center of the sound field, denoted the third head-related transfer function;
that is, step S4 enables the DSP algorithm after debugging and tests the transfer function from speakers No. 1-4 to the head at the center of the sound field.
Step S5, comparing the parameter differences between step S1 and step S4;
here, the frequency response and phase of the first head-related transfer function from step S1 and of the third head-related transfer function from step S4 are compared; if both the frequency response and the phase are within their error ranges, the debugging is complete.
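The step-S5 comparison can be sketched as a bin-by-bin check that both magnitude and phase agree within error ranges; the tolerance values and transfer-function samples below are illustrative assumptions.

```python
import cmath
import math

def within_tolerance(ref, test, mag_tol_db=3.0, phase_tol_deg=10.0):
    """Return True when every bin of two complex transfer functions matches
    in magnitude (dB) and phase (degrees, wrapped) within tolerance."""
    for h1, h2 in zip(ref, test):
        mag_db = abs(20.0 * math.log10(abs(h1) / abs(h2)))
        d = math.degrees(cmath.phase(h1) - cmath.phase(h2))
        phase = abs((d + 180.0) % 360.0 - 180.0)  # wrap difference into [-180, 180]
        if mag_db > mag_tol_db or phase > phase_tol_deg:
            return False
    return True

first_hrtf = [1.0 + 0j, 0.5 + 0.5j]    # from step S1
third_hrtf = [0.95 + 0j, 0.45 + 0.5j]  # from step S4, close to the reference
done = within_tolerance(first_hrtf, third_hrtf)
```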
The description of the method embodiments above is similar to that of the apparatus embodiments above, with similar benefits as the apparatus embodiments. For technical details not disclosed in the method embodiments of the present disclosure, please refer to the description of the apparatus embodiments of the present disclosure for understanding.
In the several embodiments provided by the present disclosure, it should be understood that the disclosed apparatus and methods may be implemented in other manners. The apparatus embodiments described above are only illustrative; for example, the division of the units is only a logical functional division, and there may be other divisions in practice, such as: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling between the components shown or discussed may be direct or indirect.
The units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; they may be located in one place or distributed across a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The features disclosed in the several method or apparatus embodiments provided in the present disclosure may be arbitrarily combined without any conflict to obtain new method embodiments or apparatus embodiments.
While the foregoing describes embodiments of the present disclosure, their protection scope is not limited thereto; any changes and substitutions readily conceivable by those skilled in the art within the technical scope of the embodiments fall within that scope. Therefore, the protection scope of the embodiments of the present disclosure shall be subject to the protection scope of the claims.
Claims (13)
1. A spatial audio processing apparatus, the apparatus comprising: the system comprises a spatial audio processing module, a spatial sound signal adjusting module and a spatial sound signal processing module;
The spatial audio processing module is used for: acquiring an audio source signal; processing the audio source signal by using a spatial audio algorithm to obtain a spatial sound signal;
the spatial sound signal adjustment module is used for: dynamically adjusting the frequency response and the amplitude of the spatial sound signal to obtain a first target sound signal;
the spatial sound signal processing module is used for: filtering the first target sound signal to obtain a second target sound signal;
wherein the first target acoustic signal is a full-band acoustic signal and the second target acoustic signal is a high-frequency acoustic signal; the high-frequency acoustic signal is an acoustic signal at or above a preset frequency.
2. The spatial audio processing device of claim 1, wherein the audio source signal comprises a left audio source signal, the spatial sound signal comprises a left spatial sound signal, the first target sound signal comprises a left first target sound signal, and the second target sound signal comprises a left second target sound signal;
the spatial audio processing device further comprises a first left speaker and a second left speaker with different working frequencies, wherein the left first target sound signal is output through the first left speaker, and the left second target sound signal is output through the second left speaker.
3. The spatial audio processing device of claim 1 or 2, wherein the audio source signal comprises a right audio source signal, the spatial sound signal comprises a right spatial sound signal, the first target sound signal comprises a right first target sound signal, and the second target sound signal comprises a right second target sound signal;
the spatial audio processing device further comprises a first right speaker and a second right speaker with different working frequencies, wherein the right first target sound signal is output through the first right speaker, and the right second target sound signal is output through the second right speaker.
4. A method of spatial audio processing, the method comprising:
acquiring an audio source signal;
processing the audio source signal by using a spatial audio algorithm to obtain a spatial sound signal;
dynamically adjusting the frequency response and the amplitude of the spatial sound signal to obtain a first target sound signal;
filtering the first target sound signal to obtain a second target sound signal;
wherein the first target acoustic signal is a full-band acoustic signal and the second target acoustic signal is a high-frequency acoustic signal; the high-frequency acoustic signal is an acoustic signal at or above a preset frequency.
5. A headphone comprising the spatial audio processing device of any one of claims 1 to 3.
6. A spatial audio processing device, the device comprising:
at least two output devices, each of the output devices comprising a first processing path, a second processing path, two speakers; wherein:
the first processing path is used for dynamically adjusting the frequency response and the amplitude of the spatial sound signal to obtain a first target sound signal, and outputting the first target sound signal to a loudspeaker connected with the first processing path for playing; the spatial sound signal is obtained by processing the acquired audio source signal by using a spatial audio algorithm;
the second processing path is used for filtering the first target sound signal to obtain a second target sound signal, and outputting the second target sound signal to a loudspeaker connected with the second processing path for playing;
wherein the first target acoustic signal is a full-band acoustic signal and the second target acoustic signal is a high-frequency acoustic signal; the high-frequency acoustic signal is an acoustic signal at or above a preset frequency;
in each of the output devices, the first target sound signal and the second target sound signal belong to different target frequency bands, and the operating frequency band of each of the speakers includes a target frequency band of the output target sound signal, wherein the target sound signal includes the first target sound signal or the second target sound signal.
7. The spatial audio processing device of claim 6, wherein the device comprises two output means, the device further comprising:
the digital signal processor is used for processing the left audio source signal and the right audio source signal in the audio source signals by utilizing a spatial audio algorithm to obtain a left spatial sound signal and a right spatial sound signal;
wherein the left spatial sound signal is transmitted to a first one of the two output devices and the right spatial sound signal is transmitted to a second one of the two output devices.
8. The spatial audio processing device of claim 7, wherein each processing path comprises an equalizer, a dynamic range controller, and a digital-to-analog converter in series in sequence; wherein:
the equalizer is used for performing frequency response adjustment on the spatial sound signal in the processing path to obtain a spatial sound signal with the adjusted frequency response;
the dynamic range controller is used for carrying out amplitude adjustment on the spatial sound signal after the frequency response adjustment to obtain the spatial sound signal after the amplitude adjustment;
the digital-to-analog converter is used for performing digital-to-analog conversion on the amplitude-adjusted spatial sound signal to obtain a first target sound signal.
9. The spatial audio processing device of claim 6, wherein the second processing path comprises an RC filter; wherein:
the RC filter comprises a resistor and a capacitor which are connected in parallel;
and the RC filter is used for filtering the first target sound signal to obtain a second target sound signal.
10. The spatial audio processing apparatus according to claim 7, wherein the first target acoustic signal in the first output device and the first target acoustic signal in the second output device belong to different target frequency bands; the second target sound signal in the first output device and the second target sound signal in the second output device belong to different target frequency bands;
or, a first target sound signal in two paths of target sound signals in the two output devices belongs to a first target frequency band, and a second target sound signal in the other two paths of target sound signals belongs to a second target frequency band.
11. The spatial audio processing apparatus of claim 8, wherein the first processing path and the second processing path in a first one of the two output devices share an equalizer, a dynamic range controller, and a digital-to-analog converter;
The first processing path and the second processing path in a second one of the two output devices share an equalizer, a dynamic range controller, and a digital-to-analog converter.
12. The spatial audio processing device of claim 7, wherein the spatial audio processing device is a headset, the first left speaker and the second left speaker of the first output device are located at a left earmuff of the headset, and the first right speaker and the second right speaker of the second output device are located at a right earmuff of the headset.
13. The spatial audio processing device of claim 12, wherein the first target sound signal comprises a left first target sound signal and a right first target sound signal, and the second target sound signal comprises a left second target sound signal and a right second target sound signal;
the first left speaker is used for playing the left first target sound signal, and the second left speaker is used for playing the left second target sound signal; the first right speaker is used for playing the right first target sound signal, and the second right speaker is used for playing the right second target sound signal;
the first left loudspeaker is positioned in the center of the left earmuff, and the second left loudspeaker is positioned on one side of the face shell in the left earmuff; the first right speaker is positioned in the center of the right earmuff; the second right speaker is located at one side of the face shell in the right ear muff.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310182302.4A CN116208907A (en) | 2023-02-17 | 2023-02-17 | Spatial audio processing device, apparatus, method and headphone |
PCT/CN2023/110025 WO2024169133A1 (en) | 2023-02-17 | 2023-07-28 | Spatial audio processing apparatus, device, method, and headphones |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310182302.4A CN116208907A (en) | 2023-02-17 | 2023-02-17 | Spatial audio processing device, apparatus, method and headphone |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116208907A true CN116208907A (en) | 2023-06-02 |
Family
ID=86507426
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310182302.4A Pending CN116208907A (en) | 2023-02-17 | 2023-02-17 | Spatial audio processing device, apparatus, method and headphone |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116208907A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024169133A1 (en) * | 2023-02-17 | 2024-08-22 | 深圳市倍思科技有限公司 | Spatial audio processing apparatus, device, method, and headphones |
CN117499850A (en) * | 2023-12-26 | 2024-02-02 | 荣耀终端有限公司 | Audio data playing method and electronic equipment |
CN117499850B (en) * | 2023-12-26 | 2024-05-28 | 荣耀终端有限公司 | Audio data playing method and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5894634B2 (en) | Determination of HRTF for each individual | |
CN104219604B (en) | Stereo playback method of loudspeaker array | |
CN116208907A (en) | Spatial audio processing device, apparatus, method and headphone | |
US20150110310A1 (en) | Method for reproducing an acoustical sound field | |
US10341799B2 (en) | Impedance matching filters and equalization for headphone surround rendering | |
CN110035376A (en) | Come the acoustic signal processing method and device of ears rendering using phase response feature | |
JP2020506639A (en) | Audio signal processing method and apparatus | |
JPH01192299A (en) | Stereophonic sound collector | |
CN106664499A (en) | Audio signal processing apparatus | |
CN111294724A (en) | Spatial repositioning of multiple audio streams | |
US7921016B2 (en) | Method and device for providing 3D audio work | |
US20120101609A1 (en) | Audio Auditioning Device | |
US11962984B2 (en) | Optimal crosstalk cancellation filter sets generated by using an obstructed field model and methods of use | |
US6990210B2 (en) | System for headphone-like rear channel speaker and the method of the same | |
CN116233730A (en) | Spatial audio processing device, apparatus, method and headphone | |
US11678111B1 (en) | Deep-learning based beam forming synthesis for spatial audio | |
CN109923877B (en) | Apparatus and method for weighting stereo audio signal | |
US11653163B2 (en) | Headphone device for reproducing three-dimensional sound therein, and associated method | |
US7050596B2 (en) | System and headphone-like rear channel speaker and the method of the same | |
CN107172568A (en) | A kind of stereo sound field calibrator (-ter) unit and calibration method | |
CN110312198B (en) | Virtual sound source repositioning method and device for digital cinema | |
US6983054B2 (en) | Means for compensating rear sound effect | |
JPS61245698A (en) | Acoustic characteristic measuring instrument | |
WO2024169133A1 (en) | Spatial audio processing apparatus, device, method, and headphones | |
O’Donovan et al. | Spherical microphone array based immersive audio scene rendering |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||