WO2024069796A1 - Sound space construction device, sound space construction system, program, and sound space construction method - Google Patents
- Publication number
- WO2024069796A1 (PCT/JP2022/036165)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sound
- unit
- sound source
- data
- stereophonic
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/005—Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/40—Visual indication of stereophonic sound image
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/20—Arrangements for obtaining desired frequency or directional characteristics
- H04R1/32—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/302—Electronic adaptation of stereophonic sound system to listener position or orientation
- H04S7/303—Tracking of listener position or orientation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2430/00—Signal processing covered by H04R, not provided for in its groups
- H04R2430/20—Processing of the output signals of the acoustic transducers of an array for obtaining a desired directivity characteristic
- H04R2430/23—Direction finding using a sum-delay beam-former
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2430/00—Signal processing covered by H04R, not provided for in its groups
- H04R2430/20—Processing of the output signals of the acoustic transducers of an array for obtaining a desired directivity characteristic
- H04R2430/25—Array processing for suppression of unwanted side-lobes in directivity characteristics, e.g. a blocking matrix
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
- H04S2400/11—Positioning of individual sound objects, e.g. moving airplane, within a sound field
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
- H04S2400/15—Aspects of sound capture and related signal processing for recording or reproduction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2420/00—Techniques used in stereophonic systems covered by H04S but not provided for in its groups
- H04S2420/03—Application of parametric coding in stereophonic audio systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2420/00—Techniques used in stereophonic systems covered by H04S but not provided for in its groups
- H04S2420/11—Application of ambisonics in stereophonic audio systems
Definitions
- This disclosure relates to a sound space construction device, a sound space construction system, a program, and a sound space construction method.
- Patent Document 1 discloses an apparatus adapted to modify the directional characteristics of captured directional audio in response to spatial data of a microphone system capturing the directional audio. This allows the directional characteristics of the directional audio to be modified in response to a movement of the listening position.
- an object of one or more aspects of the present disclosure is to make it possible to reproduce a sound field at a free position while the sound collection device is fixed.
- a sound space construction device includes a sound acquisition unit that acquires sound data including sounds from multiple sound sources, a sound source determination unit that determines multiple sound source positions, which are the positions of the multiple sound sources, from the sound data, a sound extraction unit that generates multiple extracted sound data by extracting the sounds indicated by the sound data for each sound source and generating extracted sound data indicating the extracted sounds, a format conversion unit that converts the format of the multiple extracted sound data into a stereophonic format to generate multiple stereophonic sounds corresponding to the multiple sound sources, a position acquisition unit that acquires an auditory position, which is a position at which sounds are heard, a movement processing unit that calculates the angle and distance between the auditory position and each of the multiple sound source positions, an angle and distance adjustment unit that adjusts each of the multiple stereophonic sounds by the angle and distance corresponding to each of the multiple sound source positions, thereby generating multiple adjusted stereophonic sounds, which are the multiple stereophonic sounds at the auditory position, and a superimposition unit that superimposes the multiple adjusted stereophonic sounds.
- a sound space construction system is a sound space construction system including a sound space construction device and a sound collection device connected to the sound space construction device via a network and generating sound data including sounds from multiple sound sources, the sound space construction device including a communication unit that communicates with the sound collection device, a sound acquisition unit that acquires the sound data via the communication unit, a sound source determination unit that determines multiple sound source positions that are the positions of the multiple sound sources from the sound data, and a sound extraction unit that extracts sounds indicated by the sound data for each sound source and generates extracted sound data indicating the extracted sounds, thereby generating multiple extracted sound data.
- the sound space construction device further includes a format conversion unit that converts the format of the multiple extracted sound data into a stereophonic format to generate a plurality of stereophonic sounds corresponding to the plurality of sound sources; a position acquisition unit that acquires an auditory position where sound is heard; a movement processing unit that calculates the angle and distance between the auditory position and each of the plurality of sound source positions; an angle and distance adjustment unit that adjusts each of the plurality of stereophonic sounds by the angle and distance corresponding to each of the plurality of sound source positions to generate a plurality of adjusted stereophonic sounds that are the plurality of stereophonic sounds at the auditory position; and a superimposition unit that superimposes the plurality of adjusted stereophonic sounds.
- the program causes a computer to function as an audio acquisition unit that acquires audio data including audio from multiple sound sources, a sound source determination unit that determines multiple sound source positions, which are the positions of the multiple sound sources, from the audio data, an audio extraction unit that generates multiple extracted audio data by extracting the audio represented by the audio data for each sound source and generating extracted audio data representing the extracted audio, a format conversion unit that converts the format of the multiple extracted audio data into a stereophonic format to generate multiple stereophonic sounds corresponding to the multiple sound sources, a position acquisition unit that acquires an auditory position, which is a position at which audio is heard, a movement processing unit that calculates the angle and distance between the auditory position and each of the multiple sound source positions, an angle and distance adjustment unit that generates multiple adjusted stereophonic sounds, which are the multiple stereophonic sounds at the auditory position, by adjusting each of the multiple stereophonic sounds with the angle and distance corresponding to each of the multiple sound source positions, and a superimposition unit that superimposes the multiple adjusted stereophonic sounds.
- a sound space construction method includes obtaining audio data including audio from a plurality of sound sources, determining from the audio data a plurality of sound source positions that are the positions of the plurality of sound sources, extracting audio represented by the audio data for each sound source, and generating extracted audio data representing the extracted audio, thereby generating a plurality of extracted audio data, converting the format of the plurality of extracted audio data into a stereophonic format to generate a plurality of stereophonic sounds corresponding to the plurality of sound sources, obtaining an auditory position that is a position at which the audio is heard, calculating an angle and distance between the auditory position and each of the plurality of sound source positions, adjusting each of the plurality of stereophonic sounds by an angle and distance corresponding to each of the plurality of sound source positions, thereby generating a plurality of adjusted stereophonic sounds that are a plurality of stereophonic sounds at the auditory position, and superimposing the plurality of adjusted stereophonic sounds.
- FIG. 1 is a block diagram illustrating a schematic configuration of a sound space construction device according to a first embodiment.
- FIG. 2 is a block diagram illustrating a schematic configuration of a voice extraction unit.
- FIG. 3 is a block diagram showing a schematic configuration of a computer.
- FIG. 4 is a first diagram for explaining an example of processing accompanying a movement of the auditory position.
- FIG. 5 is a second diagram for explaining an example of processing accompanying a movement of the auditory position.
- FIG. 6 is a third diagram for explaining an example of processing accompanying a movement of the auditory position.
- FIG. 7 is a block diagram illustrating a schematic configuration of a sound space construction system according to a second embodiment.
- FIG. 8 is a block diagram illustrating a schematic configuration of a sound collection device according to the second embodiment.
- FIG. 9 is a block diagram illustrating a schematic configuration of a sound space construction device according to the second embodiment.
- FIG. 10 is a block diagram illustrating a schematic configuration of a sound space construction device according to a third embodiment.
- FIG. 1 is a block diagram showing a schematic configuration of a sound space construction device 100 according to the first embodiment.
- the sound space construction device 100 includes a voice acquisition unit 101, a sound source determination unit 102, a voice extraction unit 103, a format conversion unit 104, a position acquisition unit 105, a movement processing unit 106, an angle distance adjustment unit 107, a superimposition unit 108, and an output processing unit 109.
- the voice acquisition unit 101 acquires voice data including voices from a plurality of sound sources.
- the voice acquisition unit 101 acquires voice data generated by a sound collection device (not shown) such as a microphone.
- the voice of the voice data is preferably captured by an Ambisonics microphone, which is a microphone compatible with the Ambisonics method, but may be captured by multiple non-directional microphones.
- the voice acquisition unit 101 may acquire voice data from a sound collection device via a connection I/F (InterFace) not shown, or may acquire voice data from a network such as the Internet via a communication I/F not shown.
- the acquired voice data is provided to the sound source determination unit 102.
- the sound source determining unit 102 determines a plurality of sound source positions from the audio data. For example, the sound source determining unit 102 performs sound source number determination for determining the number of sound sources included in the audio data, and sound source position estimation for estimating the sound source position, which is the position of a sound source included in the audio data.
- a publicly known technique may be used to determine the number of sound sources.
- the following document 1 describes a method for estimating the number of sound sources using independent component analysis.
- the sound source determination unit 102 may also identify sound sources and determine the number of sound sources by analyzing an image represented by image data obtained from an imaging device such as a camera (not shown). In other words, the sound source determination unit 102 may determine the positions of multiple sound sources using an image of a space containing multiple sound sources. For example, the position of an object that is a sound source can be determined based on the direction and size of the object.
- a publicly known technique may be used for estimating the sound source position.
- the following document 2 describes a method for estimating the sound source position using the beamforming method and the MUSIC method.
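To make the beamforming-based position estimation concrete, the following is a minimal NumPy sketch of a delay-and-sum direction-of-arrival scan; the function name, array geometry, and test signal are illustrative assumptions, not taken from the patent, and a production system would use more robust estimators such as MUSIC:

```python
import numpy as np

def estimate_doa_delay_and_sum(signals, mic_x, fs, c=343.0):
    """Scan candidate angles with a delay-and-sum beamformer and return
    the angle (rad) whose steered response power is maximal.

    signals: (n_mics, n_samples) array; mic_x: mic coordinates (m) on a line.
    """
    angles = np.linspace(-np.pi / 2, np.pi / 2, 181)
    freqs = np.fft.rfftfreq(signals.shape[1], 1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    powers = []
    for theta in angles:
        delays = mic_x * np.sin(theta) / c               # per-mic delay (s)
        # Undo each channel's propagation delay in the frequency domain, sum.
        aligned = spectra * np.exp(2j * np.pi * freqs * delays[:, None])
        powers.append(np.sum(np.abs(aligned.sum(axis=0)) ** 2))
    return angles[int(np.argmax(powers))]

# Simulate a 1 kHz tone arriving from +30 degrees at a 4-mic line array.
fs, c = 16000, 343.0
mic_x = np.arange(4) * 0.05                              # 5 cm spacing
t = np.arange(1024) / fs
true_delays = mic_x * np.sin(np.deg2rad(30.0)) / c
signals = np.stack([np.sin(2 * np.pi * 1000.0 * (t - d)) for d in true_delays])
estimated = estimate_doa_delay_and_sum(signals, mic_x, fs)
```

With the delays aligned exactly at the true angle, the steered power peaks at (or within the 1-degree grid step of) 30 degrees.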
- the voice data and sound source number data indicating the number of sound sources determined for the voice data are provided to the voice extraction unit 103 .
- Sound source position data indicating the sound source position estimated by the sound source position estimation is provided to a movement processing unit 106 .
- the voice extraction unit 103 extracts the voice represented by the voice data for each sound source and generates extracted voice data representing the extracted voice, thereby generating a plurality of extracted voice data.
- Each of the plurality of extracted voice data corresponds to a respective one of the plurality of sound sources.
- the audio extraction unit 103 extracts extracted audio data, which is audio data for each sound source, from the audio data.
- the audio extraction unit 103 generates extracted audio data corresponding to one sound source among the multiple extracted audio data by subtracting remaining data obtained by separating audio from one sound source included in the multiple sound sources from the audio data.
- the extracted audio data is provided to the format conversion unit 104.
- FIG. 2 is a block diagram showing a schematic configuration of the voice extraction unit 103.
- the voice extraction unit 103 includes a noise reduction unit 110 and an extraction processing unit 111 .
- the noise reduction unit 110 reduces noise from the voice data. Any known technique may be used as the noise reduction method.
- the noise reduction unit 110 may reduce noise using a Generalized Sidelobe Canceller (GSC) described in the following document 5.
- the extraction processing unit 111 extracts extracted audio data, which is audio data for each sound source, from the processed audio data.
- the extraction processing unit 111 includes a sound source separation unit 112 , a phase adjustment unit 113 , and a subtraction unit 114 .
- the sound source separation unit 112 separates the sound data for each sound source from the processed sound data to generate separated sound data.
- a publicly known method may be used to separate the sound data for each sound source.
- the sound source separation unit 112 performs separation using a technique called ILRMA (Independent Low-Rank Matrix Analysis) described in the following document 4.
- the phase adjustment unit 113 extracts the phase rotation given for each sound source in the signal processing used for sound source separation in the sound source separation unit 112, and generates phase-adjusted sound data by giving the processed sound data an opposite phase rotation that cancels the phase rotation.
- the phase-adjusted sound data is given to the subtraction unit 114.
- the subtraction unit 114 extracts extracted audio data, which is audio data for each sound source, by subtracting the phase-adjusted audio data from the processed audio data for each sound source.
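The phase-adjustment-and-subtraction pipeline above can be illustrated with a toy single-channel example; the signals and the fixed phase rotation `phi` are invented for illustration, and a real system would obtain the separated signal from the ILRMA stage rather than construct it:

```python
import numpy as np

# Toy mixture of two sources (stand-ins for the sounds of two sound sources).
fs = 16000
t = np.arange(fs) / fs
source_a = np.sin(2 * np.pi * 440.0 * t)
source_b = 0.5 * np.sin(2 * np.pi * 880.0 * t)
mixture = source_a + source_b

# Pretend a separation stage recovered source_b but introduced a phase
# rotation of phi, mimicking the rotation added by the signal processing
# used for sound source separation.
phi = 0.3
separated_b = np.fft.irfft(np.fft.rfft(source_b) * np.exp(1j * phi), n=len(t))

# Phase adjustment: apply the opposite rotation to cancel phi, then subtract
# the phase-adjusted signal from the mixture to extract the other source.
phase_adjusted_b = np.fft.irfft(np.fft.rfft(separated_b) * np.exp(-1j * phi),
                                n=len(t))
extracted_a = mixture - phase_adjusted_b
residual = np.max(np.abs(extracted_a - source_a))
```

Because the opposite rotation exactly cancels `phi`, the subtraction leaves `source_a` up to floating-point error.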
- the format conversion unit 104 converts the format of the multiple extracted sound data into a stereophonic format, thereby generating multiple stereophonic sounds corresponding to multiple sound sources.
- the format conversion unit 104 converts the extracted audio data into a stereophonic format.
- the format conversion unit 104 converts the format of the extracted audio data into the Ambisonics B format, which is a stereophonic format, to generate stereophonic data representing a stereophonic sound.
- the format conversion unit 104 converts the Ambisonics A format of the extracted sound data into the Ambisonics B format.
- the method of converting from the Ambisonics A format to the Ambisonics B format may use a known technique.
- the following document 5 describes a method of converting from the Ambisonics A format to the Ambisonics B format.
- the format conversion unit 104 can convert the format of the extracted sound data into the Ambisonics B format using known technology.
- the following document 6 describes a method of generating the Ambisonics B format by generating bidirectionality using beamforming on the results of sound collected by omnidirectional microphones.
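For reference, the classic first-order conversion from the four tetrahedral A-format capsule signals (LFU, RFD, LBD, RBU) to the B-format components W, X, Y, Z is a fixed sum/difference matrix. The sketch below is illustrative and omits the capsule equalization filters that a real converter (such as the one in document 5) requires:

```python
def a_to_b_format(lfu, rfd, lbd, rbu):
    """Convert tetrahedral A-format capsule signals to first-order B-format.

    lfu/rfd/lbd/rbu: left-front-up, right-front-down, left-back-down,
    right-back-up capsule samples (scalars or NumPy arrays).
    """
    w = 0.5 * (lfu + rfd + lbd + rbu)   # omnidirectional pressure component
    x = 0.5 * (lfu + rfd - lbd - rbu)   # front/back figure-of-eight
    y = 0.5 * (lfu - rfd + lbd - rbu)   # left/right figure-of-eight
    z = 0.5 * (lfu - rfd - lbd + rbu)   # up/down figure-of-eight
    return w, x, y, z

# A sample arriving identically at all four capsules is pure pressure:
w, x, y, z = a_to_b_format(1.0, 1.0, 1.0, 1.0)
```

In that case only W is non-zero, since equal capsule signals carry no directional information.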
- the position acquisition unit 105 acquires an auditory position, which is a position where sound is heard. For example, the position acquisition unit 105 acquires the auditory position at which the user listens to sound in the virtual space by receiving a specification of the auditory position from the user via an input I/F (not shown) such as a mouse or keyboard. Here, it is assumed that the user can move in the virtual space, so the position acquisition unit 105 acquires the auditory position periodically or each time a movement of the user is detected. Then, the position acquisition unit 105 provides the movement processing unit 106 with position data indicating the acquired auditory position.
- the movement processing unit 106 calculates the angle and distance between the hearing position and each of a plurality of sound source positions. For example, the movement processing unit 106 calculates the angle and distance between the hearing position and each sound source position from the hearing position indicated by the position data and the sound source position indicated by the sound source position data. Then, the movement processing unit 106 provides angle and distance data indicating the calculated angle and distance for each sound source to the angle and distance adjustment unit 107 .
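The angle-and-distance computation is plain two-dimensional geometry; a minimal helper (hypothetical names, horizontal plane only) might look like this:

```python
import math

def angle_and_distance(auditory_pos, source_pos):
    """Return (angle in radians via atan2, Euclidean distance) from the
    auditory position to a sound source position, both as (x, y) tuples."""
    dx = source_pos[0] - auditory_pos[0]
    dy = source_pos[1] - auditory_pos[1]
    return math.atan2(dy, dx), math.hypot(dx, dy)

# A source one unit ahead and one unit to the side of the listener:
theta, d = angle_and_distance((0.0, 0.0), (1.0, 1.0))
```

`atan2` handles all four quadrants, so the angle stays correct as the listener walks past a source.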
- the angle and distance adjustment unit 107 adjusts each of the plurality of stereophonic sounds by the angle and distance corresponding to each of the plurality of sound source positions, thereby generating a plurality of adjusted stereophonic sounds, which are the plurality of stereophonic sounds at the auditory position.
- the angle distance adjustment unit 107 adjusts the stereophonic sound data for each sound source so that the angle and distance are at the angle and distance indicated by the angle distance data.
- the angular distance adjustment unit 107 can easily change the angle corresponding to the arrival direction of a sound from a sound source in the Ambisonics B format in accordance with the Ambisonics standard.
- the angular distance adjustment unit 107 also adjusts the amplitude of the stereophonic sound data according to the distance indicated by the angular distance data. For example, if the distance between the hearing position and the sound source is half the distance between the capture position and the sound source when the audio data was acquired, the angular distance adjustment unit 107 increases the amplitude by 6 dB. In other words, the angular distance adjustment unit 107 may adjust the relationship between distance and amplitude according to, for example, the inverse-square law.
- the angle and distance adjustment unit 107 provides the superimposition unit 108 with adjusted stereophonic sound data that indicates an adjusted stereophonic sound, which is a stereophonic sound with adjusted angle and distance, for each sound source.
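One possible sketch of the per-source adjustment follows; the helper is invented, the rotation shown is a first-order yaw rotation of the B-format sound field about the vertical axis, and the gain follows the 1/r amplitude relationship described above (+6 dB when the distance halves):

```python
import numpy as np

def adjust_b_format(w, x, y, z, yaw, d_ref, d_new):
    """Rotate a first-order B-format signal by `yaw` radians about the
    vertical axis and scale its amplitude by d_ref / d_new."""
    gain = d_ref / d_new                    # halving d_new adds ~6 dB
    x_rot = np.cos(yaw) * x - np.sin(yaw) * y
    y_rot = np.sin(yaw) * x + np.cos(yaw) * y
    return gain * w, gain * x_rot, gain * y_rot, gain * z

# Rotate a purely frontal component by 90 degrees at half the distance:
w2, x2, y2, z2 = adjust_b_format(1.0, 1.0, 0.0, 0.0, np.pi / 2, 2.0, 1.0)
```

Only X and Y mix under a yaw rotation, which is why the Ambisonics representation makes this per-source angle change cheap.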
- the superimposition unit 108 superimposes a plurality of adjusted stereophonic sounds.
- the superimposing unit 108 superimposes the adjusted stereophonic data for each sound source.
- the superimposing unit 108 adds together the sound signals represented by the adjusted stereophonic data for each sound source. In this way, the superimposing unit 108 generates synthetic sound data representing the added sound signals.
- the synthetic sound data is provided to the output processing unit 109.
- the output processing unit 109 generates output sound data indicating the output sound by converting the channel-based sound represented by the synthetic sound data into binaural sound, which is sound to be heard with both ears.
- a publicly known method may be used to convert the channel-based sound into binaural sound.
- the following document 7 describes a method of converting channel-based sound into binaural sound.
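As a rough stand-in for the binaural conversion of document 7 (which is not reproduced here), a B-format signal can at least be previewed in stereo by pointing two virtual cardioid microphones left and right; the FuMa-style 1/√2 weighting on W is an assumption of this sketch, and a real implementation would convolve virtual-speaker feeds with HRTFs instead:

```python
import numpy as np

def b_format_to_stereo(w, x, y, az=np.pi / 2):
    """Decode first-order B-format to stereo with two virtual cardioids
    at +az and -az radians (crude preview, not true binaural output)."""
    left = 0.5 * (np.sqrt(2.0) * w + np.cos(az) * x + np.sin(az) * y)
    right = 0.5 * (np.sqrt(2.0) * w + np.cos(-az) * x + np.sin(-az) * y)
    return left, right

# Encode a unit plane wave from the listener's left (FuMa convention:
# W = s / sqrt(2), X = s * cos(azimuth), Y = s * sin(azimuth), azimuth 90 deg):
s = 1.0
left, right = b_format_to_stereo(s / np.sqrt(2.0), 0.0, s)
```

A source on the left lands entirely in the left channel, since the right-facing cardioid has its null toward it.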
- the output processing unit 109 outputs the output sound data to an audio output device such as a speaker via a connection I/F (not shown), for example.
- the output processing unit 109 outputs the output sound data to an audio output device such as a speaker via a communication I/F (not shown).
- the above-described sound space construction device 100 can be realized by a computer 10 as shown in FIG. 3.
- the computer 10 includes an auxiliary storage device 11 such as a hard disk drive (HDD) and a solid state drive (SSD), a memory 12, a processor 13 such as a central processing unit (CPU), an input I/F 14 such as a keyboard and a mouse, a connection I/F 15 such as a universal serial bus (USB), and a communication I/F 16 such as a network interface card (NIC).
- the voice acquisition unit 101, sound source determination unit 102, voice extraction unit 103, format conversion unit 104, position acquisition unit 105, movement processing unit 106, angle distance adjustment unit 107, superimposition unit 108 and output processing unit 109 can be realized by the processor 13 loading a program stored in the auxiliary storage device 11 into the memory 12 and executing the program.
- the program may be downloaded to the auxiliary storage device 11 from a recording medium via a reader/writer (not shown) or from a network via the communication I/F 16, and then loaded onto the memory 12 and executed by the processor 13.
- the program may also be loaded directly onto the memory 12 from a recording medium via a reader/writer or from a network via the communication I/F 16, and executed by the processor 13.
- the direction from which a sound arrives from each sound source can be changed according to the direction in which the user is facing.
- the angle between the user 22 and the first sound source 20 changes from angle θ1 to angle θ2,
- and the angle between the user 22 and the second sound source 21 changes from angle θ3 to angle θ4.
- the conventional Ambisonics method can accommodate uniform angle changes, such as changes in the user's orientation, but cannot accommodate angle changes for each sound source, as shown in Figure 4.
- extracted audio data from the first sound source 20 and extracted audio data from the second sound source 21 are extracted from the audio data and processed.
- in the first embodiment, when the user 22 moves from the first hearing position 23 to the second hearing position 24, the angle between the user 22 and the first sound source 20 changes from a first angle θ1 to a second angle θ2. Furthermore, the first embodiment also changes the intensity of the sound from the first sound source 20 in response to the change from a first distance d1 between the first hearing position 23 and the first sound source 20 to a second distance d2 between the second hearing position 24 and the first sound source 20.
- similarly, when the user 22 moves from the first hearing position 23 to the second hearing position 24, the first embodiment changes the angle between the user 22 and the second sound source 21 from a third angle θ3 to a fourth angle θ4. Furthermore, the first embodiment also changes the intensity of the sound from the second sound source 21 in response to the change from a third distance d3 between the first hearing position 23 and the second sound source 21 to a fourth distance d4 between the second hearing position 24 and the second sound source 21.
- the data processed for each sound source is superimposed in the above manner, thereby changing the sound in accordance with the movement of the user. Therefore, according to the first embodiment, even if a plurality of sound sources exist, it is possible to reproduce a sound field at a free position in a virtual space.
- FIG. 7 is a block diagram showing a schematic configuration of a sound space construction system 230 according to the second embodiment.
- the sound space construction system 230 includes a sound space construction device 200 and a sound collection device 240 .
- the sound space construction device 200 and the sound collection device 240 are connected via a network 231 such as the Internet.
- the sound collection device 240 captures sound in a space separate from the sound space construction device 200 and transmits audio data representing the sound to the sound space construction device 200 via the network 231.
- FIG. 8 is a block diagram showing a schematic configuration of the sound collection device 240.
- the sound collection device 240 includes a sound collection unit 241 , a control unit 242 , and a communication unit 243 .
- the sound collection unit 241 captures sound in the space in which the sound collection device 240 is installed.
- the sound collection unit 241 can be configured, for example, with an Ambisonics microphone or multiple omnidirectional microphones.
- the control unit 242 controls the processing in the sound collection device 240 .
- the control unit 242 generates audio data indicating the sound captured by the sound collection unit 241 and sends the audio data to the sound space construction device 200 via the communication unit 243.
- when the control unit 242 receives, from the sound space construction device 200 via the communication unit 243, a direction from which to capture sound, it controls the sound collection unit 241 to generate sound data indicating the sound from that direction and sends it to the sound space construction device 200. This processing is performed when beamforming is carried out on the sound space construction device 200 side.
- the control unit 242 can be configured with a memory and a processor such as a CPU (Central Processing Unit) that executes a program stored in the memory.
- a program may be provided over a network, or may be provided by recording it on a recording medium. In other words, such a program may be provided, for example, as a program product.
- alternatively, the control unit 242 can be configured with a processing circuit such as a single circuit, a composite circuit, a processor operated by a program, a parallel processor operated by a program, an ASIC (Application Specific Integrated Circuit), or an FPGA (Field Programmable Gate Array).
- the control unit 242 can be realized by a processing circuit network.
- the communication unit 243 communicates with the sound space construction device 200 via the network 231 .
- the communication unit 243 transmits audio data to the sound space construction device 200 via the network 231 .
- the communication unit 243 receives instructions from the sound space construction device 200 via the network 231 and provides the instructions to the control unit 242 .
- the communication unit 243 can be realized by a communication I/F such as a NIC, although this is not shown.
- FIG. 9 is a block diagram showing a schematic configuration of a sound space construction device 200 according to the second embodiment.
- the sound space construction device 200 includes an audio acquisition unit 201, a sound source determination unit 202, an audio extraction unit 103, a format conversion unit 104, a position acquisition unit 105, a movement processing unit 106, an angle distance adjustment unit 107, a superimposition unit 108, an output processing unit 109, and a communication unit 220.
- the audio extraction unit 103, format conversion unit 104, position acquisition unit 105, movement processing unit 106, angle distance adjustment unit 107, superimposition unit 108, and output processing unit 109 of the sound space construction device 200 in embodiment 2 are similar to the audio extraction unit 103, format conversion unit 104, position acquisition unit 105, movement processing unit 106, angle distance adjustment unit 107, superimposition unit 108, and output processing unit 109 of the sound space construction device 100 in embodiment 1.
- the communication unit 220 communicates with the sound collection device 240 via a network 231 .
- the communication unit 220 receives audio data from the sound collection device 240 via the network 231 .
- the communication unit 220 transmits instructions to the sound collection device 240 via the network 231 .
- the communication unit 220 can be realized by the communication I/F 16 shown in FIG. 3.
- the audio acquisition unit 201 acquires audio data from the sound collection device 240 via the communication unit 220.
- the acquired audio data is provided to the sound source determination unit 202.
- the audio data is data indicating audio captured by the sound collection device 240 connected to the sound space construction device 200 via the network 231.
- the sound source determination unit 202 performs sound source number determination for determining the number of sound sources included in the voice data, and sound source position estimation for estimating the sound source positions that are the positions of the sound sources included in the voice data.
- the sound source number determination and sound source position estimation may be performed by the same processing as in the first embodiment.
- the sound source determination unit 202 sends an instruction indicating the direction in which to capture the sound to the sound collection device 240 via the communication unit 220.
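A direction estimate of this kind can be sketched with a classical time-difference-of-arrival approach. The snippet below is a minimal illustration assuming a two-microphone array and GCC-PHAT weighting; the disclosure itself names only beamforming and the MUSIC method, so the function names and the array geometry here are illustrative assumptions, not the patented processing.

```python
import numpy as np

def gcc_phat_delay(sig_a, sig_b, fs):
    """Estimate the delay (in seconds) of sig_b relative to sig_a
    using the GCC-PHAT cross-correlation."""
    n = len(sig_a) + len(sig_b)
    spec_a = np.fft.rfft(sig_a, n=n)
    spec_b = np.fft.rfft(sig_b, n=n)
    cross = np.conj(spec_a) * spec_b
    cross /= np.abs(cross) + 1e-12            # PHAT weighting: keep phase only
    corr = np.fft.irfft(cross, n=n)
    idx = int(np.argmax(np.abs(corr)))
    lag = idx if idx <= n // 2 else idx - n   # map FFT index to a signed lag
    return lag / fs

def doa_from_delay(delay_s, mic_spacing_m, speed_of_sound=343.0):
    """Convert the inter-microphone delay into a broadside arrival angle
    in degrees for a two-microphone array."""
    sin_theta = np.clip(delay_s * speed_of_sound / mic_spacing_m, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))
```

The estimated angle could then be sent as the capture-direction instruction; a real implementation would refine this with sub-sample interpolation and more microphones.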
- a virtual space can thus be constructed using sound transmitted from a remote location.
- FIG. 10 is a block diagram showing a schematic configuration of a sound space construction device 300 according to the third embodiment.
- the sound space construction device 300 includes an audio acquisition unit 101, a sound source determination unit 102, an audio extraction unit 103, a format conversion unit 104, a position acquisition unit 105, a movement processing unit 106, an angle distance adjustment unit 107, a superimposition unit 308, an output processing unit 109, a separate audio acquisition unit 321, and an angle distance adjustment unit 322.
- the audio acquisition unit 101, sound source determination unit 102, audio extraction unit 103, format conversion unit 104, position acquisition unit 105, movement processing unit 106, angle distance adjustment unit 107, and output processing unit 109 of the sound space construction device 300 in embodiment 3 are similar to the corresponding units of the sound space construction device 100 in embodiment 1.
- the movement processing unit 106 also provides the angle distance data to the angle distance adjustment unit 322.
- the separate audio acquisition unit 321 acquires audio data generated by a sound collection device (not shown) such as a microphone.
- the audio data acquired by the separate audio acquisition unit 321 is assumed to be audio data that differs from the audio data acquired by the audio acquisition unit 101 in at least one of the time and position at which it was captured.
- the audio data acquired by the separate audio acquisition unit 321 is also referred to as audio data for superimposition.
- the audio data to be superimposed is assumed to be data that has been separated for each sound source and converted into Ambisonics B format by processing similar to that performed by the sound source determination unit 102, the audio extraction unit 103, and the format conversion unit 104 in embodiment 1.
- the separate audio acquisition unit 321 acquires audio data for superimposition, which indicates a stereophonic sound for superimposition. This stereophonic sound is generated by converting, into a stereophonic sound format, audio data of audio that differs from the audio contained in the audio data acquired by the audio acquisition unit 101 in at least one of the time and the place of capture.
- the sound of the audio data to be superimposed is preferably captured by an Ambisonics microphone, which is a microphone compatible with the Ambisonics system, but may also be captured by multiple omnidirectional microphones.
- the separate audio acquisition unit 321 may also acquire audio data from a sound collection device via a connection I/F (not shown), or may acquire audio data from a network such as the Internet via a communication I/F (not shown). Furthermore, the separate audio acquisition unit 321 may acquire the audio data to be superimposed from a storage unit (not shown). The acquired audio data to be superimposed is provided to the angle distance adjustment unit 322.
- the angle distance adjustment unit 322 functions as a superimposition angle distance adjustment unit that generates, from the stereophonic sound for superimposition, an adjusted stereophonic sound for superimposition, which is a stereophonic sound at the hearing position.
- the angle distance adjustment unit 322 adjusts the audio data for superimposition for each sound source so that the angle and the distance become those indicated by the angle distance data. For example, when the audio data for superimposition indicates past audio at the same location as the audio of the audio data acquired by the audio acquisition unit 101, the angle distance adjustment unit 322 adjusts the angle and amplitude according to the angle distance data.
- the method of adjusting the angle and amplitude is the same as the adjustment method by the angle distance adjustment unit 107 in the first embodiment.
- when the audio data for superimposition indicates audio at a different location from the audio of the audio data acquired by the audio acquisition unit 101, a criterion for adjusting the angle and amplitude for each sound source according to the angle and distance indicated by the angle distance data is predefined, and the angle distance adjustment unit 322 adjusts the angle and amplitude of the audio data for superimposition according to that criterion.
- the angle distance adjustment unit 322 provides the superimposition unit 308 with adjusted audio data for superimposition that indicates the adjusted stereophonic sound for superimposition, which is a stereophonic sound for superimposition whose angle and distance have been adjusted for each sound source.
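The angle and amplitude adjustment described above can be illustrated for first-order Ambisonics B-format. The sketch below assumes four B-format channels (W, X, Y, Z), rotation about the vertical axis only, and a simple inverse-distance amplitude law; the disclosure does not fix the exact gain rule, so the 1/r choice and the function name are assumptions.

```python
import numpy as np

def adjust_bformat(w, x, y, z, rotate_rad, ref_dist_m, new_dist_m):
    """Turn the apparent arrival direction of a first-order B-format
    signal by rotate_rad about the vertical axis, then rescale the
    amplitude for the new listener-to-source distance."""
    c, s = np.cos(rotate_rad), np.sin(rotate_rad)
    x_rot = c * x - s * y            # horizontal rotation leaves W and Z unchanged
    y_rot = s * x + c * y
    gain = ref_dist_m / new_dist_m   # 1/r amplitude law (an assumption)
    return gain * w, gain * x_rot, gain * y_rot, gain * z
```

For a plane wave encoded from azimuth θ (X = cos θ, Y = sin θ per sample), the rotation moves the image to θ + rotate_rad, while the gain term accounts for the listener moving closer to or farther from the source.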
- the superimposition unit 308 superimposes the plurality of adjusted stereophonic sounds and the adjusted stereophonic sound for superimposition. For example, the superimposition unit 308 superimposes the adjusted stereophonic sound data for each sound source and the adjusted audio data for superimposition. Specifically, the superimposition unit 308 adds together the sound signals indicated by the adjusted stereophonic sound data for each sound source and the sound signal indicated by the adjusted audio data for superimposition. In this way, the superimposition unit 308 generates synthetic sound data representing the added sound signals. The synthetic sound data is provided to the output processing unit 109.
- the separate audio acquisition unit 321 and angle distance adjustment unit 322 described above can also be realized by the processor 13 shown in FIG. 3 loading a program stored in the auxiliary storage device 11 into the memory 12 and executing that program.
- sounds that do not actually occur at the site can be added to the virtual space, which can, for example, enhance the value of experiences that would otherwise require long-distance travel.
- the user can listen to past sounds at a hearing position in the virtual space, or sounds in a space other than the virtual space.
- the user can listen to sounds recorded inside Shuri Castle, which no longer exists, in the virtual space.
- 100, 200, 300 Sound space construction device 101, 201 Audio acquisition unit, 102, 202 Sound source determination unit, 103 Audio extraction unit, 104 Format conversion unit, 105 Position acquisition unit, 106 Movement processing unit, 107 Angular distance adjustment unit, 108, 308 Superimposition unit, 109 Output processing unit, 110 Noise reduction unit, 111 Extraction processing unit, 112 Sound source separation unit, 113 Phase adjustment unit, 114 Subtraction unit, 220 Communication unit, 321 Separate audio acquisition unit, 322 Angular distance adjustment unit, 230 Sound space construction system, 231 Network, 240 Sound collection device, 241 Sound collection unit, 242 Control unit, 243 Communication unit.
Description
FIG. 1 is a block diagram schematically showing the configuration of a sound space construction device 100 according to the first embodiment.
The sound space construction device 100 includes an audio acquisition unit 101, a sound source determination unit 102, an audio extraction unit 103, a format conversion unit 104, a position acquisition unit 105, a movement processing unit 106, an angle distance adjustment unit 107, a superimposition unit 108, and an output processing unit 109.
For example, the audio acquisition unit 101 acquires audio data generated by a sound collection device (not shown) such as a microphone. The audio of the audio data is preferably captured by an Ambisonics microphone, which is a microphone compatible with the Ambisonics system, but may also be captured by multiple omnidirectional microphones. The audio acquisition unit 101 may acquire the audio data from the sound collection device via a connection I/F (InterFace) (not shown), or may acquire the audio data from a network such as the Internet via a communication I/F (not shown). The acquired audio data is provided to the sound source determination unit 102.
For example, the sound source determination unit 102 performs sound source number determination, which determines the number of sound sources included in the audio data, and sound source position estimation, which estimates the sound source positions, i.e., the positions of the sound sources included in the audio data.
Sound source position data indicating the sound source positions obtained by the sound source position estimation is provided to the movement processing unit 106.
For example, the audio extraction unit 103 extracts, from the audio data, extracted audio data that is audio data for each sound source. Specifically, the audio extraction unit 103 generates the extracted audio data corresponding to one sound source among the plurality of extracted audio data by subtracting, from the audio data, the residual data obtained by separating the audio of that one sound source, included in the plurality of sound sources, from the audio data. The extracted audio data is provided to the format conversion unit 104.
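The subtraction-based extraction above can be written compactly. The sketch below is a simplified illustration that assumes the residual for each source has already been phase-aligned to the mixture (in the device, a phase adjustment precedes the subtraction); the function name is hypothetical.

```python
import numpy as np

def extract_per_source(mixture, residuals):
    """Return one extracted signal per source.

    mixture   : captured signal containing all sources (1-D array)
    residuals : residuals[i] is the separated "everything except
                source i" signal, already aligned to the mixture
    """
    # Subtracting the residual for source i leaves source i's contribution.
    return [mixture - r for r in residuals]
```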
The audio extraction unit 103 includes a noise reduction unit 110 and an extraction processing unit 111.
The noise reduction unit 110 reduces noise in the audio data. Any known technique may be used as the noise reduction method. For example, the noise reduction unit 110 may reduce noise using the GSC (Generalized Sidelobe Canceller) described in Reference 5 below. The processed audio data, in which the noise has been reduced, is provided to the extraction processing unit 111.
The extraction processing unit 111 includes a sound source separation unit 112, a phase adjustment unit 113, and a subtraction unit 114.
For example, the format conversion unit 104 converts the extracted audio data into a stereophonic sound format. Here, the format conversion unit 104 generates stereophonic sound data indicating a stereophonic sound by converting the format of the extracted audio data into Ambisonics B-format, which is a stereophonic sound format.
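The conversion to Ambisonics B-format can be illustrated with the conventional first-order encoding equations. The sketch below assumes the FuMa convention (W attenuated by 1/√2); the disclosure names Ambisonics B-format but not a specific normalization, so that choice is an assumption.

```python
import numpy as np

def encode_bformat(mono, azimuth_rad, elevation_rad):
    """Encode a mono source signal into first-order Ambisonics B-format
    (W, X, Y, Z) for a given arrival direction."""
    w = mono / np.sqrt(2.0)                                  # omnidirectional
    x = mono * np.cos(azimuth_rad) * np.cos(elevation_rad)   # front-back
    y = mono * np.sin(azimuth_rad) * np.cos(elevation_rad)   # left-right
    z = mono * np.sin(elevation_rad)                         # up-down
    return w, x, y, z
```

Applying this to each extracted audio signal, with the direction taken from the estimated source position, yields the per-source stereophonic sound data.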
Then, the position acquisition unit 105 provides position data indicating the acquired hearing position to the movement processing unit 106.
For example, the movement processing unit 106 calculates, for each sound source position, the angle and the distance between the hearing position indicated by the position data and the sound source position indicated by the sound source position data.
Then, the movement processing unit 106 provides angle distance data indicating the calculated angle and distance for each sound source to the angle distance adjustment unit 107.
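The per-source calculation above reduces to plane geometry when the hearing position and the sound source positions are given as coordinates. The sketch below assumes 2-D (x, y) coordinates in the horizontal plane, a simplification introduced here for illustration.

```python
import math

def angle_distance(hearing_pos, source_pos):
    """Return (azimuth in radians, distance) from the hearing position
    to one sound source position, both given as (x, y) tuples."""
    dx = source_pos[0] - hearing_pos[0]
    dy = source_pos[1] - hearing_pos[1]
    return math.atan2(dy, dx), math.hypot(dx, dy)
```

Repeating this for every estimated source position produces the angle distance data handed to the angle distance adjustment unit 107.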
For example, the angle distance adjustment unit 107 adjusts the stereophonic sound data for each sound source so that the angle and the distance become those indicated by the angle distance data.
For example, in accordance with the Ambisonics standard, the angle distance adjustment unit 107 can easily change the angle corresponding to the direction of arrival of sound from a sound source in Ambisonics B-format.
For example, the superimposition unit 108 superimposes the adjusted stereophonic sound data for each sound source. Specifically, the superimposition unit 108 adds together the sound signals indicated by the adjusted stereophonic sound data for each sound source. The superimposition unit 108 thereby generates synthetic sound data indicating the added sound signals. The synthetic sound data is provided to the output processing unit 109.
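The superimposition itself is a channel-wise sum. A minimal sketch, assuming each adjusted stereophonic sound is a (channels × samples) array of B-format channels:

```python
import numpy as np

def superimpose(adjusted_sounds):
    """Add the adjusted stereophonic sounds of all sources sample by
    sample to form the synthetic sound data for the hearing position."""
    return np.sum(np.stack(adjusted_sounds), axis=0)
```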
The computer 10 includes, for example, an auxiliary storage device 11 such as an HDD (Hard Disk Drive) or an SSD (Solid State Drive), a memory 12, a processor 13 such as a CPU (Central Processing Unit), an input I/F 14 such as a keyboard and a mouse, a connection I/F 15 such as USB (Universal Serial Bus), and a communication I/F 16 such as an NIC (Network Interface Card).
However, as shown in FIG. 4, when there are multiple sound sources such as a first sound source 20 and a second sound source 21, if a user 22 moves from a first hearing position 23 to a second hearing position 24, the angle between the user 22 and the first sound source 20 changes from angle θ1 to angle θ2, and the angle between the user 22 and the second sound source 21 changes from angle θ3 to angle θ4.
Therefore, according to the first embodiment, the sound field at a free position in the virtual space can be reproduced even when multiple sound sources exist.
FIG. 7 is a block diagram schematically showing the configuration of a sound space construction system 230 according to the second embodiment.
The sound space construction system 230 includes a sound space construction device 200 and a sound collection device 240.
The sound space construction device 200 and the sound collection device 240 are connected via a network 231 such as the Internet.
The sound collection device 240 includes a sound collection unit 241, a control unit 242, and a communication unit 243.
For example, the control unit 242 generates audio data indicating the audio captured by the sound collection unit 241 and sends the audio data to the sound space construction device 200 via the communication unit 243.
As described above, the control unit 242 can be realized by processing circuitry.
For example, the communication unit 243 transmits the audio data to the sound space construction device 200 via the network 231.
The communication unit 243 also receives instructions from the sound space construction device 200 via the network 231 and provides those instructions to the control unit 242.
The sound space construction device 200 includes an audio acquisition unit 201, a sound source determination unit 202, an audio extraction unit 103, a format conversion unit 104, a position acquisition unit 105, a movement processing unit 106, an angle distance adjustment unit 107, a superimposition unit 108, an output processing unit 109, and a communication unit 220.
For example, the communication unit 220 receives the audio data from the sound collection device 240 via the network 231.
The communication unit 220 also transmits instructions to the sound collection device 240 via the network 231.
The communication unit 220 can be realized by the communication I/F 16 shown in FIG. 3.
When estimating the sound source positions by, for example, the beamforming method or the MUSIC method, the sound source determination unit 202 sends an instruction indicating the direction in which to capture the audio to the sound collection device 240 via the communication unit 220.
FIG. 10 is a block diagram schematically showing the configuration of a sound space construction device 300 according to the third embodiment.
The sound space construction device 300 includes an audio acquisition unit 101, a sound source determination unit 102, an audio extraction unit 103, a format conversion unit 104, a position acquisition unit 105, a movement processing unit 106, an angle distance adjustment unit 107, a superimposition unit 308, an output processing unit 109, a separate audio acquisition unit 321, and an angle distance adjustment unit 322.
However, the movement processing unit 106 also provides the angle distance data to the angle distance adjustment unit 322.
The angle distance adjustment unit 322 adjusts the audio data for superimposition for each sound source so that the angle and the distance become those indicated by the angle distance data. For example, when the audio data for superimposition indicates past audio at the same location as the audio of the audio data acquired by the audio acquisition unit 101, the angle distance adjustment unit 322 may adjust the angle and the amplitude according to the angle distance data. The method of adjusting the angle and the amplitude is the same as the adjustment method used by the angle distance adjustment unit 107 in the first embodiment.
For example, the superimposition unit 308 superimposes the adjusted stereophonic sound data for each sound source and the adjusted audio data for superimposition. Specifically, the superimposition unit 308 adds together the sound signals indicated by the adjusted stereophonic sound data for each sound source and the sound signal indicated by the adjusted audio data for superimposition. The superimposition unit 308 thereby generates synthetic sound data indicating the added sound signals. The synthetic sound data is provided to the output processing unit 109.
Reference 2: F. Asano, "Array Signal Processing of Sound: Localization, Tracking and Separation of Sound Sources," Section 4.5, Corona Publishing, 2011.
Reference 3: F. Asano, "Array Signal Processing of Sound: Localization, Tracking and Separation of Sound Sources," Section 4.5, Corona Publishing, 2011.
Reference 4: Kitamura et al., "Blind Source Separation Based on Independent Low-Rank Matrix Analysis," IEICE Technical Report, EA2017-56, Vol. 117, No. 255, pp. 73-80, Toyama, October 2017.
Reference 5: R. Nishimura, "Ambisonics," The Journal of the Institute of Image Information and Television Engineers, Vol. 68, No. 8, pp. 616-620, 2014.
Reference 6: Japanese Patent No. 6742535.
Reference 7: Japanese Patent No. 4969978.
Claims (8)
- A sound space construction device comprising:
an audio acquisition unit that acquires audio data including audio from a plurality of sound sources;
a sound source determination unit that determines, from the audio data, a plurality of sound source positions that are the positions of the plurality of sound sources;
an audio extraction unit that generates a plurality of extracted audio data by extracting, for each sound source, the audio indicated by the audio data and generating extracted audio data indicating the extracted audio;
a format conversion unit that generates a plurality of stereophonic sounds corresponding to the plurality of sound sources by converting the format of the plurality of extracted audio data into a stereophonic sound format;
a position acquisition unit that acquires a hearing position that is a position at which the audio is heard;
a movement processing unit that calculates an angle and a distance between the hearing position and each of the plurality of sound source positions;
an angle distance adjustment unit that generates a plurality of adjusted stereophonic sounds, which are a plurality of stereophonic sounds at the hearing position, by adjusting each of the plurality of stereophonic sounds with the angle and the distance corresponding to each of the plurality of sound source positions; and
a superimposition unit that superimposes the plurality of adjusted stereophonic sounds.
- The sound space construction device according to claim 1, wherein the audio extraction unit generates the extracted audio data corresponding to one sound source among the plurality of extracted audio data by subtracting, from the audio data, the residual data obtained by separating the audio of that one sound source, included in the plurality of sound sources, from the audio data.
- The sound space construction device according to claim 1 or 2, wherein the sound source determination unit determines the plurality of sound source positions using an image of the space containing the plurality of sound sources.
- The sound space construction device according to any one of claims 1 to 3, wherein the audio data is data indicating audio captured by a sound collection device connected to the sound space construction device via a network.
- The sound space construction device according to any one of claims 1 to 4, further comprising:
a separate audio acquisition unit that acquires audio data for superimposition indicating a stereophonic sound for superimposition, which is a stereophonic sound generated by converting, into the stereophonic sound format, audio data of audio that differs from the audio included in the audio data in at least one of the time and the place of capture; and
a superimposition angle distance adjustment unit that generates, from the stereophonic sound for superimposition, an adjusted stereophonic sound for superimposition, which is a stereophonic sound at the hearing position,
wherein the superimposition unit superimposes the plurality of adjusted stereophonic sounds and the adjusted stereophonic sound for superimposition.
- A sound space construction system comprising a sound space construction device and a sound collection device that is connected to the sound space construction device via a network and generates audio data including audio from a plurality of sound sources,
wherein the sound space construction device comprises:
a communication unit that communicates with the sound collection device;
an audio acquisition unit that acquires the audio data via the communication unit;
a sound source determination unit that determines, from the audio data, a plurality of sound source positions that are the positions of the plurality of sound sources;
an audio extraction unit that generates a plurality of extracted audio data by extracting, for each sound source, the audio indicated by the audio data and generating extracted audio data indicating the extracted audio;
a format conversion unit that generates a plurality of stereophonic sounds corresponding to the plurality of sound sources by converting the format of the plurality of extracted audio data into a stereophonic sound format;
a position acquisition unit that acquires a hearing position that is a position at which the audio is heard;
a movement processing unit that calculates an angle and a distance between the hearing position and each of the plurality of sound source positions;
an angle distance adjustment unit that generates a plurality of adjusted stereophonic sounds, which are a plurality of stereophonic sounds at the hearing position, by adjusting each of the plurality of stereophonic sounds with the angle and the distance corresponding to each of the plurality of sound source positions; and
a superimposition unit that superimposes the plurality of adjusted stereophonic sounds.
- A program that causes a computer to function as:
an audio acquisition unit that acquires audio data including audio from a plurality of sound sources;
a sound source determination unit that determines, from the audio data, a plurality of sound source positions that are the positions of the plurality of sound sources;
an audio extraction unit that generates a plurality of extracted audio data by extracting, for each sound source, the audio indicated by the audio data and generating extracted audio data indicating the extracted audio;
a format conversion unit that generates a plurality of stereophonic sounds corresponding to the plurality of sound sources by converting the format of the plurality of extracted audio data into a stereophonic sound format;
a position acquisition unit that acquires a hearing position that is a position at which the audio is heard;
a movement processing unit that calculates an angle and a distance between the hearing position and each of the plurality of sound source positions;
an angle distance adjustment unit that generates a plurality of adjusted stereophonic sounds, which are a plurality of stereophonic sounds at the hearing position, by adjusting each of the plurality of stereophonic sounds with the angle and the distance corresponding to each of the plurality of sound source positions; and
a superimposition unit that superimposes the plurality of adjusted stereophonic sounds.
- A sound space construction method comprising:
acquiring audio data including audio from a plurality of sound sources;
determining, from the audio data, a plurality of sound source positions that are the positions of the plurality of sound sources;
generating a plurality of extracted audio data by extracting, for each sound source, the audio indicated by the audio data and generating extracted audio data indicating the extracted audio;
generating a plurality of stereophonic sounds corresponding to the plurality of sound sources by converting the format of the plurality of extracted audio data into a stereophonic sound format;
acquiring a hearing position that is a position at which the audio is heard;
calculating an angle and a distance between the hearing position and each of the plurality of sound source positions;
generating a plurality of adjusted stereophonic sounds, which are a plurality of stereophonic sounds at the hearing position, by adjusting each of the plurality of stereophonic sounds with the angle and the distance corresponding to each of the plurality of sound source positions; and
superimposing the plurality of adjusted stereophonic sounds.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2024542112A JP7558467B2 (ja) | 2022-09-28 | 2022-09-28 | Sound space construction device, sound space construction system, program, and sound space construction method |
PCT/JP2022/036165 WO2024069796A1 (ja) | 2022-09-28 | 2022-09-28 | Sound space construction device, sound space construction system, program, and sound space construction method |
DE112022007568.6T DE112022007568T5 (de) | 2022-09-28 | 2022-09-28 | Schallraumkonstruktionseinrichtung, schallraumkonstruktionssystem, programm und schallraumkonstruktionsverfahren |
US19/087,040 US20250220387A1 (en) | 2022-09-28 | 2025-03-21 | Sound space construction device, sound space construction system, storage medium storing program, and sound space construction method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2022/036165 WO2024069796A1 (ja) | 2022-09-28 | 2022-09-28 | Sound space construction device, sound space construction system, program, and sound space construction method |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US19/087,040 Continuation US20250220387A1 (en) | 2022-09-28 | 2025-03-21 | Sound space construction device, sound space construction system, storage medium storing program, and sound space construction method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024069796A1 true WO2024069796A1 (ja) | 2024-04-04 |
Family
ID=90476628
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/036165 WO2024069796A1 (ja) | 2022-09-28 | 2022-09-28 | Sound space construction device, sound space construction system, program, and sound space construction method |
Country Status (4)
Country | Link |
---|---|
US (1) | US20250220387A1 (ja) |
JP (1) | JP7558467B2 (ja) |
DE (1) | DE112022007568T5 (ja) |
WO (1) | WO2024069796A1 (ja) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2020527887A (ja) * | 2017-07-14 | 2020-09-10 | フラウンホーファー−ゲゼルシャフト・ツール・フェルデルング・デル・アンゲヴァンテン・フォルシュング・アインゲトラーゲネル・フェライン | 深度拡張DirAC技術またはその他の技術を使用して、拡張音場記述または修正音場記述を生成するための概念 |
JP2020167471A (ja) * | 2019-03-28 | 2020-10-08 | キヤノン株式会社 | 情報処理装置、情報処理方法、及びプログラム |
JP2020536286A (ja) * | 2017-10-04 | 2020-12-10 | フラウンホファー ゲセルシャフト ツール フェールデルンク ダー アンゲヴァンテン フォルシュンク エー.ファオ. | DirACベース空間オーディオコーディングに関する符号化、復号、シーン処理、および他の手順のための装置、方法、およびコンピュータプログラム |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3503592B1 (en) | 2017-12-19 | 2020-09-16 | Nokia Technologies Oy | Methods, apparatuses and computer programs relating to spatial audio |
- 2022
- 2022-09-28 DE DE112022007568.6T patent/DE112022007568T5/de active Pending
- 2022-09-28 WO PCT/JP2022/036165 patent/WO2024069796A1/ja active Application Filing
- 2022-09-28 JP JP2024542112A patent/JP7558467B2/ja active Active
- 2025
- 2025-03-21 US US19/087,040 patent/US20250220387A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
DE112022007568T5 (de) | 2025-05-08 |
JPWO2024069796A1 (ja) | 2024-04-04 |
US20250220387A1 (en) | 2025-07-03 |
JP7558467B2 (ja) | 2024-09-30 |
Legal Events
Code | Title | Details |
---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22960859; Country of ref document: EP; Kind code of ref document: A1 |
ENP | Entry into the national phase | Ref document number: 2024542112; Country of ref document: JP; Kind code of ref document: A |
WWE | Wipo information: entry into national phase | Ref document number: 202517002871; Country of ref document: IN |
WWP | Wipo information: published in national office | Ref document number: 202517002871; Country of ref document: IN |
WWE | Wipo information: entry into national phase | Ref document number: 112022007568; Country of ref document: DE |
WWP | Wipo information: published in national office | Ref document number: 112022007568; Country of ref document: DE |