WO2016002358A1 - Information processing apparatus, information processing method, and program - Google Patents
Information processing apparatus, information processing method, and program
- Publication number
- WO2016002358A1 (PCT/JP2015/063919)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- listening environment
- characteristic information
- music signal
- environment characteristic
- signal
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10K—SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
- G10K11/00—Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
- G10K11/16—Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
- G10K11/175—Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
- G10K11/178—Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
- G10K11/1787—General system configurations
- G10K11/17879—General system configurations using both a reference signal and an error signal
- G10K11/17881—General system configurations using both a reference signal and an error signal the reference signal being an acoustic signal, e.g. recorded with a microphone
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10K—SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
- G10K11/00—Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
- G10K11/16—Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
- G10K11/175—Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
- G10K11/178—Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
- G10K11/1781—Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase characterised by the analysis of input or output signals, e.g. frequency range, modes, transfer functions
- G10K11/17813—Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase characterised by the analysis of input or output signals, e.g. frequency range, modes, transfer functions characterised by the analysis of the acoustic paths, e.g. estimating, calibrating or testing of transfer functions or cross-terms
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10K—SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
- G10K11/00—Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
- G10K11/16—Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
- G10K11/175—Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
- G10K11/178—Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
- G10K11/1781—Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase characterised by the analysis of input or output signals, e.g. frequency range, modes, transfer functions
- G10K11/17821—Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase characterised by the analysis of input or output signals, e.g. frequency range, modes, transfer functions characterised by the analysis of the input signals only
- G10K11/17827—Desired external signals, e.g. pass-through audio such as music or speech
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10K—SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
- G10K11/00—Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
- G10K11/16—Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
- G10K11/175—Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
- G10K11/178—Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
- G10K11/1785—Methods, e.g. algorithms; Devices
- G10K11/17853—Methods, e.g. algorithms; Devices of the filter
- G10K11/17854—Methods, e.g. algorithms; Devices of the filter the filter being an adaptive filter
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10K—SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
- G10K11/00—Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
- G10K11/16—Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
- G10K11/175—Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
- G10K11/178—Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
- G10K11/1787—General system configurations
- G10K11/17885—General system configurations additionally using a desired external signal, e.g. pass-through audio such as music or speech
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10K—SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
- G10K15/00—Acoustics not otherwise provided for
- G10K15/08—Arrangements for producing a reverberation or echo sound
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M9/00—Arrangements for interconnection not involving centralised switching
- H04M9/08—Two-way loud-speaking telephone systems with means for conditioning the signal, e.g. for suppressing echoes for one or both directions of traffic
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/10—Earpieces; Attachments therefor ; Earphones; Monophonic headphones
- H04R1/1083—Reduction of ambient noise
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/04—Circuits for transducers, loudspeakers or microphones for correcting frequency response
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/10—Earpieces; Attachments therefor ; Earphones; Monophonic headphones
- H04R1/1008—Earpieces of the supra-aural or circum-aural type
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/10—Earpieces; Attachments therefor ; Earphones; Monophonic headphones
- H04R1/1041—Mechanical or electronic switches, or control elements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2410/00—Microphones
- H04R2410/05—Noise reduction with a separate noise microphone
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2460/00—Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
- H04R2460/01—Hearing devices using active noise cancellation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/005—Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R5/00—Stereophonic arrangements
- H04R5/033—Headphones for stereophonic communication
Definitions
- the present disclosure relates to an information processing apparatus, an information processing method, and a program.
- Patent Document 1 discloses a technique in which headphones are equipped with both a noise canceling function and a monitor function that superimposes an external audio signal (a so-called monitor signal) on the music signal and outputs the result, so that a noise reduction effect is obtained for the music signal while external sound can still be heard.
- the present disclosure proposes a new and improved information processing apparatus, information processing method, and program that can give a sense of openness to the user.
- According to the present disclosure, there is provided an information processing apparatus including: a listening environment characteristic information acquisition unit that acquires listening environment characteristic information indicating characteristics of a listening environment based on external sound collected by at least one microphone; and a music signal processing unit that filters a music signal with a filter characteristic based on the acquired listening environment characteristic information.
- According to the present disclosure, there is also provided an information processing method including: acquiring, by a processor, listening environment characteristic information indicating characteristics of a listening environment based on external sound collected by at least one microphone; and filtering, by the processor, a music signal with a filter characteristic based on the acquired listening environment characteristic information.
- According to the present disclosure, there is further provided a program for causing a computer processor to realize: a function of acquiring listening environment characteristic information indicating characteristics of a listening environment based on external sound collected by at least one microphone; and a function of filtering a music signal with a filter characteristic based on the acquired listening environment characteristic information.
- As described above, according to the present disclosure, listening environment characteristic information representing the acoustic characteristics of the listening space is acquired based on external sound, and the acoustic characteristics of the listening space are given to the music signal based on the acquired listening environment characteristic information. Therefore, music that blends more naturally with external sound, and thus a greater sense of openness, can be provided to the user.
- 1. First embodiment
- 1-1. Overview of first embodiment
- 1-2. System configuration
- 1-3. About listening environment characteristic information acquisition unit
- 1-4. Music signal processing unit
- 2-2. System configuration
- 2-3. About listening environment characteristic information acquisition unit
- 2-3-1. Music signal processor
- 3. Information processing method
- 4. Modified examples
- 4-1. Modified example in which sound pressure is adjusted
- 4-2.
- In the first embodiment, voice uttered by a user wearing headphones (hereinafter also referred to as uttered voice) is collected by a microphone as external sound.
- listening environment characteristic information representing the acoustic characteristics of a space where the user exists (hereinafter also referred to as a listening environment) is acquired based on the collected speech.
- the audio signal of the music content (hereinafter also referred to as a music signal) is filtered with a filter characteristic based on the acquired listening environment characteristic information.
- FIG. 1 is a schematic diagram illustrating a configuration example of the headphones according to the first embodiment.
- the headphone 100 includes a housing 140 attached to a user's ear, and a pair of microphones 110 a and 110 b provided on the outside and inside of the housing 140, respectively.
- FIG. 1 shows only the housing 140 that is worn on one ear of the user among the headphones 100, but the headphones 100 actually include a pair of housings 140.
- the other housing 140 may be attached to the other ear of the user.
- the headphones 100 may be so-called overhead headphones in which the pair of housings 140 are connected to each other by a support member that is curved in an arch shape, for example.
- the headphone 100 may be a so-called inner ear type headphone in which a pair of housings 140 are connected by a wire or a support member.
- The housing 140 is equipped with a driver unit (speaker) that generates sound by vibrating a diaphragm in accordance with the music signal, a cable for supplying the music signal to the driver unit, and other various configurations of general headphones.
- the microphone 110a provided outside the housing 140 is a microphone (hereinafter also referred to as an FF microphone 110a) provided for a noise canceling function by a so-called feedforward method. Based on the external sound collected by the FF microphone 110a, a sound signal (hereinafter also referred to as a noise cancellation signal) that cancels a sound that may be noise can be generated.
- the music signal on which the noise cancellation signal is superimposed is output from the speaker, so that music with reduced noise is provided to the user.
- the external sound collected by the FF microphone 110a may be used for a so-called monitor function that takes in the external sound and outputs it from a speaker.
- a sound signal (hereinafter also referred to as a monitor signal) for allowing the user to listen to the external sound can be generated.
- the music signal on which the monitor signal is superimposed is output from the speaker, so that external sound is provided to the user together with the music.
- the output of a sound corresponding to an audio signal from a speaker is also referred to as an output of an audio signal.
- the microphones 110a and 110b collecting sounds according to the audio signals are also referred to as collecting audio signals for convenience.
- the signals collected by the microphones 110a and 110b are also referred to as sound collection signals.
- the microphone 110b provided inside the housing 140 is a microphone (hereinafter also referred to as an FB microphone 110b) provided for a noise cancellation function by a so-called feedback method.
- the external sound that has leaked into the inside of the housing 140 is collected by the FB microphone 110b, and a noise cancellation signal can be generated based on the collected external sound.
- The headphones 100 can also function as an input/output device that inputs and outputs various types of information to and from an information processing device such as a smartphone.
- the user can input various instructions to the information processing apparatus by voice while wearing the headphones 100.
- the headphones 100 may function as a so-called headset, and the user may make a call via the headphones 100.
- In the first embodiment, the transfer function H2 of the user's listening environment is calculated as the listening environment characteristic information based on the user's uttered voice, such as the instructions or calls described above. Specifically, the transfer function H2 of the uttered voice of a user wearing the headphones 100 until it reaches the FF microphone 110a reflects the acoustic characteristics of the listening environment. In contrast, the transfer function H1 of the uttered voice until it reaches the FB microphone 110b is a transfer function of sound transmitted through the user's body, for example by flesh conduction and bone conduction, and does not reflect the acoustic characteristics of the listening environment. By comparing the sound collection signals of the two microphones, the transfer function H2 of the listening environment can therefore be calculated.
- Filtering based on the calculated transfer function H2 is performed on the music signal, so that music that blends more naturally with the external sound, taking the acoustic characteristics of the external environment into account, can be provided to the user. This makes it possible to give the user a sense of openness.
- FIG. 2 is a block diagram illustrating a configuration example of the sound adjustment system according to the first embodiment.
- the acoustic adjustment system 10 includes a microphone 110, a speaker 120, and a control unit 130.
- the microphone 110 collects sound and converts the sound into an electric signal, thereby acquiring a signal corresponding to the sound (that is, a sound collection signal).
- the microphone 110 corresponds to the microphones 110a and 110b shown in FIG. 1, and schematically shows these together.
- the microphone 110 collects external sound used for the noise cancellation function and the monitor function.
- the microphone 110 collects the user's uttered voice in order to acquire listening environment characteristic information.
- The collected sound signal from the microphone 110 is appropriately amplified by an amplifier 111 and converted into a digital signal by an ADC (Analog-to-Digital Converter) 112, and is then input to the listening environment characteristic information acquisition unit 131, the monitor signal generation unit 133, and the noise cancellation signal generation unit 134 of a control unit 130, which will be described later.
- an amplifier 111 and an ADC 112 are provided for each of the microphones 110a and 110b.
- the speaker 120 outputs a sound corresponding to the sound signal by vibrating the diaphragm according to the sound signal.
- the speaker 120 corresponds to a driver unit mounted on the headphones 100 shown in FIG.
- the speaker 120 can output a music signal that has been subjected to filtering based on the listening environment characteristic information (that is, the listening environment transfer function H 2 ).
- a noise cancellation signal and / or a monitor signal may be superimposed on the music signal output from the speaker 120.
- The speaker 120 is supplied with an audio signal that has been converted into an analog signal by a DAC (Digital-to-Analog Converter) 122 and appropriately amplified by an amplifier 121, and outputs the corresponding sound.
- The control unit 130 is configured by various processors such as a CPU (Central Processing Unit) and a DSP (Digital Signal Processor), and executes the various signal processes performed in the sound adjustment system 10.
- the control unit 130 includes a listening environment characteristic information acquisition unit 131, a music signal processing unit 132, a monitor signal generation unit 133, and a noise cancellation signal generation unit 134 as its functions.
- Each function of the control unit 130 can be realized by a processor constituting the control unit 130 operating according to a predetermined program.
- The processor constituting the control unit 130 may be mounted on the headphones 100 shown in FIG. 1, or on an information processing device different from the headphones 100 (for example, a mobile terminal such as a smartphone carried by the user).
- the function of the control unit 130 may be executed by a processor of an information processing apparatus such as a server provided on a network (so-called cloud).
- When the processor constituting the control unit 130 is mounted on a portable terminal or a server rather than on the headphones 100, the headphones 100, on which at least the speaker 120 and the microphone 110 are mounted, are worn by the user, and the various processes in the acoustic adjustment system 10 are executed by transmitting and receiving various types of information between the headphones 100 and the portable terminal or server.
- control unit 130 is connected to be able to communicate with an external device, and a music signal is input from the external device to the listening environment characteristic information acquisition unit 131 and the music signal processing unit 132 of the control unit 130.
- the external device may be a playback device capable of playing back music content such as a CD (Compact Disc) player, a DVD (Digital Versatile Disc) player, and a Blu-ray (registered trademark) player.
- the external device can read music signals recorded in accordance with various recording methods from various recording media.
- the above-described portable terminal may have the function of the external device (playback device).
- the listening environment characteristic information acquisition unit 131 acquires listening environment characteristic information representing the acoustic characteristics of the listening environment based on the external sound collected by the microphone 110.
- Specifically, the listening environment characteristic information acquisition unit 131 acquires the transfer function H2 of the listening environment as the listening environment characteristic information, based on the uttered voice of the user collected by the microphone 110.
- the listening environment characteristic information acquisition unit 131 provides information about the acquired transfer function H 2 to the music signal processing unit 132.
- the function of the listening environment characteristic information acquisition unit 131 will be described in detail below (1-3. About the listening environment characteristic information acquisition unit).
- The listening environment characteristic information acquisition unit 131 may start acquiring the listening environment characteristic information when a predetermined condition (hereinafter also referred to as a listening environment characteristic information acquisition condition), such as power-on or a prescribed timer count (that is, a predetermined timing), is satisfied.
- Alternatively, the listening environment characteristic information acquisition unit 131 may start acquiring the listening environment characteristic information at the timing when the user's utterance is detected.
- The listening environment characteristic information acquisition condition may also be, for example, that movement of the user is detected by a GPS (Global Positioning System) sensor mounted on the portable terminal, or that an operation input to the portable terminal is detected.
- the music signal processing unit 132 performs predetermined signal processing on the music signal based on the listening environment characteristic information acquired by the listening environment characteristic information acquisition unit 131.
- the music signal processing unit 132 filters the music signal based on the transfer function H 2 acquired by the listening environment characteristic information acquisition unit 131.
- For example, the music signal processing unit 132 filters the music signal with a filter having a filter characteristic that reflects the characteristics of the transfer function H2 of the listening space, thereby adding reverberation characteristics corresponding to the external environment (such as early reflection time and reverberation time) to the music signal.
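As a rough illustration of this kind of filtering (not the patent's implementation), the reverberation of the listening environment can be imparted by convolving the music signal with a time-domain impulse response derived from the transfer function H2. The function and variable names below are hypothetical:

```python
import numpy as np

def apply_listening_environment(music, h2_impulse):
    """Convolve the music signal with an impulse response derived from
    the listening-environment transfer function H2, so the output carries
    the environment's early reflections and reverberation."""
    # Trim to the input length so the result stays sample-aligned with
    # the monitor and noise-cancellation branches before the adder.
    return np.convolve(music, h2_impulse)[: len(music)]

# Toy check: an impulse "music" signal and a response with one reflection.
music = np.zeros(8)
music[0] = 1.0
h2_impulse = np.array([1.0, 0.0, 0.5])  # direct path + 50% echo 2 samples later
wet = apply_listening_environment(music, h2_impulse)
```

In a real system the impulse response would be obtained from the estimated H2 (for example by an inverse FFT), rather than chosen by hand as here.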
- The music signal subjected to the signal processing by the music signal processing unit 132 (hereinafter also referred to as the music signal after signal processing) is appropriately gain-adjusted by the variable amplifier 150a, and is then output to the speaker 120 via the DAC 122 and the amplifier 121.
- the music signal after the signal processing may be output to the speaker 120 in a state where the noise cancel signal and / or the monitor signal are added by the adder 160, as shown in FIG.
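The gain adjustment by the variable amplifiers 150a to 150c followed by summation in the adder 160 amounts to a weighted mix of the branches. A minimal sketch (the function and parameter names are ours, not the patent's):

```python
import numpy as np

def adder_output(music, monitor=None, cancel=None,
                 g_music=1.0, g_monitor=1.0, g_cancel=1.0):
    """Variable amplifiers followed by an adder: scale each branch by its
    gain and sum whichever branches are present into the speaker signal."""
    out = g_music * np.asarray(music, dtype=float)
    if monitor is not None:
        out = out + g_monitor * np.asarray(monitor, dtype=float)
    if cancel is not None:
        out = out + g_cancel * np.asarray(cancel, dtype=float)
    return out

# Music plus a monitor signal and a noise cancellation signal, unit gains.
mix = adder_output([1.0, 1.0], monitor=[0.5, 0.5], cancel=[-0.25, -0.25])
```

Omitting `monitor` or `cancel` corresponds to the case, noted later in the text, where those signals are simply not superimposed.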
- the function of the music signal processing unit 132 will be described in detail in (1-4. Music signal processing unit).
- Based on the external sound collected by the microphone 110, the monitor signal generation unit 133 generates a monitor signal, that is, a sound signal for allowing the user to listen to the external sound.
- the monitor signal generation unit 133 can adjust the sound related to the monitor signal (hereinafter also referred to as monitor sound) so that the external sound becomes a natural sound together with the sound leaked directly into the housing.
- the monitor signal generation unit 133 is configured by, for example, a high-pass filter (HPF) and a gain circuit, and a sound collection signal from the microphone 110 is input to the HPF via the amplifier 111 and the ADC 112.
- the cut-off frequency of the HPF may be set so as to remove a low-frequency component that is likely to contain an audible noise component.
- The monitor signal generated by the monitor signal generation unit 133 can be added to the music signal after signal processing by the adder 160, after its gain is appropriately adjusted by the variable amplifier 150b, and output from the speaker 120. By superimposing the monitor signal, the user can listen to external sound, such as an in-car announcement, together with music while wearing the headphones 100.
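A monitor path along these lines can be sketched as follows, with a first-order high-pass filter standing in for the HPF; the cutoff, gain, and function name are illustrative choices, not values from the patent:

```python
import numpy as np

def monitor_signal(pickup, cutoff_hz, fs, gain=1.0):
    """High-pass filter the FF microphone pickup to remove the
    low-frequency band where audible noise tends to concentrate,
    then apply a gain (first-order RC-style high-pass)."""
    rc = 1.0 / (2.0 * np.pi * cutoff_hz)
    dt = 1.0 / fs
    a = rc / (rc + dt)
    out = np.zeros(len(pickup))
    for n in range(1, len(pickup)):
        # y[n] = a * (y[n-1] + x[n] - x[n-1])
        out[n] = a * (out[n - 1] + pickup[n] - pickup[n - 1])
    return gain * out

# A constant (0 Hz) pickup is blocked entirely by the high-pass filter.
dc = monitor_signal(np.ones(64), cutoff_hz=200.0, fs=48000.0)
```

A product implementation would use a higher-order filter designed for the specific driver and housing, but the structure (HPF followed by a gain stage) is the same.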
- the noise cancellation signal generation unit 134 generates a noise cancellation signal that is an audio signal for canceling a noise component included in the external audio, based on the external audio collected by the microphone 110.
- the noise cancellation signal generation unit 134 includes an inverter that generates a signal having a phase opposite to that of the external audio signal, and a filter circuit that adjusts the cancellation band.
- For example, a signal characteristic α corresponding to a noise canceling system based on the FF method is set in the noise cancellation signal generation unit 134, and the noise cancellation signal generation unit 134 gives the signal characteristic α to the sound collection signal from the FF microphone 110a of the microphone 110.
- The signal characteristic α represents the signal characteristics (for example, frequency-amplitude characteristics and frequency-phase characteristics) to be given to the collected sound signal so that, taking into account the transfer function of each circuit and space in the FF noise canceling system, the external sound heard by the user is canceled.
- The filter circuit of the noise cancellation signal generation unit 134 can be configured to be able to give such a signal characteristic α to the collected sound signal.
- The noise cancellation signal generated by the noise cancellation signal generation unit 134 may be added to the music signal after signal processing by the adder 160, after its gain is appropriately adjusted by the variable amplifier 150c, and output from the speaker 120. By superimposing the noise cancellation signal, the user can listen to music with better sound quality, with reduced noise.
- the noise cancellation signal generation unit 134 may generate a noise cancellation signal corresponding to a noise canceling system based on the FB method. In that case, the noise cancellation signal generation unit 134 may be configured to generate a noise cancellation signal by giving a predetermined signal characteristic to the sound collection signal by the FB microphone 110b in the microphone 110.
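The feedforward branch can be sketched as shaping the FF pickup with the designed characteristic and inverting its phase. In the idealized case below the characteristic is a flat unit filter, so cancellation is exact; a real system only approximates this, since α must compensate the acoustic and circuit transfer functions:

```python
import numpy as np

def ff_noise_cancel(ff_pickup, alpha_fir):
    """Apply an FIR approximation of the signal characteristic (alpha) to
    the FF microphone pickup and invert the phase to form the cancel signal."""
    shaped = np.convolve(ff_pickup, alpha_fir)[: len(ff_pickup)]
    return -shaped

noise = np.sin(2 * np.pi * np.arange(256) / 32.0)
cancel = ff_noise_cancel(noise, np.array([1.0]))  # idealized flat characteristic
residual = noise + cancel  # what the user would hear: silence in the ideal case
```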
- For the monitor signal generation unit 133 and the noise cancellation signal generation unit 134, various known configurations that are generally used to generate a monitor signal and a noise cancellation signal may be applied. Detailed descriptions of their specific configurations are therefore omitted.
- For the functions of the monitor signal generation unit 133 and the noise cancellation signal generation unit 134, reference can be made, for example, to the description in Patent Document 1, a prior application by the applicant of the present application.
- Note that the generation of the monitor signal by the monitor signal generation unit 133 and the generation of the noise cancellation signal by the noise cancellation signal generation unit 134 are not necessarily performed. Even when the monitor signal and the noise cancellation signal are not superimposed, the music signal processed by the music signal processing unit 132 based on the listening environment characteristic information is output to the user, so that music with a more open feeling, which takes the listening environment into account, is provided to the user.
- Each process in the control unit 130 may be executed by, for example, one processor or one information processing apparatus, or by a plurality of processors or a plurality of information processing apparatuses in cooperation. Alternatively, as described above, these signal processes may be performed by an information processing apparatus, such as a server provided on a network (on the so-called cloud), or by a group of such apparatuses.
- the device configuration that can realize the acoustic adjustment system 10 according to the first embodiment is not limited to the configuration shown in FIG. 1 and may be arbitrary.
- the acoustic adjustment system 10 illustrated in FIG. 1 may be configured as an integrated device.
- An external device (playback device) that provides a music signal to the control unit 130 may also be included in the device.
- the device can be, for example, a headphone-type portable music player.
- FIG. 3 is a block diagram illustrating an example of a functional configuration of the listening environment characteristic information acquisition unit 131.
- The listening environment characteristic information acquisition unit 131 includes, as its functions, an FB microphone signal buffer unit 161, an FB microphone signal FFT unit 162, a transfer function calculation unit 163, an FF microphone signal buffer unit 164, and an FF microphone signal FFT unit 165.
- FIG. 3 illustrates the functional configuration of the listening environment characteristic information acquisition unit 131, together with an extract of the configuration of the acoustic adjustment system 10 that relates to each function of the listening environment characteristic information acquisition unit 131.
- S is a parameter representing a voice (uttered voice) uttered from the user's mouth when various instructions are given to the information processing apparatus or during a telephone call.
- Let H1 be the transfer function of the uttered voice from the user's mouth to the FB microphone 110b, and let H2 be the transfer function of the uttered voice from the user's mouth to the FF microphone 110a.
- The transfer function H1 represents the path along which the uttered voice reaches the FB microphone 110b via, for example, flesh conduction and bone conduction through the user's body.
- The transfer function H2 represents the path along which the uttered voice reaches the FF microphone 110a via the space in which the user is present (that is, the listening environment).
- The transfer function H1 thus indicates a sound transfer characteristic that does not include the acoustic characteristics of the listening environment (for example, reverberation characteristics, reflection characteristics due to wall surfaces, and the like), while the transfer function H2 can be said to indicate a sound transfer characteristic that reflects the acoustic characteristics of the external listening environment. Therefore, the transfer function H1 does not change depending on the listening environment and is a known value that can be acquired in advance, for example, by measurement in an anechoic room when the headphones 100 are designed. On the other hand, the transfer function H2 is an unknown value that changes with the listening environment.
- The listening environment characteristic information acquisition unit 131 can acquire, as listening environment characteristic information, the transfer function H2 of the listening space based on the user's uttered voice collected by the FF microphone 110a and the FB microphone 110b, respectively.
- the collected sound signal from the FB microphone 110b is appropriately amplified by the amplifier 111b, converted into a digital signal by the ADC 112b, and then input to the FB microphone signal buffer unit 161 of the listening environment characteristic information acquisition unit 131.
- the FB microphone signal buffer unit 161 buffers the sound pickup signal from the FB microphone 110b with a predetermined frame length, and provides it to the subsequent FB microphone signal FFT unit 162.
- The FB microphone signal FFT unit 162 performs a fast Fourier transform (FFT) on the collected sound signal and provides the result to the transfer function calculation unit 163 in the subsequent stage. The sound signal collected by the FB microphone 110b and input to the transfer function calculation unit 163 via the FB microphone signal buffer unit 161 and the FB microphone signal FFT unit 162 can be expressed as "S * H1" using the parameter S and the transfer function H1.
- the collected sound signal from the FF microphone 110a is appropriately amplified by the amplifier 111a, converted into a digital signal by the ADC 112a, and then input to the FF microphone signal buffer unit 164 of the listening environment characteristic information acquisition unit 131.
- the FF microphone signal buffer unit 164 buffers the sound collection signal from the FF microphone 110a with a predetermined frame length, and provides it to the FF microphone signal FFT unit 165 at the subsequent stage.
- Similarly, the FF microphone signal FFT unit 165 performs a fast Fourier transform on the collected sound signal and provides the result to the transfer function calculation unit 163 in the subsequent stage. The sound signal collected by the FF microphone 110a and input to the transfer function calculation unit 163 via the FF microphone signal buffer unit 164 and the FF microphone signal FFT unit 165 can be expressed as "S * H2" using the parameter S and the transfer function H2.
- Here, the signal S * H1 and the signal S * H2 are known values acquired as measured values as described above.
- Also, the transfer function H1 is a known value obtained by prior measurement. Therefore, the transfer function calculation unit 163 can calculate the transfer function H2 of the listening space based on the following formula (1): H2 = H1 × (S * H2) / (S * H1).
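- As a rough illustrative sketch (function and variable names are not from the patent), the calculation of formula (1) can be written with NumPy: the FB microphone frame carries S * H1 and the FF microphone frame carries S * H2, so dividing their spectra cancels the unknown speech spectrum S.

```python
import numpy as np

def estimate_h2(fb_frame, ff_frame, h1_freq, eps=1e-12):
    """Sketch of formula (1): the FB microphone observes S * H1 and the
    FF microphone observes S * H2, so dividing the two spectra cancels
    the unknown speech spectrum S and leaves H2 = H1 * (S*H2) / (S*H1).
    h1_freq is the known frequency response H1 measured at design time
    (e.g. in an anechoic room).  Names are illustrative."""
    s_h1 = np.fft.rfft(fb_frame)   # spectrum of the FB microphone frame: S * H1
    s_h2 = np.fft.rfft(ff_frame)   # spectrum of the FF microphone frame: S * H2
    return h1_freq * s_h2 / (s_h1 + eps)
```

Because the speech spectrum S appears in both numerator and denominator, the sound source does not need to be parameterized; only H1 must be known in advance.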
- the transfer function calculation unit 163 provides the calculated transfer function H 2 of the listening space to the music signal processing unit 132.
- In the music signal processing unit 132, various kinds of filtering are then performed on the music signal using the transfer function H2 of the listening space.
- the timing at which the FB microphone signal buffer unit 161 and the FF microphone signal buffer unit 164 start buffering the collected sound signal may be the timing at which the listening environment characteristic information acquisition condition is detected.
- FIG. 4 is a block diagram illustrating a configuration example of the music signal processing unit 132.
- FIG. 4 schematically shows an example of a filter circuit that can constitute the music signal processing unit 132.
- the music signal processing unit 132 can be preferably configured by an FIR (Finite Impulse Response) filter.
- By using, as a parameter of the FIR filter, the transfer function h in the time-domain representation obtained by an inverse Fourier transform of the transfer function H2 (frequency-domain representation) of the listening space acquired by the listening environment characteristic information acquisition unit 131, a filter circuit reflecting the acoustic characteristics of the space can be realized.
- Specifically, the music signal processing unit 132 obtains the transfer function h expressed in the time domain by the following formula (2), h(n) = (1/N) Σ_{k=0}^{N−1} H2(k) e^{j2πkn/N}, and then convolves the music signal with the FIR filter whose coefficients are h. Thereby, the acoustic characteristics of the listening space (for example, reverberation characteristics, frequency characteristics, and the like) are given to the music signal.
- N is the number of points of the discrete Fourier transform.
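- The two steps above can be sketched as follows (a minimal illustration with NumPy; names are not from the patent): an N-point inverse discrete Fourier transform converts H2 into the impulse response h, which is then used as the FIR coefficients.

```python
import numpy as np

def apply_listening_environment(music, h2_freq, n_points):
    """Sketch of formula (2) and the subsequent FIR filtering: the
    frequency-domain transfer function H2 is converted into a
    time-domain impulse response h by an N-point inverse discrete
    Fourier transform, and the music signal is convolved with h.
    Names are illustrative."""
    h = np.fft.irfft(h2_freq, n_points)   # formula (2): h = IDFT_N(H2)
    return np.convolve(music, h)          # FIR filtering with coefficients h
```

With a flat H2 (no reflections), h reduces to a unit impulse and the music signal passes through unchanged.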
- the music signal processing unit 132 outputs the music signal to which the acoustic characteristic of the listening space is given by filtering to the speaker 120 via the DAC 122 and the amplifier 121.
- Note that the music signal filtered by the music signal processing unit 132 may be output from the speaker 120 with the noise cancellation signal and/or the monitor signal superimposed by the adder 160. Accordingly, music that blends more naturally with the monitor sound can be provided to the user in a state where noise is further reduced.
- Alternatively, the transfer function H2 may be used in the frequency domain: the music signal is subjected to a discrete Fourier transform, and the transformed music signal is multiplied by the transfer function H2 in the frequency domain, whereby a filter circuit having the same effect can be realized. It is also possible to implement the FIR filter and the FFT in combination.
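- The equivalence of the two implementations can be checked with a short NumPy sketch (illustrative names, not the patent's implementation): zero-padding both signals to the full output length makes the circular (FFT-based) convolution coincide with the linear FIR convolution.

```python
import numpy as np

def fir_time_domain(music, h):
    # Time-domain FIR filtering: direct convolution with impulse response h.
    return np.convolve(music, h)

def fir_freq_domain(music, h):
    # Frequency-domain filtering with the same effect: discrete Fourier
    # transform the music signal, multiply by the transfer function, and
    # invert.  Zero-padding to the full output length avoids circular
    # wrap-around, so the result equals the linear convolution.
    n = len(music) + len(h) - 1
    return np.fft.irfft(np.fft.rfft(music, n) * np.fft.rfft(h, n), n)
```

For long impulse responses the frequency-domain form is usually cheaper, which is one motivation for combining the FIR filter with the FFT.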
- In the following description, the transfer function h in the time-domain expression is used as a parameter of the FIR filter.
- Note that if the newly obtained FIR filter parameter (that is, the transfer function h) does not differ significantly from the currently set value, the music signal processing unit 132 need not update the parameters of the FIR filter. For example, the music signal processing unit 132 may update the parameters only when the difference between the parameters currently set in the FIR filter and the new parameters obtained by the current measurement is larger than a predetermined threshold. If the characteristics of the FIR filter are changed too frequently, the music signal may fluctuate, which may impair the user's listening experience. Therefore, when a newly obtained parameter does not differ greatly from the current set value, not updating the parameter makes it possible to provide music to the user more stably.
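- The update rule above can be sketched as follows (the Euclidean distance metric and the threshold value are illustrative assumptions of this sketch; the patent does not specify the distance measure):

```python
import numpy as np

def maybe_update_fir(current_h, new_h, threshold):
    """Update the FIR filter coefficients only when the newly measured
    impulse response differs from the currently set one by more than a
    threshold, so the filter characteristic does not fluctuate with
    every measurement.  Returns the coefficients to use and whether an
    update occurred."""
    if np.linalg.norm(np.asarray(new_h) - np.asarray(current_h)) > threshold:
        return new_h, True     # significant change: adopt the new parameters
    return current_h, False    # minor change: keep the current parameters
```

Hysteresis of this kind trades responsiveness to environment changes against stability of the reproduced music.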
- As described above, in the first embodiment, the transfer function H2 of the listening space is acquired based on the user's uttered voice, and the acoustic characteristics of the listening space are imparted to the music signal based on the transfer function H2. Therefore, more open music that blends more naturally with external sounds can be provided to the user.
- Here, the transfer function H2 can be acquired at an arbitrary timing at which the user speaks as part of normal operation, for example, when the user gives a voice instruction to the information processing apparatus or makes a call using the telephone function. Therefore, even if the user does not speak deliberately for the purpose of acquiring the transfer function H2, the transfer function H2 is automatically acquired based on words the user utters for other purposes, and the music signal is corrected accordingly, which improves convenience for the user.
- In the second embodiment, a predetermined measurement sound is used as the external sound. As the measurement sound, for example, sound related to a music signal, uncorrelated noise in the listening environment, or the like can be used.
- the listening environment characteristic information is acquired based on the measurement sound collected by the microphone.
- the music signal is filtered with a filter characteristic based on the acquired listening environment characteristic information. As a result, music that more closely matches the external sound, reflecting the acoustic characteristics of the listening environment, is provided to the user.
- FIG. 5 is a schematic diagram illustrating a configuration example of the headphones according to the second embodiment.
- a headphone 200 includes a pair of housings 240 attached to a user's ears, and an arch-shaped support member 250 that connects the pair of housings 240 to each other.
- The headphones 200 include a driver unit 220b (speaker 220b) that is provided inside the housing 240 and generates sound by vibrating a diaphragm according to the music signal, a speaker 220a that outputs the music signal toward the listening environment, which is an external space, and a microphone 210 that is provided outside the housing 240 and picks up external sound.
- In addition, the headphones 200 may be equipped with various configurations of general headphones, such as cables for supplying music signals to the speakers 220a and 220b.
- the microphone 210 is a microphone provided for a noise canceling function by a so-called feedforward method.
- a noise cancellation signal may be generated based on the external sound collected by the microphone 210. Further, the external sound collected by the microphone 210 may be used for the monitor function.
- a monitor signal can be generated based on the external sound collected by the microphone 210.
- the music signal on which the noise cancellation signal is superimposed is output from the speakers 220a and 220b, so that music with reduced noise is provided to the user.
- the music signal on which the monitor signal is superimposed is output from the speakers 220a and 220b, so that external sound is provided to the user together with the music.
- a transfer function of the user's listening environment is calculated as listening environment characteristic information based on the external sound collected by the microphone 210.
- a transfer function can be obtained by outputting a music signal from the speaker 220a toward the listening environment and using the music signal as a measurement audio signal (measurement signal).
- a correlation function between the output music signal and the collected sound signal can be acquired as listening environment characteristic information.
- the autocorrelation function of the collected sound signal can be acquired as the listening environment characteristic information.
- Filtering based on the acquired transfer function or correlation function is performed on the music signal, so that the user can be provided with music that blends more naturally with external sound in consideration of the acoustic characteristics of the external environment, giving the user a sense of openness.
- FIG. 6 is a block diagram illustrating a configuration example of an acoustic adjustment system according to the second embodiment.
- the acoustic adjustment system 20 according to the second embodiment includes a microphone 210, speakers 220 a and 220 b, and a control unit 230.
- The acoustic adjustment system 20 according to the second embodiment corresponds to the acoustic adjustment system 10 according to the first embodiment with the functions of the listening environment characteristic information acquisition unit 131 and the music signal processing unit 132 changed. Therefore, in the following description of the configuration of the acoustic adjustment system 20, differences from the acoustic adjustment system 10 according to the first embodiment will be mainly described, and detailed descriptions of overlapping items will be omitted.
- the microphone 210 collects a sound and converts the sound into an electric signal, thereby acquiring a signal corresponding to the sound (that is, a sound collection signal).
- The microphone 210 corresponds to the microphone 210 shown in FIG. 5.
- the microphone 210 collects predetermined measurement sound as external sound.
- The measurement sound includes sound related to the music signal output from the speaker 220a to the outside, and uncorrelated noise such as environmental noise.
- The collected sound signal from the microphone 210 is appropriately amplified by the amplifier 211, converted into a digital signal by the ADC 212, and then input to the listening environment characteristic information acquisition unit 231, the monitor signal generation unit 133, and the noise cancellation signal generation unit 134 of the control unit 230, which will be described later.
- The speakers 220a and 220b output sound corresponding to a sound signal by vibrating their diaphragms according to the sound signal.
- the speakers 220a and 220b correspond to the speakers 220a and 220b shown in FIG.
- the speaker 220b is provided inside the housing 240 and outputs a music signal reflecting the acoustic characteristics of the listening environment to the user's ear. From the speaker 220b, a noise cancellation signal and / or a monitor signal may be output superimposed on the music signal.
- the speaker 220a outputs a music signal toward an external space (that is, a listening environment).
- the music signal output from the speaker 220a may be a music signal before signal processing (for example, before performing filtering) provided from an external device (for example, various playback devices), for example.
- the music signal output from the speaker 220a only needs to have a known characteristic, and may be a music signal after signal processing. Similar to the speaker 120 according to the first embodiment, DACs 222a and 222b and amplifiers 221a and 221b are provided in front of the speakers 220a and 220b, respectively.
- the control unit 230 is configured by various processors such as a CPU and a DSP, for example, and executes various signal processes performed in the acoustic adjustment system 20.
- the control unit 230 includes a listening environment characteristic information acquisition unit 231, a music signal processing unit 232, a monitor signal generation unit 133, and a noise cancellation signal generation unit 134 as its functions.
- Each function of the control unit 230 can be realized by a processor constituting the control unit 230 operating according to a predetermined program.
- the processor constituting the control unit 230 may be mounted on the headphones 200 shown in FIG. 5 or an information processing device different from the headphones 200 shown in FIG. 5 (for example, a mobile terminal such as a smartphone carried by the user). It may be mounted on.
- the function of the control unit 230 may be executed by a processor of an information processing apparatus such as a server provided on a network (so-called cloud).
- When the processor constituting the control unit 230 is mounted on a portable terminal or server other than the headphones 200, the headphones 200 and the portable terminal or server transmit and receive various kinds of information to and from each other, whereby various processes in the acoustic adjustment system 20 may be performed.
- The functions of the monitor signal generation unit 133 and the noise cancellation signal generation unit 134 are the same as the functions of these components in the first embodiment.
- the listening environment characteristic information acquisition unit 231 acquires listening environment characteristic information representing the acoustic characteristics of the listening environment based on the external sound collected by the microphone 210.
- Specifically, based on the measurement sound collected by the microphone 210, the listening environment characteristic information acquisition unit 231 can acquire, as the listening environment characteristic information, the transfer function of the listening environment, a correlation function between the output music signal and the collected sound signal, and/or an autocorrelation function of uncorrelated noise.
- The listening environment characteristic information acquisition unit 231 provides the acquired listening environment characteristic information to the music signal processing unit 232. Similar to the listening environment characteristic information acquisition unit 131 according to the first embodiment, the listening environment characteristic information acquisition unit 231 can start acquiring the listening environment characteristic information at the timing when the listening environment characteristic information acquisition condition is detected. The function of the listening environment characteristic information acquisition unit 231 will be described in detail below in (2-3. About the listening environment characteristic information acquisition unit).
- the music signal processing unit 232 performs predetermined signal processing on the music signal based on the listening environment characteristic information acquired by the listening environment characteristic information acquisition unit 231.
- the music signal processing unit 232 performs filtering on the music signal based on the transfer function and the correlation function that can be acquired by the listening environment characteristic information acquisition unit 231.
- For example, by filtering the music signal with a filter whose characteristics reflect the transfer function and/or correlation function of the listening environment, the music signal processing unit 232 can add reverberation characteristics according to the external environment (for example, initial reflection time, reverberation time, and the like) to the music signal.
- the music signal processing unit 232 can impart frequency characteristics according to the external environment to the music signal based on the transfer function and / or correlation function of the listening environment, for example, using an equalizer.
- the music signal that has been subjected to signal processing by the music signal processing unit 232 is appropriately adjusted in gain by the variable amplifier 150a, then output from the speaker 220b via the DAC 222b and the amplifier 221b, and provided to the user.
- Further, the music signal after the signal processing may be output to the speaker 220b in a state where the noise cancellation signal and/or the monitor signal are superimposed by the adder 160, as shown in FIG. 6.
- the function of the music signal processing unit 232 will be described in detail in (2-4. Music signal processing unit).
- the configuration of the acoustic adjustment system 20 according to the second embodiment has been described above.
- Note that the various signal processes in the acoustic adjustment system 20, in particular each process in the control unit 230, may be executed by, for example, one processor or one information processing apparatus, or by a plurality of processors or a plurality of information processing apparatuses operating in cooperation.
- The device configuration that can realize the acoustic adjustment system 20 according to the second embodiment is not limited to the configuration shown in FIG. 6 and may be arbitrary.
- For example, the acoustic adjustment system 20 illustrated in FIG. 6 may be configured as an integrated device, and the device may include an external device (playback device) that provides a music signal to the control unit 230.
- As described above, based on the measurement sound collected by the microphone 210, the listening environment characteristic information acquisition unit 231 can acquire, as listening environment characteristic information, the transfer function of the listening environment, a correlation function between the output music signal and the collected sound signal, and/or an autocorrelation function of uncorrelated noise.
- the listening environment characteristic information acquisition unit 231 can have different configurations depending on the acquired listening environment characteristic information.
- Hereinafter, configurations of the listening environment characteristic information acquisition unit 231 according to the acquired listening environment characteristic information will be described: a configuration for acquiring a transfer function using a music signal as a measurement signal (2-3-1), a configuration for acquiring a correlation function using a music signal as a measurement signal, and a configuration for acquiring a correlation function using uncorrelated noise as a measurement signal. In FIGS. 7, 8, and 10 below, for convenience, different reference symbols (listening environment characteristic information acquisition units 231a, 231b, and 231c) are attached to the listening environment characteristic information acquisition unit 231 in order to describe its different configurations; these all correspond to the listening environment characteristic information acquisition unit 231 shown in FIG. 6.
- FIG. 7 is a block diagram illustrating a configuration example for acquiring a transfer function using a music signal as a measurement signal in the listening environment characteristic information acquisition unit 231.
- The listening environment characteristic information acquisition unit 231a includes a music signal characteristic calculation unit 261 and a reverberation characteristic estimation unit 262 as its functions. FIG. 7 illustrates the functional configuration of the listening environment characteristic information acquisition unit 231a, together with an extract of the configuration of the acoustic adjustment system 20 illustrated in FIG. 6 that relates to each function of the listening environment characteristic information acquisition unit 231.
- a music signal is output as a measurement signal from the speaker 220 a, and the music signal is collected by the microphone 210.
- Let S be a parameter representing the music signal used for measurement, let M be a transfer function combining the microphone 210, the amplifier 211, and the ADC 212, and let D be a transfer function combining the DAC 222a, the amplifier 221a, and the speaker 220a.
- the transfer functions M and D are both known values that can be determined at the time of design.
- Further, let Ha be the transfer function of the music signal output from the speaker 220a until it reaches the microphone 210 through the space in which the user is present (that is, the listening environment). Of the transfer function Ha, let H1 be the component corresponding to the path along which the sound wave travels directly from the speaker 220a to the microphone 210 without reflection by the walls of a room or the like, and let H2 be the components of the transfer function Ha other than H1. The transfer function H1 represents a component that is not influenced by the listening environment. On the other hand, the transfer function H2 changes according to the listening environment and represents the reflection components of the acoustic characteristics of the listening environment. H1 is a known value that can be acquired in advance, for example, by measurement in an anechoic room when the headphones 200 are designed. At this time, Ha, H1, and H2 satisfy the following formula (3): Ha = H1 + H2.
- a music signal is input as a measurement signal from an external device (playback device) to the music signal characteristic calculation unit 261.
- the music signal characteristic calculation unit 261 buffers and Fourier-transforms the music signal with a predetermined frame length in response to a measurement start trigger. Thereby, the music signal characteristic calculation unit 261 obtains the parameter S (Source) representing the above-described music signal.
- the music signal characteristic calculation unit 261 provides the acquired parameter S to the reverberation characteristic estimation unit 262. Further, the music signal characteristic calculation unit 261 provides the music signal to the DAC 222a.
- the music signal is output from the speaker 220a via the DAC 222a and the amplifier 221a.
- the music signal output from the speaker 220a is collected by the microphone 210.
- the music signal collected by the microphone 210 (ie, the collected sound signal) is input to the reverberation characteristic estimation unit 262 via the amplifier 211 and the ADC 212.
- In response to the trigger for starting measurement, the reverberation characteristic estimation unit 262 buffers and Fourier-transforms the collected sound signal with the same frame length as the music signal characteristic calculation unit 261.
- a signal obtained as a result of the calculation by the reverberation characteristic estimation unit 262 can be expressed as “M * D * S * Ha”.
- Accordingly, the transfer function Ha can be expressed as the following formula (4): Ha = (M * D * S * Ha) / (M * D * S). Further, from formula (3), the transfer function H2 can be expressed as the following formula (5): H2 = Ha − H1 = (M * D * S * Ha) / (M * D * S) − H1.
- Here, the transfer functions H1, M, and D are known values.
- Further, the parameter S is calculated by the music signal characteristic calculation unit 261. Accordingly, by performing the calculation shown in formula (5) using these known values, the reverberation characteristic estimation unit 262 can calculate the transfer function H2 of the listening environment.
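- A minimal NumPy sketch of the calculation in formulas (3) to (5) (array and function names are illustrative, not the patent's): the recorded spectrum M * D * S * Ha is divided by the known quantities M, D and the measured music spectrum S to obtain Ha, from which the known direct-path component H1 is subtracted.

```python
import numpy as np

def estimate_h2_from_music(recorded_freq, S, M, D, H1, eps=1e-12):
    """Sketch of formulas (4) and (5): the microphone observes
    M * D * S * Ha with Ha = H1 + H2 (formula (3)), so
      Ha = recorded / (M * D * S)   (formula (4))
      H2 = Ha - H1                  (formula (5)).
    M, D and H1 are known design-time responses; S is computed from the
    buffered music signal.  Names are illustrative."""
    Ha = recorded_freq / (M * D * S + eps)  # formula (4)
    return Ha - H1                          # formula (5)
```

Unlike the first embodiment, the sound source here is parameterized (S is measured), so a single microphone suffices.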
- the reverberation characteristic estimation unit 262 provides the calculated listening space transfer function H 2 to the music signal processing unit 232.
- In the music signal processing unit 232, various kinds of filtering are then performed on the music signal using the transfer function H2 of the listening space.
- the trigger for the music signal characteristic calculation unit 261 and the reverberation characteristic estimation unit 262 to start buffering the music signal and the collected sound signal may be that the listening environment characteristic information acquisition condition has been detected.
- The above is a description of a configuration example for acquiring a transfer function using a music signal as a measurement signal.
- In the first embodiment described above, the transfer function H2 is calculated using the user's uttered voice. Since the characteristics of the uttered voice are unknown, as described above in (1-3. About the listening environment characteristic information acquisition unit), the first embodiment calculates the transfer function H2 without parameterizing the sound source.
- On the other hand, in the second embodiment, since the characteristics of the speaker 220a and the like (the transfer function D described above) are known, it becomes possible to determine the transfer function H2 using a parameterized sound source.
- In this way, the transfer function H2 of the listening space can be calculated by various methods depending on the configuration of the headphones 100 and 200 (more specifically, the number and positions of the speakers 120, 220a, and 220b and the microphones 110 and 210).
- FIG. 8 is a block diagram illustrating a configuration example for acquiring a correlation function using a music signal as a measurement signal in the listening environment characteristic information acquisition unit 231.
- The listening environment characteristic information acquisition unit 231b includes an output signal buffer unit 271, a sound collection signal buffer unit 272, and a correlation function calculation unit 273 as its functions. FIG. 8 illustrates the functional configuration of the listening environment characteristic information acquisition unit 231b, together with an extract of the configuration of the acoustic adjustment system 20 illustrated in FIG. 6 that relates to each function of the listening environment characteristic information acquisition unit 231.
- a music signal is output as a measurement signal from the speaker 220 a, and the music signal is collected by the microphone 210. Then, a correlation function between the output music signal and the collected music signal (that is, the collected sound signal) is calculated.
- the correlation function can be said to reflect the acoustic characteristics of the listening environment.
- a music signal input from an external device is output from the speaker 220a via the DAC 222a and the amplifier 221a.
- the output signal buffer unit 271 buffers the music signal for a predetermined time by a measurement start trigger.
- the output signal buffer unit 271 provides the buffered music signal to the correlation function calculation unit 273.
- the music signal output from the speaker 220a is collected by the microphone 210.
- the collected sound signal from the microphone 210 is input to the collected sound signal buffer unit 272 via the amplifier 211 and the ADC 212.
- The sound collection signal buffer unit 272 buffers the collected sound signal in synchronization with the output signal buffer unit 271, at the same timing and for the same duration as the buffering of the music signal by the output signal buffer unit 271.
- the collected sound signal buffer unit 272 provides the buffered collected sound signal to the correlation function calculation unit 273.
- the correlation function calculation unit 273 calculates a correlation function between the music signal at the time of output buffered by the output signal buffer unit 271 and the music signal at the time of sound collection buffered by the sound collection signal buffer unit 272.
- An example of a correlation function that can be calculated by the correlation function calculator 273 is shown in FIG.
- FIG. 9 is a schematic diagram illustrating an example of a correlation function that can be calculated by the correlation function calculation unit 273. As shown in FIG. 9, the correlation function has peaks at predetermined times t1, t2, t3, ..., tn.
- the peak appearing at time t1 corresponds to the component transmitted directly from the speaker 220a to the microphone 210, while the peaks appearing at time t2 and later correspond to components that are output from the speaker 220a, reflected by a wall, the ceiling, or the like of the listening environment,
- and then input to the microphone 210. The components corresponding to the peaks appearing at time t2 and later decay exponentially over time and approach zero. From the times up to time tn and the slope of the decay, the main factors of the reverberation characteristics, such as the time until the initial reflected sound and the reverberation time, can be estimated.
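- as a hedged illustration of the processing attributed to the correlation function calculation unit 273 and the peak analysis above, the following Python sketch cross-correlates an output signal with a collected signal and locates the peaks; all names, signal lengths, and the threshold are illustrative assumptions, not taken from the publication.

```python
# Illustrative cross-correlation and peak picking (all names and values
# are assumptions for this sketch, not from the publication).

def cross_correlation(output_sig, collected_sig):
    """Cross-correlation R(tau) for non-negative lags, normalized by the
    energy of the output signal so a perfect direct path gives 1.0."""
    n = len(output_sig)
    energy = sum(x * x for x in output_sig) or 1.0
    return [sum(output_sig[i] * collected_sig[i + tau] for i in range(n)) / energy
            for tau in range(len(collected_sig) - n + 1)]

def find_peaks(r, threshold=0.3):
    """Lags where the correlation is a local maximum above a threshold."""
    return [i for i in range(1, len(r) - 1)
            if r[i] > threshold and r[i] >= r[i - 1] and r[i] > r[i + 1]]

# Toy collected signal: direct sound arriving at lag 2 plus a weaker
# reflection (e.g. from a wall) arriving at lag 7.
out = [1.0, -0.5, 0.25]
collected = [0.0] * 12
for lag, gain in ((2, 1.0), (7, 0.4)):
    for i, s in enumerate(out):
        collected[lag + i] += gain * s

r = cross_correlation(out, collected)
peaks = find_peaks(r)
# Lag difference between the first two peaks estimates the initial
# reflection time (t2 - t1) in samples.
initial_reflection_lag = peaks[1] - peaks[0]
```

On this toy input the first peak corresponds to the direct path and the second to the reflection; their lag difference is an estimate of the initial reflection time in samples.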
- the correlation function calculation unit 273 provides the calculated correlation function to the music signal processing unit 232.
- in the music signal processing unit 232, for example, the reverberation characteristics of the listening environment are estimated from the correlation function as described above, and various kinds of filter processing are performed on the music signal using the estimated reverberation characteristics.
- the trigger for the output signal buffer unit 271 and the collected sound signal buffer unit 272 to start buffering the music signal and the collected signal may be that the listening environment characteristic information acquisition condition has been detected.
- a correlation function of music signals before and after output may be acquired instead of a transfer function.
- in the above description, the music signal is used as the measurement signal, but the second embodiment is not limited to such an example.
- as the measurement sound, for example, a dedicated sound whose frequency band, volume level, and the like are adjusted for measurement may be used. For example, more stable characteristic information can be obtained by using a dedicated measurement sound in which the frequency band and volume level are appropriately adjusted to sufficient levels.
- FIG. 10 is a block diagram illustrating a configuration example for acquiring a correlation function using uncorrelated noise as a measurement signal in the listening environment characteristic information acquisition unit 231.
- the listening environment characteristic information acquisition unit 231c has an autocorrelation function calculation unit 281 as its function. Note that FIG. 10 illustrates the functional configuration of the listening environment characteristic information acquisition unit 231c, and the configuration related to each function of the listening environment characteristic information acquisition unit 231 is extracted and shown from the configuration of the acoustic adjustment system 20 illustrated in FIG. 6.
- in this configuration, external sound including uncorrelated noise is collected by the microphone 210, and an autocorrelation function is calculated for the collected uncorrelated noise.
- since the collected uncorrelated noise includes components that reflect the acoustic characteristics of the listening environment, such as reverberation components, the autocorrelation function can be said to reflect the acoustic characteristics of the listening environment.
- external sound including uncorrelated noise is collected by the microphone 210.
- the sound collected by the microphone 210 (i.e., the collected sound signal) is provided to the autocorrelation function calculation unit 281.
- the autocorrelation function calculator 281 buffers the collected sound signal for a predetermined time and calculates an autocorrelation function in response to a measurement start trigger.
- the autocorrelation function Rx(τ) of the noise itself becomes 1 at time 0 and 0 at all other times.
- when the noise source of the uncorrelated noise is x(t) and the collected signal is y(t),
- the cross-correlation function of x(t) and y(t) is expressed by the convolution of the autocorrelation function Rx(τ) of the noise, which is the input signal, with the spatial impulse response.
- since Rx(τ) becomes a delta function, the autocorrelation function of the impulse response can be obtained as the autocorrelation function of y(t).
- the autocorrelation function calculation unit 281 repeatedly performs the above autocorrelation function calculation a plurality of times. Then, an autocorrelation function to be finally adopted is determined based on the calculation result. For example, the autocorrelation function calculation unit 281 can employ, as the autocorrelation function, one having a good S / N ratio among the plurality of calculated autocorrelation functions. Further, for example, the autocorrelation function calculation unit 281 can employ an average value of a plurality of calculated autocorrelation functions as the autocorrelation function.
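- the repeated calculation and averaging described above can be sketched as follows; this is an illustrative outline (function names, buffer sizes, and the averaging policy are assumptions), not the publication's implementation.

```python
import random

def autocorrelation(x, max_lag):
    """Autocorrelation R(tau), tau = 0..max_lag, normalized so R(0) == 1.
    (Illustrative; names and normalization are assumptions.)"""
    r0 = sum(v * v for v in x) or 1.0
    return [sum(x[i] * x[i + tau] for i in range(len(x) - tau)) / r0
            for tau in range(max_lag + 1)]

def average_autocorrelation(segments, max_lag):
    """Average the autocorrelations of several buffered noise segments,
    one way of stabilizing the repeated measurements."""
    acs = [autocorrelation(seg, max_lag) for seg in segments]
    return [sum(ac[tau] for ac in acs) / len(acs) for tau in range(max_lag + 1)]

# Toy data: four buffered segments of pseudo-random "uncorrelated noise".
random.seed(0)
segments = [[random.uniform(-1.0, 1.0) for _ in range(256)] for _ in range(4)]
avg_ac = average_autocorrelation(segments, max_lag=8)
# For uncorrelated noise, R(0) == 1 and the other lags stay near zero;
# reverberation in a real collected signal would raise the nonzero lags.
```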
- further, the autocorrelation function calculation unit 281 may extract a common component of the calculated plurality of autocorrelation functions, or, when a collected sound signal includes a pitch component, exclude the autocorrelation function calculated based on that collected sound signal and determine the autocorrelation function to be finally adopted based on the remaining autocorrelation functions.
- the autocorrelation function calculation unit 281 provides the music signal processing unit 232 with the autocorrelation function finally determined to be adopted.
- in the music signal processing unit 232, for example, the reverberation characteristics of the listening environment are estimated from the autocorrelation function, and various filter processes are performed on the music signal using the estimated reverberation characteristics.
- the trigger for the autocorrelation function calculation unit 281 to start calculating the buffer of the collected sound signal and the autocorrelation function may be that the listening environment characteristic information acquisition condition has been detected.
- the correlation function representing the acoustic characteristics of the listening environment can be measured using uncorrelated noise as a measurement signal. Therefore, it is not necessary to output a measurement signal such as a music signal, and the listening environment characteristic information can be acquired more easily.
- uncorrelated noise is used for the measurement signal, it is possible to acquire the acoustic characteristics of the listening environment using not only the autocorrelation function but also the cross spectrum method in the frequency domain.
- the method of acquiring a correlation function based on uncorrelated noise described above can be executed as long as a microphone capable of collecting external sound is provided, and there is no need to output a measurement signal toward the outside. Therefore, even a device such as the headphone 100 according to the first embodiment shown in FIG. 1, which does not have a speaker that outputs sound toward the outside, can perform the method of acquiring a correlation function based on uncorrelated noise described above as long as it has a configuration capable of collecting external sound, such as the microphone 110a.
- FIG. 11 is a schematic diagram illustrating an example of a correlation function that can be acquired by the listening environment characteristic information acquisition unit 231.
- FIG. 12 is a block diagram illustrating an example of a functional configuration of the music signal processing unit 232.
- FIG. 13 is a block diagram illustrating a configuration example of the reverberation component adding unit 293 included in the music signal processing unit 232.
- here, a case will be described in which the music signal processing unit 232 estimates and parameterizes the reverberation time, the initial reflection time, the ratio of reverberant sound, and the frequency characteristics based on the correlation function obtained by the method described in (2-3-2. Configuration for acquiring correlation function using music signal as measurement signal) or (2-3-3. Configuration for acquiring correlation function using uncorrelated noise as measurement signal), and reflects them in the music signal.
- the music signal processing unit 232 may be configured to have other functions.
- for example, the music signal processing unit 232 may be configured by an FIR filter in the same manner as the music signal processing unit 132 according to the first embodiment described in (1-4. About the music signal processing unit), and the FIR filter may be used to filter the music signal based on the transfer function H2 obtained by the method described in (2-3-1. Configuration for acquiring transfer function using music signal as measurement signal).
- the music signal processing unit 132 according to the first embodiment may also have the configuration shown in FIG. 12. In that case, the music signal processing unit 132 estimates and parameterizes the above-described characteristics based on the transfer function H2 acquired by the method described in the first embodiment, and may reflect each characteristic in the music signal using the configuration shown in FIG. 12.
- reverberation characteristics of the listening environment, such as the reverberation time and the initial reflection time, can be estimated from the obtained correlation function.
- an example in which the correlation function shown in FIG. 9 is measured for a longer time is shown in FIG. 11.
- as shown in FIG. 11, from the correlation function measured by the listening environment characteristic information acquisition unit 231, a component corresponding to the direct sound, a component corresponding to the initial reflected sound, and a component corresponding to the reverberant sound can be observed.
- various acoustic characteristics such as reverberation time, initial reflection time, ratio of reverberation sound (rear reverberation sound), and frequency characteristics are estimated and parameterized from the characteristics of the correlation function.
- FIG. 12 shows an example of a functional configuration of the music signal processing unit 232 according to the second embodiment.
- the music signal processing unit 232 includes a parameter generation unit 291, an EQ unit 292, and a reverberation component addition unit 293 as functions thereof.
- note that FIG. 12 illustrates the functional configuration of the music signal processing unit 232, and the configuration related to each function of the music signal processing unit 232 is extracted and shown from the configuration of the acoustic adjustment system 20 illustrated in FIG. 6.
- the parameter generation unit 291 generates parameters representing various acoustic characteristics, such as the reverberation time, the initial reflection time, the ratio of rear reverberation, and the frequency characteristics, based on the correlation function measured by the listening environment characteristic information acquisition unit 231.
- the EQ unit 292 is configured by, for example, an equalizer, and adjusts the frequency characteristic of the music signal based on the parameter relating to the frequency characteristic generated from the correlation function by the parameter generation unit 291.
- the reverberation component adding unit 293 is configured by, for example, the IIR (Infinite Impulse Response) filter shown in FIG. 13, and gives the reverberation characteristics of the listening environment to the music signal based on the parameters generated by the parameter generation unit 291.
- FIG. 13 shows a configuration example of an IIR filter that can configure the reverberation component adding unit 293.
- the length of the delay line and the amplifier coefficient ER (ER1 to ERn) shown in FIG. 13 may reflect a parameter related to the initial reflection time generated from the correlation function.
- the parameters relating to the reverberation time generated from the correlation function may be reflected in the coefficients g (g1 to g4) and the coefficients τ (τ1 to τ4) in the comb filters (Comb filter1 to Comb filter4) shown in FIG. 13.
- by applying the IIR filter reflecting these various parameters to the music signal, the acoustic characteristics of the listening environment, such as reverberation, are artificially added to the music signal.
- note that when the newly calculated parameters do not differ sufficiently from the current set values, the parameters in the EQ unit 292 and the IIR filter need not be updated.
- the parameter generation unit 291 may update the parameters only when they differ sufficiently from the current set values.
- by not updating the parameters unnecessarily, the characteristics of the EQ unit 292 and the IIR filter are prevented from being changed frequently. Therefore, music can be provided to the user more stably.
- the initial reflection time can be defined as the time T1 (e.g., t2 − t1 shown in FIG. 9) between the first peak of the correlation function (the direct sound) and the next peak of the correlation function.
- the parameter generation unit 291 calculates the initial reflection time T1 from the correlation function and provides it to the reverberation component addition unit 293.
- in the reverberation component adding unit 293, the length of the delay line and the coefficients ER shown in FIG. 13 are changed according to T1. Thereby, the characteristic of the initial reflection time of the listening environment can be reflected in the music signal.
- as the coefficients ER, values obtained from the correlation function or the impulse response may be used directly.
- alternatively, several types of values applicable as the coefficients ER may be prepared in advance, and the one closest to the characteristics obtained from the correlation function may be selected and used.
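- as a small hedged sketch of how the initial reflection time T1 could be mapped to a delay-line length, and how a prepared value closest to a measured characteristic could be selected, in Python (the sample rate and the preset table are illustrative assumptions):

```python
def delay_samples(t1_sec, sample_rate):
    """Delay-line length (samples) for the initial reflection time T1."""
    return round(t1_sec * sample_rate)

def nearest_preset(value, presets):
    """Pick the prepared value closest to the measured characteristic."""
    return min(presets, key=lambda p: abs(p - value))

fs = 48000                          # assumed sample rate
t1 = 0.012                          # e.g. a measured T1 of 12 ms
length = delay_samples(t1, fs)      # delay-line length in samples
er_presets = [0.2, 0.4, 0.6, 0.8]   # illustrative preset ER coefficients
er = nearest_preset(0.55, er_presets)
```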
- the reverberation time Tr can be estimated by performing Schroeder integration of the obtained correlation function to obtain an energy decay curve.
- Schroeder integration is shown in the following formula (6): <S^2(t)> = ∫_t^∞ h^2(τ) dτ ... (6)
- <S^2(t)> is the ensemble average of the reverberation waveforms
- h (t) is a correlation function or impulse response acquired by the listening environment characteristic information acquisition unit 231.
- the parameter generation unit 291 can obtain the attenuation curve of the energy of the reverberation component by performing the calculation shown in the above mathematical formula (6).
- An example of the attenuation curve of the energy of the reverberation component calculated by the parameter generation unit 291 is shown in FIG.
- FIG. 14 is a diagram illustrating an example of an attenuation curve of reverberation component energy.
- the reverberation time Tr is defined as the time at which the sound energy in the measurement environment decays to −60 (dB).
- in the example shown in FIG. 14, the energy decreases by 30 (dB) in 1 (sec) (that is, the energy decay curve has a slope of −30 (dB/sec)), and thus
- the reverberation time Tr is estimated to be 2 (sec). For example, when the listening environment is a relatively large indoor space such as a music hall, the reverberation time is considered to be longer.
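- the Schroeder integration and slope-based estimation described above can be sketched as follows; the synthetic impulse response, the time step, and the use of the −30 dB point for the slope are illustrative assumptions:

```python
import math

def schroeder_decay_db(h, floor=1e-12):
    """Energy decay curve in dB: backward (Schroeder) integration of h(t)^2,
    normalized so the curve starts at 0 dB."""
    tail_energy = 0.0
    rev = []
    for v in reversed(h):
        tail_energy += v * v
        rev.append(tail_energy)
    total = rev[-1] or floor
    return [10.0 * math.log10(max(e / total, floor)) for e in reversed(rev)]

def reverberation_time(decay_db, dt):
    """Tr = time to decay by 60 dB, from the slope down to the -30 dB point
    (mirroring the -30 dB/sec example in the text)."""
    for i, d in enumerate(decay_db):
        if d <= -30.0:
            slope = d / (i * dt)      # dB per second (negative)
            return -60.0 / slope
    raise ValueError("decay curve never reaches -30 dB")

# Synthetic exponential decay whose true Tr is 2 seconds.
dt = 0.001
tr_true = 2.0
h = [math.exp(-3.0 * math.log(10.0) * (t * dt) / tr_true) for t in range(4000)]
decay = schroeder_decay_db(h)
tr_est = reverberation_time(decay, dt)
```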
- the delay and gain in the filter are changed in accordance with the reverberation time Tr obtained by the parameter generation unit 291.
- the coefficient g and the coefficient ⁇ in the comb filter can be changed using the reverberation time Tr.
- the parameter generation unit 291 can calculate the coefficient g and the coefficient ⁇ in the comb filter based on the reverberation time Tr obtained from the correlation function.
- the coefficient g, the coefficient τ, and the reverberation time Tr are in the relationship shown in the following formula (7): Tr = 3τ / log10(1/g) ... (7)
- the parameter generation unit 291 applies, to each comb filter, a combination of the coefficient g and the coefficient τ such that the left side of formula (7) is 2.
- when the coefficient τ of each comb filter is set to a fixed value, the coefficient g of each comb filter is a value that satisfies the following formula (8): g = 10^(−3τ/Tr) ... (8)
- the parameter generation unit 291 provides the coefficient g obtained in this way and the coefficient ⁇ set as a fixed value to the reverberation component adding unit 293.
- by applying the values calculated by the parameter generation unit 291 to the coefficients g and the coefficients τ in the comb filters shown in FIG. 13, the reverberation time characteristic of the listening environment can be reflected in the music signal.
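- a hedged sketch of deriving each comb filter's coefficient g from the reverberation time Tr, using the well-known Schroeder comb-filter relation Tr = 3τ/log10(1/g) (equivalently g = 10^(−3τ/Tr)), which is consistent with the description of formulas (7) and (8); the delay values τ below are illustrative:

```python
import math

def comb_gain(tau_sec, tr_sec):
    """Feedback coefficient g so that a comb filter with delay tau decays
    by 60 dB in tr_sec (Schroeder relation, assumed here to correspond to
    formulas (7) and (8))."""
    return 10.0 ** (-3.0 * tau_sec / tr_sec)

def comb_reverb_time(tau_sec, g):
    """Inverse relation: Tr = 3*tau / log10(1/g)."""
    return 3.0 * tau_sec / math.log10(1.0 / g)

tr = 2.0                                   # e.g. a target Tr of 2 sec
taus = [0.0297, 0.0371, 0.0411, 0.0437]    # illustrative comb-filter delays (sec)
gains = [comb_gain(t, tr) for t in taus]
```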
- the D value is a value indicating the ratio of the initial energy (within 50 ms) to the energy of the entire sound, and is represented by the following formula (9): D = ∫_0^50ms h^2(t) dt / ∫_0^∞ h^2(t) dt ... (9)
- h (t) is a correlation function or impulse response acquired by the listening environment characteristic information acquisition unit 231.
- by setting characteristics such as the DRY gain, ER gain, Reverb gain, and WET gain shown in FIG. 13 so that the initial reflection component and the rear reverberation component given by the IIR filter correspond to the value, such as the D value, representing the ratio of the initial energy of the reverberation to the total energy obtained from the measurement result,
- the proportion of the reverberant sound in the listening environment can be reflected in the music signal.
- the parameter generation unit 291 calculates the D value from formula (9) and, based on the D value, can calculate parameters relating to characteristics such as the DRY gain, ER gain, Reverb gain, and WET gain that satisfy the above conditions.
- the parameter generation unit 291 provides the parameters obtained in this way to the reverberation component adding unit 293.
- by applying the parameters calculated by the parameter generation unit 291 to the DRY gain, ER gain, Reverb gain, and WET gain shown in FIG. 13, the characteristic of the reverberant sound ratio of the listening environment can be reflected in the music signal.
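- the D value computation of formula (9) can be sketched as follows for a discrete impulse response; the sample rate and the toy response are illustrative assumptions:

```python
def d_value(h, sample_rate, early_ms=50.0):
    """D value: energy within the first early_ms of h divided by the total
    energy of h (discrete version of formula (9))."""
    n_early = int(sample_rate * early_ms / 1000.0)
    total = sum(v * v for v in h)
    return sum(v * v for v in h[:n_early]) / total if total else 0.0

fs = 1000                          # assumed sample rate (Hz)
# Toy response: strong direct sound in the first 50 ms, weak tail after.
h = [1.0] * 50 + [0.1] * 450
d = d_value(h, fs)
```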
- the parameter generation unit 291 can estimate the frequency characteristic of the listening environment from the correlation function acquired by the listening environment characteristic information acquisition unit 231, generate a parameter that reflects the frequency characteristic, and provide the parameter to the EQ unit 292.
- the EQ unit 292 reflects the frequency characteristics of the listening environment on the music signal. For example, when high frequency attenuation is observed in the estimated frequency characteristics of the listening environment, the EQ unit 292 can execute processing for attenuating the high frequency of the music signal.
- the frequency characteristics of music radiated into a space vary depending on the transfer function of the space. Therefore, for example, it is also possible to acquire the frequency characteristic of the listening environment from the transfer function H2 acquired by the method described in (2-3-1. Configuration for acquiring a transfer function using a music signal as a measurement signal) and to reflect the frequency characteristic in the music signal using the IIR filter shown in FIG. 13. In this case, for example, the parameter generation unit 291 can Fourier-transform the transfer function H2 to obtain a parameter for reflecting its frequency-amplitude characteristics in the IIR filter. By appropriately setting the parameter acquired by the parameter generation unit 291 in the IIR filter, a frequency characteristic that simulates the characteristics of the listening environment can be given to the music signal.
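- obtaining a frequency-amplitude characteristic from a transfer function by Fourier transform can be sketched as follows (a naive DFT for illustration; a practical implementation would use an FFT):

```python
import cmath

def dft_magnitude(h):
    """|H(k)|, k = 0..N-1, by a naive DFT of the sequence h."""
    n = len(h)
    return [abs(sum(h[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

# A pure delay leaves the magnitude response flat (|H(k)| == 1 for all k),
# so any deviation from flatness reflects the environment's coloration.
h_delay = [0.0, 0.0, 1.0, 0.0]
mags = dft_magnitude(h_delay)
```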
- alternatively, it is also possible to add frequency characteristics to the music signal by convolving the music signal with the FIR filter shown in FIG. 4 based on the transfer function H2 acquired by the method described above in (2-3-1. Configuration for acquiring transfer function using music signal as measurement signal).
- the function of the music signal processing unit 232 according to the second embodiment has been described above.
- as the parameters described above, the values calculated from the transfer function and/or correlation function need not be used as they are; instead, a value closer to the calculated value may be selected from among several values prepared in advance.
- in this case, the relationship between the category and the parameter is stored as a table in a storage unit (not shown in FIG. 6) provided in the acoustic adjustment system 20, for example.
- the parameter generation unit 291 determines a category corresponding to the listening environment from the characteristics of the transfer function and / or correlation function acquired by the listening environment characteristic information acquisition unit 231.
- the parameter generation unit 291 can select a parameter according to the listening environment by referring to a table indicating the relationship between the category and the parameter stored in the storage unit.
- as described above, in the second embodiment, the transfer function and/or correlation function of the listening space is acquired based on a predetermined measurement sound,
- and the music signal is corrected based on the transfer function and/or the correlation function.
- thereby, more open music that blends better with external sounds can be provided to the user.
- various sounds, such as music signals and noise, can be used as the measurement sound. Therefore, by using an appropriate measurement sound according to the listening environment, such as using noise or a measurement sound in an inaudible band in an environment where it is difficult to output music to the outside, the listening environment characteristic information can be measured in various listening environments.
- FIG. 15 is a flowchart illustrating an example of a processing procedure of the information processing method according to the first and second embodiments.
- here, as an example, an information processing method will be described for the case where it is performed in the acoustic adjustment system 20 according to the second embodiment having the listening environment characteristic information acquisition unit 231a described above in (2-3-1. Configuration for acquiring transfer function using music signal as measurement signal), that is, the case where a music signal is used as the measurement signal and the transfer function H2 is acquired as the listening environment characteristic information.
- however, the information processing method according to the first and second embodiments is not limited to such an example; as described above as the first and second embodiments, the user's uttered voice or uncorrelated noise may be used as the measurement signal, and a correlation function may be acquired as the listening environment characteristic information.
- a music signal is output toward the listening environment (step S101).
- the process shown in step S101 can be executed, for example, by driving the speaker 220a under the control of the control unit 230 shown in FIG. 6. Note that, as described above in (1. First embodiment) and (2-3-3. Configuration for obtaining correlation function using uncorrelated noise as measurement signal), when a music signal is not used as the measurement signal, the process shown in step S101 may be omitted.
- in step S103, it is determined whether or not the listening environment characteristic information acquisition condition has been detected.
- the processes shown in step S103 and step S105 described below can be executed by the listening environment characteristic information acquisition unit 231 shown in FIG. 6, for example. In the case of the first embodiment, the processes shown in steps S103 and S105 may be executed, for example, by the listening environment characteristic information acquisition unit 131 shown in FIG. 2.
- as the listening environment characteristic information acquisition condition, for example, power-on of the control unit 230, a prescribed timer count, or the like can be detected.
- when the processor constituting the control unit 230 is mounted on a portable terminal separate from the headphones 200, the listening environment characteristic information acquisition condition may be, for example, that a user's movement is detected by a sensor mounted on the portable terminal, or that an operation input to the portable terminal is detected.
- the user's utterance may be detected as the listening environment characteristic information acquisition condition.
- if it is determined in step S103 that the listening environment characteristic information acquisition condition has not been detected, the process does not proceed to the subsequent processing but waits until the listening environment characteristic information acquisition condition is detected. On the other hand, if it is determined in step S103 that the listening environment characteristic information acquisition condition has been detected, the process proceeds to step S105.
- in step S105, listening environment characteristic information is acquired.
- in the process shown in step S105, the transfer function H2 of the listening environment is calculated by the method described above in (2-3-1. Configuration for acquiring transfer function using music signal as measurement signal). In the case of the first embodiment, in the process shown in step S105, the transfer function H2 of the listening environment is calculated based on the sound signal picked up by the microphone 110 in accordance with the user's utterance. As described above in (2-3-2. Configuration for acquiring correlation function using music signal as measurement signal) and (2-3-3. Configuration for acquiring correlation function using uncorrelated noise as measurement signal), a correlation function may be acquired as the listening environment characteristic information.
- in step S107, parameters for correcting the music signal are calculated based on the acquired listening environment characteristic information.
- the processes shown in step S107 and in steps S109 and S111 described later can be executed by, for example, the music signal processing unit 232 shown in FIG. 6.
- in the process shown in step S107, for example, parameters that determine the characteristics of the EQ unit 292 and the reverberation component adding unit 293 (that is, the IIR filter) shown in FIG. 12, as described in (2-4. About the music signal processing unit), are calculated.
- in the case of the first embodiment, the processes shown in steps S107, S109, and S111 can be executed by, for example, the music signal processing unit 132 shown in FIG. 2.
- in this case, in the process shown in step S107, for example, parameters that determine the characteristics of the FIR filter shown in FIG. 4, as described in (1-4. About the music signal processing unit), are calculated.
- in step S109, it is determined whether the calculated parameters are sufficiently different from the current set values.
- in step S109, for example, the difference between the current parameters set in the EQ unit 292 and/or the IIR filter described above and the new parameters obtained by the current measurement is compared with a predetermined threshold value. In the first embodiment, the same processing is performed on the FIR filter.
- if it is determined in step S109 that the calculated parameters are not sufficiently different from the current set values, the process returns to step S103 without proceeding to the subsequent processing. This is because, if the characteristics of the EQ unit 292, the IIR filter, and the FIR filter are changed frequently, the music signal may fluctuate, which may impair the user's listening experience. On the other hand, if it is determined in step S109 that the calculated parameters are sufficiently different from the current set values, the process proceeds to step S111.
- in step S111, the parameters of the EQ unit 292 and/or the IIR filter are updated using the parameters calculated in step S107.
- the parameters of the FIR filter are updated using the parameters calculated in step S107, and the acoustic characteristics of the listening environment are reflected on the music signal by the FIR filter.
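- the update decision in steps S109 and S111 can be sketched as follows; the threshold value and the parameter vectors are illustrative assumptions:

```python
def should_update(current, new, threshold):
    """True when any parameter deviates from its current value by >= threshold."""
    return any(abs(c - n) >= threshold for c, n in zip(current, new))

def maybe_update(current, new, threshold=0.05):
    """Step S109/S111 sketch: keep the current parameters unless the newly
    calculated ones differ sufficiently."""
    return list(new) if should_update(current, new, threshold) else list(current)

current = [0.90, 0.30, 0.10]
barely_changed = [0.91, 0.31, 0.09]   # within threshold -> no update (back to S103)
changed = [0.70, 0.30, 0.10]          # sufficiently different -> update (S111)
```

Skipping small changes keeps the filter characteristics stable, matching the rationale given for step S109.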
- the reverberation characteristics and frequency characteristics of the listening environment are given to the music signal as the acoustic characteristics of the listening environment.
- however, the first and second embodiments are not limited to this example, and other acoustic characteristics of the listening environment may be given to the music signal.
- a modification in which the sound pressure of the music signal is adjusted according to the listening environment will be described.
- FIG. 16 is a block diagram illustrating a configuration example of an acoustic adjustment system according to a modification example in which the sound pressure is adjusted.
- FIG. 17 is a block diagram illustrating an example of a functional configuration of a music signal processing unit according to a modification in which the sound pressure is adjusted.
- the acoustic adjustment system shown in FIG. 16 corresponds to the acoustic adjustment system 10 according to the first embodiment shown in FIG. 2 with the function of the music signal processing unit 132 changed; its other configurations
- and functions are the same as those of the acoustic adjustment system 10. Therefore, in the following description of the acoustic adjustment system according to the present modification, differences from the acoustic adjustment system 10 according to the first embodiment will be mainly described, and detailed descriptions of overlapping items will be omitted.
- the acoustic adjustment system 30 includes a microphone 110, a speaker 120, and a control unit 330.
- the functions of the microphone 110 and the speaker 120 are the same as those of these components shown in FIG. 2.
- the control unit 330 is configured by various processors such as a CPU and a DSP, for example, and executes various signal processes performed in the acoustic adjustment system 30.
- the control unit 330 includes a listening environment characteristic information acquisition unit 131, a music signal processing unit 332, a monitor signal generation unit 133, and a noise cancellation signal generation unit 134 as its functions.
- Each function of the control unit 330 can be realized by a processor constituting the control unit 330 operating according to a predetermined program.
- the functions of the listening environment characteristic information acquisition unit 131, the monitor signal generation unit 133, and the noise cancellation signal generation unit 134 are the same as the functions of these configurations shown in FIG. 2.
- a sound collection signal from the microphone 110 is input to the music signal processing unit 332 together with the music signal. Further, the gains of the variable amplifier 150a provided for the music signal and the variable amplifier 150b provided for the monitor signal are adjusted according to the sound pressure ratio, calculated by the music signal processing unit 332, between the sound pressure of the sound related to the music signal and the sound pressure of the external sound.
- FIG. 17 shows an example of the functional configuration of the music signal processing unit 332.
- the music signal processing unit 332 includes an FIR filter 351 and a sound pressure ratio calculation unit 352 as its functions.
- note that FIG. 17 illustrates the functional configuration of the music signal processing unit 332, and the configuration related to each function of the music signal processing unit 332 is extracted and shown from the configuration of the acoustic adjustment system 30 illustrated in FIG. 16.
- the FIR filter 351 corresponds to the FIR filter according to the first embodiment shown in FIG. 4 (that is, the music signal processing unit 132 shown in FIG. 2). Since the function of the FIR filter 351 is the same as the function of the FIR filter (music signal processing unit 132) according to the first embodiment, detailed description thereof is omitted. Thus, it can be said that the music signal processing unit 332 according to this modification has both the function of the music signal processing unit 132 according to the first embodiment and the function of the sound pressure ratio calculation unit 352.
- the sound pressure ratio calculation unit 352 analyzes the sound pressure of the music signal and the sound pressure of the collected sound signal (i.e., the sound pressure of the external sound),
- and calculates the sound pressure of the music signal and the sound pressure of the signal related to the external sound (i.e., the monitor signal) so that the sound pressure ratio between the sound related to the music signal and the external sound becomes an appropriate value. For example, when the external sound is excessively loud, the sound pressures of both are calculated so as to relatively reduce the sound pressure of the external sound. In this case, the sound pressure of the music signal may be increased, or the sound pressure of the monitor signal may be decreased. Thereby, a situation where the music is buried in the external sound is prevented.
- the sound pressure ratio is calculated so as to relatively reduce the sound pressure of the sound related to the music signal.
- the sound pressure of the music signal may be decreased, or the sound pressure of the monitor signal may be increased. Thereby, the situation where music leaks outside the headphones 100 is prevented.
- the appropriate value may be set in advance by a designer or the like of the acoustic adjustment system 30, or may be set by the user as appropriate depending on the situation.
- the parameter calculated by the sound pressure ratio calculation unit 352 is reflected in the gain of the variable amplifier 150a provided for the music signal and the variable amplifier 150b provided for the monitor signal. Thereby, the sound pressure ratio between the music signal and the monitor signal corresponding to the external sound is appropriately controlled.
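As a rough sketch of the sound pressure ratio calculation described above, the two amplifier gains could be derived from the RMS levels of the music signal and the collected external sound. The 6 dB target ratio and the even split of the correction between the two variable amplifiers are assumptions for illustration only, not values from the embodiment:

```python
import numpy as np

def balance_gains(music, external, target_ratio_db=6.0):
    """Sketch: compute gain adjustments (in dB) for the music signal and the
    monitor signal so that the music sits target_ratio_db above the external
    (monitor) sound. target_ratio_db is a hypothetical 'appropriate value'."""
    def rms_db(x):
        # RMS level in dB; small epsilon avoids log10(0)
        return 20.0 * np.log10(np.sqrt(np.mean(np.square(x))) + 1e-12)

    diff = rms_db(music) - rms_db(external)  # current ratio in dB
    correction = target_ratio_db - diff      # dB still to make up
    # Split the correction between the two variable amplifiers:
    # raise the music by half, lower the monitor by half.
    music_gain_db = correction / 2.0
    monitor_gain_db = -correction / 2.0
    return music_gain_db, monitor_gain_db

# Equal levels -> current ratio is 0 dB, so the 6 dB target is split 3 / -3.
mg, og = balance_gains(np.full(100, 0.1), np.full(100, 0.1))
```

A real implementation would smooth these gains over time to avoid audible pumping; that is omitted here for brevity.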
- the configuration of the sound adjustment system according to this modification has been described above with reference to FIGS. 16 and 17.
- the sound pressure ratio between the sound related to the music signal and the external sound is automatically adjusted according to the external sound in the listening environment. Therefore, music and external sound are provided to the user with a more comfortable volume balance, and the convenience for the user can be improved.
- the sound pressure of the external sound is calculated by the sound pressure ratio calculation unit 352 of the music signal processing unit 332, but the present modification is not limited to such an example.
- the sound pressure of the external sound may be calculated by analyzing the collected sound signal by the listening environment characteristic information acquisition unit 131 as a part of the listening environment characteristic information.
- in the first and second embodiments described above, the listening environment characteristic information is acquired based on the external sound collected by the microphones 110 and 210 every time the listening environment characteristic information acquisition condition is detected.
- however, the first and second embodiments are not limited to this example.
- for example, the listening environment characteristic information for each place (that is, for each listening environment) may be associated with the position information of that place and compiled into a database (DB).
- the listening environment characteristic information acquisition unit may then acquire, from the DB, the listening environment characteristic information of the place corresponding to the current position of the user.
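The association between position information and listening environment characteristic information can be sketched as a map keyed by a coarsened position, so that nearby measurements resolve to the same place. The grid size, field names, and coordinates below are illustrative assumptions, not part of the embodiment:

```python
# Sketch of the listening environment characteristic information DB: entries
# are keyed by a coarsened position so that nearby measurements map to one
# place. The 0.001-degree grid and the dict payload are hypothetical.

def place_key(lat, lon, grid=0.001):
    """Quantize a position to a grid cell used as the DB key."""
    return (round(lat / grid), round(lon / grid))

listening_env_db = {}

def store_characteristic(lat, lon, characteristic):
    """Associate a measured characteristic with the place at (lat, lon)."""
    listening_env_db[place_key(lat, lon)] = characteristic

def lookup_characteristic(lat, lon):
    """Return the stored characteristic for the user's current position,
    or None if this place has not been measured yet."""
    return listening_env_db.get(place_key(lat, lon))

store_characteristic(35.6581, 139.7414, {"reverb_time_s": 0.8})
hit = lookup_characteristic(35.65812, 139.74141)  # same grid cell
miss = lookup_characteristic(35.70, 139.80)       # unmeasured place
```

A production DB would of course live on a server (the "cloud" arrangement described below) rather than in an in-process dict.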
- FIG. 18 is a block diagram illustrating a configuration example of an acoustic adjustment system according to the present modification. The acoustic adjustment system 40 shown in FIG. 18 differs from the acoustic adjustment system 10 according to the first embodiment shown in FIG. 1 in that a communication unit 170, a portable terminal 50, and a listening environment characteristic information DB 60, described later, are added; the functions of the other components are the same as those of the acoustic adjustment system 10. Therefore, in the following description of the acoustic adjustment system according to the present modification, differences from the acoustic adjustment system 10 according to the first embodiment will be mainly described, and detailed description of overlapping items will be omitted.
- the acoustic adjustment system 40 includes a microphone 110, a speaker 120, a control unit 130, a communication unit 170, a portable terminal 50, and a listening environment characteristic information DB 60.
- since the functions of the microphone 110, the speaker 120, and the control unit 130 are the same as the functions of these configurations shown in FIG. 1, detailed descriptions thereof are omitted.
- the communication unit 170 is configured by a communication device that can transmit and receive various types of information to and from an external device.
- a communication device that can function as the communication unit 170 is mounted on the headphones 100 illustrated in FIG. 1.
- the communication unit 170 can transmit and receive various types of information to and from the mobile terminal 50.
- the communication between the communication unit 170 and the portable terminal 50 may be wireless communication based on a communication method such as Bluetooth (registered trademark), or may be wired communication.
- the communication unit 170 transmits the listening environment characteristic information acquired based on the collected sound signal by the listening environment characteristic information acquisition unit 131 of the control unit 130 to the portable terminal 50.
- the communication unit 170 can receive the listening environment characteristic information of the place corresponding to the current position of the user from the portable terminal 50 and can provide it to the listening environment characteristic information acquisition unit 131.
- the portable terminal 50 is an information processing apparatus carried by the user, such as a smartphone or a tablet PC (Personal Computer).
- the portable terminal 50 has a communication unit 510 and a position detection unit 520 as its functions.
- the portable terminal 50 may further have various functions that portable terminals such as common smartphones and tablet PCs have.
- the mobile terminal 50 can include a configuration such as a control unit that performs various signal processing and controls the operation of the mobile terminal 50, and a storage unit that stores various types of information processed in the mobile terminal 50.
- the driving of the communication unit 510 and the position detection unit 520 described above can be controlled by a processor that constitutes the control unit operating according to a predetermined program. Note that, as described in (1-2. System Configuration) above, the control unit 130 may be realized as a function of the mobile terminal 50.
- the communication unit 510 is configured by a communication device that can transmit and receive various types of information to and from an external device.
- the communication unit 510 can transmit and receive various types of information to and from the communication unit 170.
- the communication unit 510 receives the listening environment characteristic information acquired from the collected sound signal by the listening environment characteristic information acquisition unit 131 of the control unit 130 and transmitted from the communication unit 170.
- the communication unit 510 transmits the received listening environment characteristic information to the listening environment characteristic information DB 60 in association with the position information detected by the position detection unit 520 (which corresponds to the current position information of the portable terminal 50, that is, the current position information of the user).
- the communication unit 510 receives, from the listening environment characteristic information stored in the listening environment characteristic information DB 60, the listening environment characteristic information of the place corresponding to the current position of the user, and transmits it to the communication unit 170.
- the position detection unit 520 is configured by a position detection sensor such as a GPS sensor, and detects the current position of the mobile terminal 50, that is, the current position of the user.
- the position detection unit 520 provides the communication unit 510 with the detected current position information of the user.
- the communication unit 510 can associate the listening environment characteristic information acquired based on the collected sound signal by the listening environment characteristic information acquisition unit 131 of the control unit 130 with the current position information of the user, and transmit them to the listening environment characteristic information DB 60.
- the listening environment characteristic information DB 60 is configured by a storage device capable of storing various types of information, such as a magnetic storage device such as an HDD (Hard Disk Drive), a semiconductor storage device, an optical storage device, or a magneto-optical storage device.
- the listening environment characteristic information DB 60 manages location information of a place (that is, listening environment) and listening environment characteristic information in the listening environment in association with each other.
- the listening environment characteristic information DB 60 may be installed, for example, on a network (a so-called cloud), and the portable terminal 50 can communicate with the listening environment characteristic information DB 60 via a communication network constructed according to various communication methods.
- the control unit 130 may be realized as a function of an information processing apparatus such as a server provided on the cloud together with the listening environment characteristic information DB 60.
- the listening environment characteristic information acquired by a plurality of users is collected and stored in the listening environment characteristic information DB 60 at any time.
- the position detection unit 520 detects the current position of the user, and the position information of the current position is transmitted to the listening environment characteristic information DB 60.
- in the listening environment characteristic information DB 60, whether or not the listening environment characteristic information of the place corresponding to the current position of the user is stored is searched for based on the position information.
- when the listening environment characteristic information of the place corresponding to the current position of the user is stored in the listening environment characteristic information DB 60, that listening environment characteristic information is transmitted from the listening environment characteristic information DB 60, via the communication units 170 and 510, to the listening environment characteristic information acquisition unit 131 of the control unit 130.
- the music signal processing unit 132 filters the music signal using the listening environment characteristic information provided from the listening environment characteristic information acquisition unit 131, and can thereby impart to the music signal the acoustic characteristic of the listening environment where the user is currently located.
- in this way, the listening environment characteristic information can be acquired from the past history without performing the series of processes for acquiring the listening environment characteristic information based on the collected external sound. Therefore, the processes of calculating the transfer function, the correlation function, and the like can be omitted, and the configuration of the control unit 130 can be further simplified.
- the configuration of the acoustic adjustment system according to this modification has been described above with reference to FIG.
- according to the present modification, by referring to the listening environment characteristic information DB 60, the listening environment characteristic information can be acquired even without performing the series of processes of collecting external sound and acquiring the listening environment characteristic information based on the collected external sound. Therefore, the processing performed by the control unit 130 can be further simplified.
- in the listening environment characteristic information DB 60, parameters for determining the characteristics of the EQ unit 292 and the reverberation component adding unit 293 (that is, the IIR filter), or parameters for determining the characteristics of the FIR filter shown in FIG. 12, may be stored in association with the position information of the listening environment together with the transfer function and the correlation function. Accordingly, the music signal processing unit 132 can correct the music signal using the parameters stored in the listening environment characteristic information DB 60 without itself calculating the parameters for correcting the music signal. Therefore, the process of calculating these parameters can be omitted, and the configuration of the control unit 130 can be further simplified.
- when a plurality of pieces of listening environment characteristic information have been acquired for the same place, the listening environment characteristic information DB 60 may store a statistical value (for example, an average value) of the listening environment characteristic information of that place as the listening environment characteristic information. Thereby, the precision of the listening environment characteristic information stored in the listening environment characteristic information DB 60 can be improved.
- even when listening environment characteristic information for the current position is available from the DB, the listening environment characteristic information acquisition unit 131 may newly acquire listening environment characteristic information based on the external sound. Then, the listening environment characteristic information stored in the listening environment characteristic information DB 60 is compared with the newly acquired listening environment characteristic information; if the values are greatly different, the information in the listening environment characteristic information DB 60 may be updated, and the music signal may be filtered based on the newly acquired listening environment characteristic information. This is because, even in the same place, the listening environment characteristic information can change as the surrounding environment changes, so the latest listening environment characteristic information is considered more reliable.
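The per-place statistics and the update-on-large-difference behavior described above might be sketched as follows for a single scalar characteristic. The running-average form, the threshold of 10, and all names are assumptions for illustration, not part of the embodiment:

```python
def update_db_entry(stored_avg, stored_count, new_value, outlier_threshold=10.0):
    """Sketch of the statistics kept per place: a running average of a scalar
    characteristic. When a new measurement differs from the stored average by
    more than outlier_threshold, trust the new measurement (the surrounding
    environment may have changed) and restart the statistics."""
    if abs(new_value - stored_avg) > outlier_threshold:
        # Environment likely changed: reset to the latest measurement.
        return new_value, 1
    # Otherwise fold the new measurement into the running average.
    count = stored_count + 1
    avg = stored_avg + (new_value - stored_avg) / count
    return avg, count

# A consistent measurement refines the average; an outlier resets it.
avg, n = update_db_entry(stored_avg=2.0, stored_count=3, new_value=4.0)
reset_avg, reset_n = update_db_entry(2.0, 3, 20.0)  # far apart -> reset
```

The incremental-average form avoids storing every past measurement while yielding the same mean.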
- FIG. 19 is a schematic diagram illustrating a configuration example of the headphones according to the present modification.
- FIG. 20 is a block diagram illustrating a configuration example of an acoustic adjustment system according to this modification.
- in the following, a case where the present modification is applied to the first embodiment will be described as an example.
- however, the present modification can be similarly applied to the second embodiment.
- the headphones 100a include a pair of housings 140L and 140R that are respectively attached to the left and right ears of the user, and an arch-shaped support member 180 that connects the housings 140L and 140R.
- that is, the headphones 100a are so-called overhead headphones.
- a pair of microphones 110a and 110b are provided on the outer and inner sides of the housings 140L and 140R, respectively.
- the user's uttered voice is collected by the microphones 110a and 110b provided in each of the left and right housings 140L and 140R, and the listening environment characteristic information is acquired based on the left and right collected sound signals.
- the acoustic adjustment system 70 includes a left channel (left ch) acoustic adjustment unit 10L, a right channel (right ch) acoustic adjustment unit 10R, and a listening environment characteristic information integration unit 190.
- the configurations of the left ch acoustic adjustment unit 10L and the right ch acoustic adjustment unit 10R are the same as that of the acoustic adjustment system 10 according to the first embodiment shown in FIG. 1. Accordingly, detailed description of the configurations of the left ch acoustic adjustment unit 10L and the right ch acoustic adjustment unit 10R, which have already been described in the first embodiment, is omitted here.
- the listening environment characteristic information acquisition unit 131 of the left ch acoustic adjustment unit 10L acquires listening environment characteristic information based on the user's uttered voice collected by the microphones 110a and 110b of the housing 140L attached to the left ear of the user.
- similarly, the listening environment characteristic information acquisition unit 131 of the right ch acoustic adjustment unit 10R acquires listening environment characteristic information based on the user's uttered voice collected by the microphones 110a and 110b of the housing 140R attached to the right ear of the user.
- the microphone 110 shown in FIG. 20 corresponds to the microphones 110a and 110b shown in FIG. 19, and schematically shows these together.
- the listening environment characteristic information acquired by the listening environment characteristic information acquisition unit 131 of the left channel sound adjustment unit 10L is also referred to as left channel listening environment characteristic information.
- the listening environment characteristic information acquired by the listening environment characteristic information acquisition unit 131 of the right channel sound adjustment unit 10R is also referred to as right channel listening environment characteristic information.
- as in the first embodiment, the listening environment characteristic information acquired by each listening environment characteristic information acquisition unit 131 may be provided directly to the corresponding music signal processing unit 132, and the music signal processing unit 132 may appropriately filter the music signal based on that listening environment characteristic information.
- in that case, the music signals of the left and right channels are independently corrected based on the listening environment characteristic information acquired by the left and right listening environment characteristic information acquisition units 131.
- in the present modification, however, the listening environment characteristic information acquired by each listening environment characteristic information acquisition unit 131 is not provided directly to the music signal processing unit 132; instead, the music signals of the left and right channels can be corrected using listening environment characteristic information obtained by integrating the left ch listening environment characteristic information and the right ch listening environment characteristic information.
- specifically, the left ch listening environment characteristic information acquired by the listening environment characteristic information acquisition unit 131 of the left ch sound adjustment unit 10L is not provided directly to the music signal processing unit 132, but is first provided to the listening environment characteristic information integration unit 190.
- similarly, the right ch listening environment characteristic information acquired by the listening environment characteristic information acquisition unit 131 of the right ch sound adjustment unit 10R is provided to the listening environment characteristic information integration unit 190.
- the listening environment characteristic information integration unit 190 integrates the left channel listening environment characteristic information and the right channel listening environment characteristic information, and finally calculates the listening environment characteristic information used for correcting the music signal.
- the listening environment characteristic information integration unit 190 can calculate the integrated listening environment characteristic information by averaging the left channel listening environment characteristic information and the right channel listening environment characteristic information.
- however, the integration process performed by the listening environment characteristic information integration unit 190 is not limited to this example; the integration process only needs to calculate new listening environment characteristic information based on the left ch listening environment characteristic information and the right ch listening environment characteristic information, and may be another process such as multiplying each by a weighting factor and adding them.
- the listening environment characteristic information integration unit 190 provides the calculated listening environment characteristic information to the music signal processing unit 132 of the left channel sound adjustment unit 10L and the right channel sound adjustment unit 10R, respectively.
- Each music signal processing unit 132 performs a filtering process on the music signal based on the integrated listening environment characteristic information. As described above, by integrating a plurality of pieces of listening environment characteristic information acquired independently of each other, more accurate listening environment characteristic information can be obtained. Further, by performing the filtering process on the music signal using the listening environment characteristic information after integration, it is possible to execute the filtering process that more reflects the characteristics of the listening environment.
- when the left ch listening environment characteristic information and the right ch listening environment characteristic information differ greatly from each other, the listening environment characteristic information integration unit 190 may refrain from calculating the integrated listening environment characteristic information and may not provide listening environment characteristic information to the music signal processing units 132; that is, the music signal processing units 132 need not update their filter correction parameters.
- when the left ch listening environment characteristic information and the right ch listening environment characteristic information differ significantly, it is conceivable that at least one of the values is abnormal and that the acquisition of the listening environment characteristic information by the left or right listening environment characteristic information acquisition unit 131 was not performed normally.
- in such a case, by not updating the parameters in the music signal processing units 132, execution of filter processing based on abnormal listening environment characteristic information can be prevented.
- in this way, the listening environment characteristic information integration unit 190 may judge the certainty of the acquired left ch listening environment characteristic information and right ch listening environment characteristic information based on the left ch listening environment characteristic information and the right ch listening environment characteristic information.
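The behavior described above for the listening environment characteristic information integration unit 190, averaging or weighting when the two sides agree and skipping the parameter update when they differ significantly, can be sketched for a scalar characteristic as follows. The relative mismatch test, the weight, and the threshold of 0.5 are illustrative assumptions only:

```python
def integrate_lr(left, right, w_left=0.5, mismatch_threshold=0.5):
    """Sketch of the integration: combine left- and right-channel scalar
    characteristics by a weighted sum, and return None (meaning: do not
    update the filter parameters) when the two sides differ so much that
    at least one measurement is probably abnormal."""
    if abs(left - right) > mismatch_threshold * max(abs(left), abs(right), 1e-12):
        return None  # significantly different: skip the parameter update
    # Weighted sum; w_left = 0.5 reduces to a plain average.
    return w_left * left + (1.0 - w_left) * right

ok = integrate_lr(0.8, 1.0)   # close values -> averaged
bad = integrate_lr(0.1, 1.0)  # large mismatch -> no update
```

Returning a sentinel rather than a value keeps the caller's "do not update the filter parameters" path explicit.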
- the configuration of the acoustic adjustment system according to the present modification, in which microphones are provided in each housing, has been described above.
- as described above, in the present modification, the left ch listening environment characteristic information and the right ch listening environment characteristic information are each acquired based on the collected sound signals collected by the microphones 110 provided in the pair of housings 140L and 140R.
- the music signal is filtered using the listening environment characteristic information obtained by integrating the left channel listening environment characteristic information and the right channel listening environment characteristic information. Therefore, it is possible to execute a filter process that more reflects the characteristics of the listening environment.
- the function of the listening environment characteristic information integration unit 190 can be realized by various processors such as a CPU and a DSP operating according to a predetermined program.
- the processor that realizes the function of the listening environment characteristic information integration unit 190 may be the same as the processor that constitutes the control unit 130 of either the left ch sound adjustment unit 10L or the right ch sound adjustment unit 10R, or may be a processor separate from the processors constituting the control units 130.
- FIG. 21 is a block diagram illustrating an example of a hardware configuration of the information processing apparatus according to the first and second embodiments.
- the illustrated information processing apparatus 900 can realize, for example, the above-described acoustic adjustment systems 10, 20, 30, and 70 illustrated in FIGS. 1, 6, 16, and 20 in the case where each is realized as an integrated apparatus, or the portable terminal 50 shown in FIG. 18, and the like.
- the illustrated information processing apparatus 900 can also realize, for example, the control units 130, 230, and 330 shown in FIGS. 1, 6, 16, 18, and 20, or the configuration of an information processing apparatus, such as a portable terminal or a server, on which the function of the listening environment characteristic information integration unit 190 shown in FIG. 20 is installed.
- the information processing apparatus 900 includes a CPU 901, a ROM (Read Only Memory) 903, and a RAM (Random Access Memory) 905. Further, the information processing apparatus 900 may include a host bus 907, a bridge 909, an external bus 911, an interface 913, an input device 915, an output device 917, a storage device 919, a drive 921, a connection port 923, a communication device 925, and a sensor 935.
- the information processing apparatus 900 may include a processing circuit called DSP or ASIC (Application Specific Integrated Circuit) instead of or in addition to the CPU 901.
- the CPU 901 functions as an arithmetic processing unit and a control unit, and controls all or a part of the operation in the information processing apparatus 900 according to various programs recorded in the ROM 903, the RAM 905, the storage apparatus 919, or the removable recording medium 927.
- the ROM 903 stores programs used by the CPU 901, calculation parameters, and the like.
- the RAM 905 primarily stores programs used in the execution of the CPU 901, parameters that change as appropriate during the execution, and the like.
- the CPU 901, the ROM 903, and the RAM 905 are connected to each other by a host bus 907 configured by an internal bus such as a CPU bus.
- the host bus 907 is connected to an external bus 911 such as a PCI (Peripheral Component Interconnect / Interface) bus via a bridge 909.
- the CPU 901 corresponds to, for example, the control units 130, 230, and 330 illustrated in FIGS. 1, 6, 16, 18, and 20. Further, the CPU 901 can constitute a listening environment characteristic information integration unit 190 shown in FIG.
- the input device 915 is a device operated by the user, such as a mouse, a keyboard, a touch panel, a button, a switch, and a lever.
- the input device 915 may be, for example, a remote control device that uses infrared rays or other radio waves, or may be an external connection device 929 such as a mobile phone that supports the operation of the information processing device 900.
- the input device 915 includes an input control circuit that generates an input signal based on information input by the user and outputs the input signal to the CPU 901.
- the input device 915 may be a voice input device such as a microphone.
- the user operates the input device 915 to input various data and instruct processing operations to the information processing device 900.
- for example, the input device 915 can correspond to the microphones 110 and 210 in the above-described devices.
- the output device 917 is a device that can notify the user of the acquired information visually or audibly.
- the output device 917 can be, for example, a display device such as an LCD, a PDP (plasma display panel), an organic EL display, a lamp, or an illumination, an audio output device such as a speaker and headphones, and a printer device.
- the output device 917 outputs the result obtained by the processing of the information processing device 900 as a video such as text or an image, or outputs it as a sound or sound.
- for example, the audio output device can correspond to the speakers 120, 220a, and 220b described above.
- the storage device 919 is a data storage device configured as an example of a storage unit of the information processing device 900.
- the storage device 919 includes, for example, a magnetic storage device such as an HDD (Hard Disk Drive), a semiconductor storage device, an optical storage device, or a magneto-optical storage device.
- the storage device 919 stores programs executed by the CPU 901, various data, various data acquired from the outside, and the like.
- the storage device 919 includes various types of information processed by the control units 130, 230, and 330 shown in FIGS. 1, 6, 16, 18, and 20, and the control units 130, 230, Various processing results obtained by 330 can be stored.
- the storage device 919 can store information such as a music signal input from an external device (playback device), acquired listening environment characteristic information, a parameter for correcting the calculated music signal, and the like.
- the drive 921 is a reader / writer for a removable recording medium 927 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, and is built in or externally attached to the information processing apparatus 900.
- the drive 921 reads information recorded on the attached removable recording medium 927 and outputs the information to the RAM 905.
- the drive 921 also writes records to the attached removable recording medium 927.
- for example, the drive 921 can correspond to the playback device in the above-described devices. That is, the drive 921 can read out and reproduce music content recorded on the removable recording medium 927, and provide a music signal corresponding to the music content to the control units 130, 230, and 330 shown in FIGS. 1, 6, 16, 18, and 20. Further, for example, the drive 921 can read various information processed by the control units 130, 230, and 330 and various processing results by the control units 130, 230, and 330 from the removable recording medium 927, or write them to the removable recording medium 927.
- the connection port 923 is a port for directly connecting a device to the information processing apparatus 900.
- the connection port 923 can be, for example, a USB (Universal Serial Bus) port, an IEEE 1394 port, a SCSI (Small Computer System Interface) port, or the like.
- the connection port 923 may be an RS-232C port, an optical audio terminal, an HDMI (registered trademark) (High-Definition Multimedia Interface) port, or the like.
- by connecting the external connection device 929 to the connection port 923, various types of data can be exchanged between the information processing apparatus 900 and the external connection device 929. For example, various information processed by the control units 130, 230, and 330 shown in FIGS. 1, 6, 16, 18, and 20 and various processing results by the control units 130, 230, and 330 may be transmitted to and received from the external connection device 929 through the connection port 923.
- the communication device 925 is a communication interface configured by a communication device for connecting to the communication network 931, for example.
- the communication device 925 can be, for example, a communication card for wired or wireless LAN (Local Area Network), Bluetooth, or WUSB (Wireless USB). Further, the communication device 925 may be a router for optical communication, a router for ADSL (Asymmetric Digital Subscriber line), a modem for various communication, or the like.
- the communication device 925 transmits and receives signals and the like using a predetermined protocol such as TCP / IP with the Internet and other communication devices, for example.
- the communication network 931 connected to the communication device 925 is a network connected by wire or wireless, such as the Internet, home LAN, infrared communication, radio wave communication, satellite communication, or the like.
- the communication device 925 includes various types of information processed by the control units 130, 230, and 330 illustrated in FIGS. 1, 6, 16, 18, and 20 and various types of processing performed by the control units 130, 230, and 330. The results may be transmitted to and received from other external devices via the communication network 931.
- the communication device 925 corresponds to the communication units 170 and 510 illustrated in FIG.
- the sensor 935 is various sensors such as an acceleration sensor, a gyro sensor, a geomagnetic sensor, an optical sensor, a sound sensor, and a distance measuring sensor.
- the sensor 935 acquires, for example, information on the state of the information processing apparatus 900 itself, such as its attitude, and information on the surrounding environment of the information processing apparatus 900, such as the brightness and noise around it.
- the sensor 935 may also include a GPS sensor that receives GPS signals and measures the latitude, longitude, and altitude of the device. For example, the sensor 935 corresponds to the position detection unit 520 illustrated in FIG.
- Each component described above may be configured using a general-purpose member, or may be configured by hardware specialized for the function of each component. Such a configuration can be appropriately changed according to the technical level at the time of implementation.
- a computer program for realizing the functions of the information processing apparatus 900 described above, in particular the functions of the control units 130, 230, and 330, can be created and implemented in a PC or the like.
- a computer-readable recording medium storing such a computer program can be provided.
- the recording medium is, for example, a magnetic disk, an optical disk, a magneto-optical disk, a flash memory, or the like.
- the above computer program may be distributed via a network, for example, without using a recording medium.
- listening space characteristic information representing the acoustic characteristics of the listening space is acquired based on external sound.
- the acoustic characteristics of the listening space are imparted to the music signal based on the acquired listening space characteristic information. Therefore, music that is more open and blends more naturally with external sound can be provided to the user. For example, even when the user wears sealed headphones with high sound insulation, it is possible to listen to music with a BGM-like feel while still hearing external sound.
- (1) An information processing apparatus including: a listening environment characteristic information acquisition unit that acquires listening environment characteristic information indicating a characteristic of a listening environment based on external sound picked up by at least one microphone; and a music signal processing unit that filters a music signal with a filter characteristic based on the acquired listening environment characteristic information.
- (2) The information processing apparatus according to (1), wherein the external sound is an uttered voice of a user, and the listening environment characteristic information acquisition unit acquires the listening environment characteristic information based on the uttered voice picked up by a first microphone via the user's body and the uttered voice picked up via the listening environment by a second microphone different from the first microphone.
- (3) The information processing apparatus according to (2), wherein the listening environment characteristic information is a transfer function until the uttered voice reaches the second microphone via the listening environment.
- (4) The information processing apparatus according to (1), wherein the external sound is a predetermined measurement sound output from a speaker toward the listening environment, and the listening environment characteristic information acquisition unit acquires the listening environment characteristic information based on the measurement sound picked up by the microphone.
- (5) The information processing apparatus according to (4), wherein the listening environment characteristic information is a transfer function until the measurement sound reaches the microphone via the listening environment.
- (6) The information processing apparatus according to (4), wherein the listening environment characteristic information is a correlation function between the measurement sound before being output from the speaker and the measurement sound picked up by the microphone via the listening environment.
- (7) The information processing apparatus according to (1), wherein the listening environment characteristic information acquisition unit acquires the listening environment characteristic information based on uncorrelated noise picked up by the microphone.
- (8) The information processing apparatus according to (7), wherein the listening environment characteristic information is an autocorrelation function of the uncorrelated noise.
- (9) The information processing apparatus according to any one of (1) to (8), wherein the music signal processing unit imparts at least a reverberation characteristic of the listening environment to the music signal.
- (10) The information processing apparatus according to (9), wherein the music signal processing unit imparts the reverberation characteristic of the listening environment to the music signal by convolving the transfer function of the external sound in the listening environment with the music signal using an FIR filter.
- (11) The information processing apparatus according to any one of (1) to (9), wherein the music signal processing unit filters the music signal using a parameter indicating an acoustic characteristic of the listening environment calculated from the listening environment characteristic information.
- (12) The information processing apparatus according to (11), wherein the music signal processing unit includes an IIR filter reflecting a parameter indicating the reverberation characteristic of the listening environment and an equalizer reflecting a parameter indicating the frequency characteristic of the listening environment.
- (13) The information processing apparatus according to any one of (1) to (12), wherein the music signal processing unit adjusts a sound pressure ratio between the sound pressure of the sound related to the music signal and the sound pressure of the external sound.
- (14) An information processing method including: acquiring, by a processor, listening environment characteristic information indicating a characteristic of a listening environment based on external sound picked up by at least one microphone; and filtering, by the processor, a music signal with a filter characteristic based on the acquired listening environment characteristic information.
- (15) A program for causing a processor of a computer to realize: a function of acquiring listening environment characteristic information indicating a characteristic of a listening environment based on external sound picked up by at least one microphone; and a function of filtering a music signal with a filter characteristic based on the acquired listening environment characteristic information.
- Sound adjustment system; 50 Portable terminal; 60 Listening environment characteristic information DB; 100, 200 Headphones; 110, 110a, 110b, 210 Microphone; 120, 220a, 220b Speaker; 130, 230, 330 Control unit; 131, 231 Listening environment characteristic information acquisition unit; 132, 232, 332 Music signal processing unit; 133 Monitor signal generation unit; 134 Noise cancellation signal generation unit; 170, 510 Communication unit; 190 Listening environment characteristic information integration unit; 520 Position detection unit
Abstract
Description
1. First Embodiment
1-1. Overview of the First Embodiment
1-2. System Configuration
1-3. Listening Environment Characteristic Information Acquisition Unit
1-4. Music Signal Processing Unit
2. Second Embodiment
2-1. Overview of the Second Embodiment
2-2. System Configuration
2-3. Listening Environment Characteristic Information Acquisition Unit
2-3-1. Configuration for Acquiring a Transfer Function Using the Music Signal as the Measurement Signal
2-3-2. Configuration for Acquiring a Correlation Function Using the Music Signal as the Measurement Signal
2-3-3. Configuration for Acquiring a Correlation Function Using Uncorrelated Noise as the Measurement Signal
2-4. Music Signal Processing Unit
3. Information Processing Method
4. Modifications
4-1. Modification in Which the Sound Pressure Is Adjusted
4-2. Modification Using Listening Environment Characteristic Information Stored in a DB
4-3. Modification in Which Listening Environment Characteristic Information Is Acquired by Each of a Pair of Housings
5. Hardware Configuration
6. Conclusion
First, the first embodiment of the present disclosure will be described. In the first embodiment, speech uttered by a user wearing headphones (hereinafter also referred to as uttered voice) is picked up by a microphone as external sound. Listening environment characteristic information representing the acoustic characteristics of the space in which the user is present (hereinafter also referred to as the listening environment) is then acquired based on the picked-up uttered voice. Further, the audio signal of the music content (hereinafter also referred to as the music signal) is filtered with a filter characteristic based on the acquired listening environment characteristic information. As a result, music that reflects the acoustic characteristics of the listening environment and blends more naturally with external sound is provided to the user.
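As a minimal sketch of the processing just described, the fragment below filters a music signal with an impulse response measured from the listening environment — the time-domain counterpart of the transfer function used as listening environment characteristic information. This is illustrative only; the function and parameter names (including the dry/wet mix) are assumptions, not taken from the patent.

```python
import numpy as np

def apply_listening_environment(music, impulse_response, dry_wet=0.5):
    """Filter a music signal with a characteristic of the listening
    environment: FIR convolution with a measured impulse response."""
    wet = np.convolve(music, impulse_response)[: len(music)]
    # Mix the processed (wet) and original (dry) signals so the result
    # blends with external sound without losing clarity.
    return (1.0 - dry_wet) * music + dry_wet * wet

# Toy example: a direct path plus one echo arriving 3 samples later.
ir = np.array([1.0, 0.0, 0.0, 0.5])
music = np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0])
out = apply_listening_environment(music, ir, dry_wet=1.0)
```

The dry/wet mix is one simple way to impart the environment's character without fully smearing the source; the embodiments themselves describe convolution with an FIR filter.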
With reference to FIG. 1, a configuration example of the headphones according to the first embodiment will be described together with an overview of the first embodiment. FIG. 1 is a schematic diagram illustrating a configuration example of the headphones according to the first embodiment.
With reference to FIG. 2, the configuration of the sound adjustment system according to the first embodiment will be described. FIG. 2 is a block diagram illustrating a configuration example of the sound adjustment system according to the first embodiment.
With reference to FIG. 3, the function of the listening environment characteristic information acquisition unit 131 shown in FIG. 2 will be described. FIG. 3 is a block diagram illustrating an example of the functional configuration of the listening environment characteristic information acquisition unit 131.
With reference to FIG. 4, the function of the music signal processing unit 132 shown in FIG. 2 will be described. FIG. 4 is a block diagram illustrating a configuration example of the music signal processing unit 132.
Next, the second embodiment of the present disclosure will be described. In the second embodiment, a predetermined measurement sound is used as the external sound. As the measurement sound, sound related to the music signal, uncorrelated noise in the listening environment, or the like may be used. Listening environment characteristic information is acquired based on the measurement sound picked up by the microphone, and the music signal is filtered with a filter characteristic based on the acquired listening environment characteristic information. As a result, music that reflects the acoustic characteristics of the listening environment and blends more naturally with external sound is provided to the user.
With reference to FIG. 5, a configuration example of the headphones according to the second embodiment will be described together with an overview of the second embodiment. FIG. 5 is a schematic diagram illustrating a configuration example of the headphones according to the second embodiment.
With reference to FIG. 6, the configuration of the sound adjustment system according to the second embodiment will be described. FIG. 6 is a block diagram illustrating a configuration example of the sound adjustment system according to the second embodiment.
The function of the listening environment characteristic information acquisition unit 231 according to the second embodiment will now be described. As described above, the listening environment characteristic information acquisition unit 231 can acquire, as listening environment characteristic information, the transfer function of the listening environment, the correlation function between the output music signal and the picked-up signal, and/or the autocorrelation function of uncorrelated noise, based on the measurement sound picked up by the microphone 210. The listening environment characteristic information acquisition unit 231 can take different configurations depending on the information to be acquired. These configurations are described below in sections 2-3-1 (Configuration for Acquiring a Transfer Function Using the Music Signal as the Measurement Signal), 2-3-2 (Configuration for Acquiring a Correlation Function Using the Music Signal as the Measurement Signal), and 2-3-3 (Configuration for Acquiring a Correlation Function Using Uncorrelated Noise as the Measurement Signal). In FIGS. 7, 8, and 10 below, different reference numerals (231a, 231b, 231c) are attached to the listening environment characteristic information acquisition unit for convenience in order to distinguish these configurations, but all of them correspond to the listening environment characteristic information acquisition unit 231 shown in FIG. 6.
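The correlation-based acquisition of section 2-3-2 can be sketched as follows. This is an illustrative reconstruction, not the patented implementation: when the reference signal is spectrally white, the cross-correlation between the signal fed to the speaker and the signal picked up by the microphone approximates the impulse response of the listening environment.

```python
import numpy as np

def cross_correlation_ir(reference, recorded, max_lag):
    """Estimate the environment response as the cross-correlation between
    the measurement signal before output (reference) and the signal picked
    up by the microphone (recorded), for non-negative lags. For a white
    reference, this is proportional to the impulse response."""
    n = len(reference)
    corr = np.array([
        np.dot(reference[: n - lag], recorded[lag:]) / n
        for lag in range(max_lag)
    ])
    return corr / np.max(np.abs(corr))  # normalize the direct-sound peak to 1

rng = np.random.default_rng(0)
ref = rng.standard_normal(20000)             # stand-in for the music signal
true_ir = np.zeros(64)
true_ir[0], true_ir[40] = 1.0, 0.5           # direct sound + one reflection
rec = np.convolve(ref, true_ir)[: len(ref)]  # what the microphone would hear
est = cross_correlation_ir(ref, rec, max_lag=64)
```

With enough samples, `est` shows the direct sound at lag 0 and the reflection near lag 40; in a real system the microphone signal would also contain noise, which the averaging suppresses.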
With reference to FIG. 7, a configuration example for acquiring a transfer function in the listening environment characteristic information acquisition unit 231 shown in FIG. 6, using the music signal as the measurement signal, will be described. FIG. 7 is a block diagram illustrating such a configuration example.
With reference to FIG. 8, a configuration example for acquiring a correlation function in the listening environment characteristic information acquisition unit 231 shown in FIG. 6, using the music signal as the measurement signal, will be described. FIG. 8 is a block diagram illustrating such a configuration example.
With reference to FIG. 10, a configuration example for acquiring a correlation function in the listening environment characteristic information acquisition unit 231 shown in FIG. 6, using uncorrelated noise as the measurement signal, will be described. FIG. 10 is a block diagram illustrating such a configuration example.
With reference to FIGS. 11 to 13, the function of the music signal processing unit 232 shown in FIG. 6 will be described. FIG. 11 is a schematic diagram illustrating an example of a correlation function that can be acquired by the listening environment characteristic information acquisition unit 231. FIG. 12 is a block diagram illustrating an example of the functional configuration of the music signal processing unit 232. FIG. 13 is a block diagram illustrating a configuration example of the reverberation component imparting unit 293 included in the music signal processing unit 232.
The early reflection time can be defined as the time T1 between the first peak of the correlation function (the direct sound) and a subsequent peak of the correlation function (for example, t2-t1 in FIG. 9). For example, when the listening environment is a relatively large indoor space such as a concert hall, the early reflection time is expected to be longer. The parameter generation unit 291 obtains the early reflection time T1 from the correlation function and provides it to the reverberation component imparting unit 293, where the length of the delay line and the coefficient ER shown in FIG. 13 are changed according to T1. In this way, the early-reflection characteristic of the listening environment can be reflected in the music signal. As the coefficient ER, a value obtained directly from the correlation function or impulse response may be used; alternatively, several candidate values may be prepared in advance, and the one closest to the characteristic obtained from the correlation function may be selected.
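The estimate of T1 described above can be sketched as a peak search. The relative peak threshold and the function name below are assumptions for illustration, not values from the patent.

```python
import numpy as np

def early_reflection_time(corr, fs, threshold=0.2):
    """Estimate the early reflection time T1 as the interval between the
    first peak of the correlation function (direct sound) and the next
    significant peak. `threshold` is relative to the direct-sound peak."""
    corr = np.abs(corr) / np.max(np.abs(corr))
    direct = int(np.argmax(corr))            # t1: the direct-sound peak
    # search for the next local maximum above the threshold
    for i in range(direct + 1, len(corr) - 1):
        if corr[i] >= threshold and corr[i] > corr[i - 1] and corr[i] >= corr[i + 1]:
            return (i - direct) / fs         # T1 = t2 - t1, in seconds
    return None                              # no significant reflection found

fs = 48000
h = np.zeros(4800)
h[0] = 1.0      # direct sound at t1 = 0
h[960] = 0.4    # first reflection 20 ms later
t1 = early_reflection_time(h, fs)
```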
The reverberation time Tr can be estimated by applying Schroeder integration to the obtained correlation function and deriving the energy decay curve. An example of Schroeder integration is shown in Equation (6) below, where <S2(t)> is the ensemble average of the reverberation waveform and h(t) is the correlation function or impulse response acquired by the listening environment characteristic information acquisition unit 231.
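As a hedged sketch of this step: backward-integrate the squared response to get the energy decay curve, then extrapolate a linear fit of the decay to the -60 dB point. The -5 dB to -35 dB fitting range is a conventional choice (as in T30-style measurements), not specified by the patent.

```python
import numpy as np

def schroeder_rt(h, fs, db_start=-5.0, db_end=-35.0):
    """Estimate the reverberation time Tr from an impulse response (or
    correlation function) h via Schroeder integration."""
    energy = np.cumsum(h[::-1] ** 2)[::-1]       # integral of h^2 from t to the end
    edc = 10.0 * np.log10(energy / energy[0])    # energy decay curve in dB
    t = np.arange(len(h)) / fs
    mask = (edc <= db_start) & (edc >= db_end)   # linear-fit region of the decay
    slope, intercept = np.polyfit(t[mask], edc[mask], 1)  # slope in dB/s
    return -60.0 / slope                         # time to decay by 60 dB

fs = 8000
t = np.arange(fs) / fs
h = np.exp(-3.0 * np.log(10) * t / 0.5)  # synthetic decay with RT60 = 0.5 s
rt = schroeder_rt(h, fs)
```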
An example of an index of the proportion of late reverberation is the D value. The D value indicates the ratio of the early energy (within 50 ms) to the total energy of the sound, and is expressed by Equation (9) below, where h(t) is the correlation function or impulse response acquired by the listening environment characteristic information acquisition unit 231.
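In discrete form, the D value of Equation (9) reduces to a ratio of energies, for example:

```python
import numpy as np

def d50(h, fs):
    """D value: ratio of the early energy (within 50 ms) to the total
    energy of the response h, as in Equation (9)."""
    n50 = int(0.050 * fs)
    early = np.sum(h[:n50] ** 2)
    total = np.sum(h ** 2)
    return early / total

fs = 48000
h = np.zeros(fs)
h[0] = 1.0                # direct sound
h[int(0.030 * fs)] = 1.0  # early reflection at 30 ms
h[int(0.200 * fs)] = 1.0  # late reverberation component at 200 ms
d = d50(h, fs)            # early energy 2 out of total energy 3
```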
The parameter generation unit 291 can estimate the frequency characteristic of the listening environment from the correlation function acquired by the listening environment characteristic information acquisition unit 231, generate parameters that reflect that frequency characteristic, and provide them to the EQ unit 292. The EQ unit 292 then reflects the frequency characteristic of the listening environment in the music signal. For example, if the estimated frequency characteristic of the listening environment shows high-frequency attenuation, the EQ unit 292 may attenuate the high frequencies of the music signal.
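One rough way to obtain such an estimate, sketched here under assumptions (the band edges and names are illustrative, not from the patent), is to average the magnitude spectrum of the acquired response per band:

```python
import numpy as np

def band_gains(h, fs, edges=(250.0, 2000.0)):
    """Estimate a coarse frequency characteristic of the listening
    environment from its response h: mean magnitude (dB) in
    low / mid / high bands split at `edges`."""
    spectrum = np.abs(np.fft.rfft(h))
    freqs = np.fft.rfftfreq(len(h), d=1.0 / fs)
    gains = []
    lo = 0.0
    for hi in list(edges) + [fs / 2]:
        band = spectrum[(freqs >= lo) & (freqs < hi)]
        gains.append(20.0 * np.log10(np.mean(band) + 1e-12))
        lo = hi
    return gains  # [low_dB, mid_dB, high_dB]

fs = 8000
h = 0.9 ** np.arange(1024)  # lowpass-like response: highs are attenuated
low_db, mid_db, high_db = band_gains(h, fs)
```

If the high band is attenuated relative to the low band, as here, an EQ stage would apply a matching high-frequency cut to the music signal.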
Next, with reference to FIG. 15, the information processing method according to the first and second embodiments described above will be described. FIG. 15 is a flowchart illustrating an example of the processing procedure of the information processing method according to the first and second embodiments. Here, as an example, the description covers the case where the transfer function H2 is acquired as the listening environment characteristic information using the music signal as the measurement signal, which can be executed in the sound adjustment system 20 according to the second embodiment that includes the listening environment characteristic information acquisition unit 231a described in section 2-3-1 above. However, the information processing method according to the first and second embodiments is not limited to this example; as described above for the first and second embodiments, the user's uttered voice or uncorrelated noise may be used as the measurement signal, and a correlation function may be acquired as the listening environment characteristic information.
Next, some modifications of the first and second embodiments described above will be described. In the following, modifications of the first embodiment are described as an example, but the configurations of these modifications are equally applicable to the second embodiment.
In the first and second embodiments described above, the reverberation characteristics, frequency characteristics, and the like of the listening environment were imparted to the music signal as the acoustic characteristics of the listening environment. However, the first and second embodiments are not limited to this example, and other acoustic characteristics of the listening environment may be imparted to the music signal. Here, as an example, a modification in which the sound pressure of the music signal is adjusted according to the listening environment will be described.
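The sound-pressure adjustment of this modification can be sketched as matching RMS levels. The function name and the target ratio are assumptions for illustration; a real system would smooth the measurements over time.

```python
import numpy as np

def match_sound_pressure(music, external, target_ratio_db=0.0):
    """Scale the music signal so that the ratio between its sound
    pressure (RMS) and that of the picked-up external sound equals
    target_ratio_db."""
    rms_music = np.sqrt(np.mean(music ** 2))
    rms_external = np.sqrt(np.mean(external ** 2))
    target_rms = rms_external * 10.0 ** (target_ratio_db / 20.0)
    return music * (target_rms / rms_music)

rng = np.random.default_rng(1)
music = 0.1 * rng.standard_normal(1000)
external = 0.5 * rng.standard_normal(1000)
# keep the music 6 dB below the external sound, BGM-style
adjusted = match_sound_pressure(music, external, target_ratio_db=-6.0)
```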
In the first and second embodiments described above, listening environment characteristic information was acquired based on the external sound picked up by the microphones 110 and 210 each time a listening environment characteristic information acquisition condition was detected. However, the first and second embodiments are not limited to this example. For example, listening environment characteristic information for each location (that is, for each listening environment) may be stored in a database (DB) in association with the position information of that location, and the listening environment characteristic information acquisition unit may acquire from the DB the listening environment characteristic information corresponding to the user's current position.
In the first and second embodiments described above, listening environment characteristic information was acquired based on the sound picked up by the microphone 110 or 210 provided in one of the pair of housings 140, 240 constituting the headphones 100, 200. However, by picking up external sound and acquiring listening environment characteristic information with each of the pair of housings 140, 240, the listening environment characteristic information can be acquired with higher accuracy.
Next, with reference to FIG. 21, the hardware configuration of the information processing apparatus according to the first and second embodiments will be described. FIG. 21 is a block diagram illustrating an example of the hardware configuration of the information processing apparatus according to the first and second embodiments. The illustrated information processing apparatus 900 can realize, for example, the sound adjustment systems 10, 20, 30, and 70 shown in FIGS. 1, 6, 16, and 20 when each is implemented as an integrated device, or the portable terminal 50 shown in FIG. 18. The illustrated information processing apparatus 900 can also realize the configuration of an information processing apparatus, such as a portable terminal or a server, equipped with the functions of the control units 130, 230, and 330 shown in FIGS. 1, 6, 16, 18, and 20, or the listening environment characteristic information integration unit 190 shown in FIG. 20.
The first and second embodiments of the present disclosure, as well as several modifications thereof, have been described above. As explained, according to the first and second embodiments, listening space characteristic information representing the acoustic characteristics of the listening space is acquired based on external sound, and the acoustic characteristics of the listening space are imparted to the music signal based on the acquired listening space characteristic information. Accordingly, music that blends more naturally with external sound and feels more open can be provided to the user. For example, even when the user wears sealed headphones with high sound insulation, it is possible to listen to music with a BGM-like feel while still hearing external sound.
(1) An information processing apparatus including: a listening environment characteristic information acquisition unit that acquires listening environment characteristic information indicating a characteristic of a listening environment based on external sound picked up by at least one microphone; and a music signal processing unit that filters a music signal with a filter characteristic based on the acquired listening environment characteristic information.
(2) The information processing apparatus according to (1), wherein the external sound is an uttered voice of a user, and the listening environment characteristic information acquisition unit acquires the listening environment characteristic information based on the uttered voice picked up by a first microphone via the user's body and the uttered voice picked up via the listening environment by a second microphone different from the first microphone.
(3) The information processing apparatus according to (2), wherein the listening environment characteristic information is a transfer function until the uttered voice reaches the second microphone via the listening environment.
(4) The information processing apparatus according to (1), wherein the external sound is a predetermined measurement sound output from a speaker toward the listening environment, and the listening environment characteristic information acquisition unit acquires the listening environment characteristic information based on the measurement sound picked up by the microphone.
(5) The information processing apparatus according to (4), wherein the listening environment characteristic information is a transfer function until the measurement sound reaches the microphone via the listening environment.
(6) The information processing apparatus according to (4), wherein the listening environment characteristic information is a correlation function between the measurement sound before being output from the speaker and the measurement sound picked up by the microphone via the listening environment.
(7) The information processing apparatus according to (1), wherein the listening environment characteristic information acquisition unit acquires the listening environment characteristic information based on uncorrelated noise picked up by the microphone.
(8) The information processing apparatus according to (7), wherein the listening environment characteristic information is an autocorrelation function of the uncorrelated noise.
(9) The information processing apparatus according to any one of (1) to (8), wherein the music signal processing unit imparts at least a reverberation characteristic of the listening environment to the music signal.
(10) The information processing apparatus according to (9), wherein the music signal processing unit imparts the reverberation characteristic of the listening environment to the music signal by convolving the transfer function of the external sound in the listening environment with the music signal using an FIR filter.
(11) The information processing apparatus according to any one of (1) to (9), wherein the music signal processing unit filters the music signal using a parameter indicating an acoustic characteristic of the listening environment calculated from the listening environment characteristic information.
(12) The information processing apparatus according to (11), wherein the music signal processing unit includes an IIR filter reflecting a parameter indicating the reverberation characteristic of the listening environment and an equalizer reflecting a parameter indicating the frequency characteristic of the listening environment.
(13) The information processing apparatus according to any one of (1) to (12), wherein the music signal processing unit adjusts a sound pressure ratio between the sound pressure of the sound related to the music signal and the sound pressure of the external sound.
(14) An information processing method including: acquiring, by a processor, listening environment characteristic information indicating a characteristic of a listening environment based on external sound picked up by at least one microphone; and filtering, by the processor, a music signal with a filter characteristic based on the acquired listening environment characteristic information.
(15) A program for causing a processor of a computer to realize: a function of acquiring listening environment characteristic information indicating a characteristic of a listening environment based on external sound picked up by at least one microphone; and a function of filtering a music signal with a filter characteristic based on the acquired listening environment characteristic information.
50 Portable terminal
60 Listening environment characteristic information DB
100, 200 Headphones
110, 110a, 110b, 210 Microphone
120, 220a, 220b Speaker
130, 230, 330 Control unit
131, 231 Listening environment characteristic information acquisition unit
132, 232, 332 Music signal processing unit
133 Monitor signal generation unit
134 Noise cancellation signal generation unit
170, 510 Communication unit
190 Listening environment characteristic information integration unit
520 Position detection unit
Claims (15)
- An information processing apparatus comprising: a listening environment characteristic information acquisition unit that acquires listening environment characteristic information indicating a characteristic of a listening environment based on external sound picked up by at least one microphone; and a music signal processing unit that filters a music signal with a filter characteristic based on the acquired listening environment characteristic information.
- The information processing apparatus according to claim 1, wherein the external sound is an uttered voice of a user, and the listening environment characteristic information acquisition unit acquires the listening environment characteristic information based on the uttered voice picked up by a first microphone via the user's body and the uttered voice picked up via the listening environment by a second microphone different from the first microphone.
- The information processing apparatus according to claim 2, wherein the listening environment characteristic information is a transfer function until the uttered voice reaches the second microphone via the listening environment.
- The information processing apparatus according to claim 1, wherein the external sound is a predetermined measurement sound output from a speaker toward the listening environment, and the listening environment characteristic information acquisition unit acquires the listening environment characteristic information based on the measurement sound picked up by the microphone.
- The information processing apparatus according to claim 4, wherein the listening environment characteristic information is a transfer function until the measurement sound reaches the microphone via the listening environment.
- The information processing apparatus according to claim 4, wherein the listening environment characteristic information is a correlation function between the measurement sound before being output from the speaker and the measurement sound picked up by the microphone via the listening environment.
- The information processing apparatus according to claim 1, wherein the listening environment characteristic information acquisition unit acquires the listening environment characteristic information based on uncorrelated noise picked up by the microphone.
- The information processing apparatus according to claim 7, wherein the listening environment characteristic information is an autocorrelation function of the uncorrelated noise.
- The information processing apparatus according to claim 1, wherein the music signal processing unit imparts at least a reverberation characteristic of the listening environment to the music signal.
- The information processing apparatus according to claim 9, wherein the music signal processing unit imparts the reverberation characteristic of the listening environment to the music signal by convolving the transfer function of the external sound in the listening environment with the music signal using an FIR filter.
- The information processing apparatus according to claim 9, wherein the music signal processing unit filters the music signal using a parameter indicating an acoustic characteristic of the listening environment calculated from the listening environment characteristic information.
- The information processing apparatus according to claim 11, wherein the music signal processing unit includes an IIR filter reflecting a parameter indicating the reverberation characteristic of the listening environment and an equalizer reflecting a parameter indicating the frequency characteristic of the listening environment.
- The information processing apparatus according to claim 1, wherein the music signal processing unit adjusts a sound pressure ratio between the sound pressure of the sound related to the music signal and the sound pressure of the external sound.
- An information processing method comprising: acquiring, by a processor, listening environment characteristic information indicating a characteristic of a listening environment based on external sound picked up by at least one microphone; and filtering, by the processor, a music signal with a filter characteristic based on the acquired listening environment characteristic information.
- A program for causing a processor of a computer to realize: a function of acquiring listening environment characteristic information indicating a characteristic of a listening environment based on external sound picked up by at least one microphone; and a function of filtering a music signal with a filter characteristic based on the acquired listening environment characteristic information.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2016531177A JP6572894B2 (ja) | 2014-06-30 | 2015-05-14 | 情報処理装置、情報処理方法及びプログラム |
CN201580034441.6A CN106664473B (zh) | 2014-06-30 | 2015-05-14 | 信息处理装置、信息处理方法和程序 |
US15/321,408 US9892721B2 (en) | 2014-06-30 | 2015-05-14 | Information-processing device, information processing method, and program |
EP15815725.5A EP3163902A4 (en) | 2014-06-30 | 2015-05-14 | Information-processing device, information processing method, and program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2014-134909 | 2014-06-30 | ||
JP2014134909 | 2014-06-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016002358A1 true WO2016002358A1 (ja) | 2016-01-07 |
Family
ID=55018913
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2015/063919 WO2016002358A1 (ja) | 2014-06-30 | 2015-05-14 | 情報処理装置、情報処理方法及びプログラム |
Country Status (5)
Country | Link |
---|---|
US (1) | US9892721B2 (ja) |
EP (1) | EP3163902A4 (ja) |
JP (1) | JP6572894B2 (ja) |
CN (1) | CN106664473B (ja) |
WO (1) | WO2016002358A1 (ja) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018084594A (ja) * | 2016-11-21 | 2018-05-31 | 日本電信電話株式会社 | 特徴量抽出装置、音響モデル学習装置、音響モデル選択装置、特徴量抽出方法、およびプログラム |
CN108605193A (zh) * | 2016-02-01 | 2018-09-28 | 索尼公司 | 声音输出设备、声音输出方法、程序和声音系统 |
WO2019053995A1 (ja) * | 2017-09-13 | 2019-03-21 | ソニー株式会社 | イヤホン装置、ヘッドホン装置及び方法 |
WO2019082389A1 (ja) * | 2017-10-27 | 2019-05-02 | ヤマハ株式会社 | 音信号出力装置及びプログラム |
WO2020208926A1 (ja) * | 2019-04-08 | 2020-10-15 | ソニー株式会社 | 信号処理装置、信号処理方法及びプログラム |
JP2020534574A (ja) * | 2017-09-20 | 2020-11-26 | ボーズ・コーポレーションBose Corporation | 音響デバイスの並列能動騒音低減(anr)及びヒアスルー信号伝達経路 |
WO2023074654A1 (ja) * | 2021-10-27 | 2023-05-04 | パイオニア株式会社 | 情報処理装置、情報処理方法、プログラムおよび記録媒体 |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9728179B2 (en) | 2015-10-16 | 2017-08-08 | Avnera Corporation | Calibration and stabilization of an active noise cancelation system |
CN107533839B (zh) * | 2015-12-17 | 2021-02-23 | 华为技术有限公司 | 一种对周围环境音的处理方法及设备 |
US10200800B2 (en) * | 2017-02-06 | 2019-02-05 | EVA Automation, Inc. | Acoustic characterization of an unknown microphone |
US10483931B2 (en) * | 2017-03-23 | 2019-11-19 | Yamaha Corporation | Audio device, speaker device, and audio signal processing method |
US10157627B1 (en) * | 2017-06-02 | 2018-12-18 | Bose Corporation | Dynamic spectral filtering |
JP7031668B2 (ja) * | 2017-06-28 | 2022-03-08 | ソニーグループ株式会社 | 情報処理装置、情報処理システム、情報処理方法及びプログラム |
CN109429147B (zh) * | 2017-08-30 | 2021-01-05 | 美商富迪科技股份有限公司 | 电子装置与电子装置的控制方法 |
US11087776B2 (en) * | 2017-10-30 | 2021-08-10 | Bose Corporation | Compressive hear-through in personal acoustic devices |
US10235987B1 (en) * | 2018-02-23 | 2019-03-19 | GM Global Technology Operations LLC | Method and apparatus that cancel component noise using feedforward information |
CN108989931B (zh) * | 2018-06-19 | 2020-10-09 | 美特科技(苏州)有限公司 | 听力保护耳机及其听力保护方法、计算机可读存储介质 |
JP7119210B2 (ja) * | 2018-08-02 | 2022-08-16 | ドルビー ラボラトリーズ ライセンシング コーポレイション | 能動ノイズ制御システムの自動較正 |
CN108985277B (zh) * | 2018-08-24 | 2020-11-10 | 广东石油化工学院 | 一种功率信号中背景噪声滤除方法及系统 |
KR20210030708A (ko) * | 2019-09-10 | 2021-03-18 | 엘지전자 주식회사 | 복수의 사용자의 음성 신호 추출 방법과, 이를 구현하는 단말 장치 및 로봇 |
US10834494B1 (en) | 2019-12-13 | 2020-11-10 | Bestechnic (Shanghai) Co., Ltd. | Active noise control headphones |
US11485231B2 (en) * | 2019-12-27 | 2022-11-01 | Harman International Industries, Incorporated | Systems and methods for providing nature sounds |
JP2021131434A (ja) * | 2020-02-19 | 2021-09-09 | ヤマハ株式会社 | 音信号処理方法および音信号処理装置 |
CN114286278B (zh) * | 2021-12-27 | 2024-03-15 | 北京百度网讯科技有限公司 | 音频数据处理方法、装置、电子设备及存储介质 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001224100A (ja) * | 2000-02-14 | 2001-08-17 | Pioneer Electronic Corp | 自動音場補正システム及び音場補正方法 |
JP2008282042A (ja) * | 2008-07-14 | 2008-11-20 | Sony Corp | 再生装置 |
JP2011530218A (ja) * | 2008-07-29 | 2011-12-15 | ドルビー・ラボラトリーズ・ライセンシング・コーポレーション | 電子音響チャンネルの適応制御とイコライゼーションの方法 |
JP2013541275A (ja) * | 2010-09-08 | 2013-11-07 | ディーティーエス・インコーポレイテッド | 拡散音の空間的オーディオの符号化及び再生 |
JP2014505420A (ja) * | 2011-01-05 | 2014-02-27 | コーニンクレッカ フィリップス エヌ ヴェ | オーディオ・システムおよびその動作方法 |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8081780B2 (en) * | 2007-05-04 | 2011-12-20 | Personics Holdings Inc. | Method and device for acoustic management control of multiple microphones |
US20090074216A1 (en) * | 2007-09-13 | 2009-03-19 | Bionica Corporation | Assistive listening system with programmable hearing aid and wireless handheld programmable digital signal processing device |
CN101828410B (zh) * | 2007-10-16 | 2013-11-06 | 峰力公司 | 用于无线听力辅助的方法和系统 |
US8155340B2 (en) * | 2008-07-24 | 2012-04-10 | Qualcomm Incorporated | Method and apparatus for rendering ambient signals |
US8571226B2 (en) * | 2010-12-10 | 2013-10-29 | Sony Corporation | Automatic polarity adaptation for ambient noise cancellation |
CN102158778A (zh) * | 2011-03-11 | 2011-08-17 | 青岛海信移动通信技术股份有限公司 | 一种降低耳机噪声的方法、设备和系统 |
JP2013102411A (ja) * | 2011-10-14 | 2013-05-23 | Sony Corp | 音声信号処理装置、および音声信号処理方法、並びにプログラム |
US20140126733A1 (en) * | 2012-11-02 | 2014-05-08 | Daniel M. Gauger, Jr. | User Interface for ANR Headphones with Active Hear-Through |
- 2015
- 2015-05-14 US US15/321,408 patent/US9892721B2/en active Active
- 2015-05-14 EP EP15815725.5A patent/EP3163902A4/en not_active Withdrawn
- 2015-05-14 JP JP2016531177A patent/JP6572894B2/ja active Active
- 2015-05-14 WO PCT/JP2015/063919 patent/WO2016002358A1/ja active Application Filing
- 2015-05-14 CN CN201580034441.6A patent/CN106664473B/zh active Active
Non-Patent Citations (1)
Title |
---|
See also references of EP3163902A4 * |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11037544B2 (en) | 2016-02-01 | 2021-06-15 | Sony Corporation | Sound output device, sound output method, and sound output system |
CN108605193A (zh) * | 2016-02-01 | 2018-09-28 | 索尼公司 | 声音输出设备、声音输出方法、程序和声音系统 |
EP3413590A4 (en) * | 2016-02-01 | 2018-12-19 | Sony Corporation | Audio output device, audio output method, program, and audio system |
US10685641B2 (en) | 2016-02-01 | 2020-06-16 | Sony Corporation | Sound output device, sound output method, and sound output system for sound reverberation |
JP2018084594A (ja) * | 2016-11-21 | 2018-05-31 | 日本電信電話株式会社 | 特徴量抽出装置、音響モデル学習装置、音響モデル選択装置、特徴量抽出方法、およびプログラム |
WO2019053995A1 (ja) * | 2017-09-13 | 2019-03-21 | ソニー株式会社 | イヤホン装置、ヘッドホン装置及び方法 |
JP2019054337A (ja) * | 2017-09-13 | 2019-04-04 | ソニー株式会社 | イヤホン装置、ヘッドホン装置及び方法 |
US11741938B2 (en) | 2017-09-13 | 2023-08-29 | Sony Group Corporation | Earphone device, headphone device, and method |
JP2020534574A (ja) * | 2017-09-20 | 2020-11-26 | ボーズ・コーポレーションBose Corporation | 音響デバイスの並列能動騒音低減(anr)及びヒアスルー信号伝達経路 |
JP7008806B2 (ja) | 2017-09-20 | 2022-01-25 | ボーズ・コーポレーション | 音響デバイスの並列能動騒音低減(anr)及びヒアスルー信号伝達経路 |
WO2019082389A1 (ja) * | 2017-10-27 | 2019-05-02 | ヤマハ株式会社 | 音信号出力装置及びプログラム |
WO2020208926A1 (ja) * | 2019-04-08 | 2020-10-15 | ソニー株式会社 | 信号処理装置、信号処理方法及びプログラム |
WO2023074654A1 (ja) * | 2021-10-27 | 2023-05-04 | パイオニア株式会社 | 情報処理装置、情報処理方法、プログラムおよび記録媒体 |
Also Published As
Publication number | Publication date |
---|---|
CN106664473A (zh) | 2017-05-10 |
EP3163902A1 (en) | 2017-05-03 |
CN106664473B (zh) | 2020-02-14 |
US20170200442A1 (en) | 2017-07-13 |
EP3163902A4 (en) | 2018-02-28 |
US9892721B2 (en) | 2018-02-13 |
JP6572894B2 (ja) | 2019-09-11 |
JPWO2016002358A1 (ja) | 2017-04-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6572894B2 (ja) | 情報処理装置、情報処理方法及びプログラム | |
US10332502B2 (en) | Noise reducing device, noise reducing method, noise reducing program, and noise reducing audio outputting device | |
EP2202998B1 (en) | A device for and a method of processing audio data | |
US8675884B2 (en) | Method and a system for processing signals | |
CN104918177B (zh) | 信号处理装置、信号处理方法和程序 | |
JP3670562B2 (ja) | ステレオ音響信号処理方法及び装置並びにステレオ音響信号処理プログラムを記録した記録媒体 | |
US9544698B2 (en) | Signal enhancement using wireless streaming | |
JP2013527491A (ja) | オーディオ再生のための適応的環境ノイズ補償 | |
CN108235181B (zh) | 在音频处理装置中降噪的方法 | |
JP2009530950A (ja) | ウェアラブル装置のためのデータ処理 | |
WO2016153825A1 (en) | System and method for improved audio perception | |
US9137619B2 (en) | Audio signal correction and calibration for a room environment | |
US20140161281A1 (en) | Audio signal correction and calibration for a room environment | |
CN112956210B (zh) | 基于均衡滤波器的音频信号处理方法及装置 | |
WO2022048334A1 (zh) | 检测方法、装置、耳机和可读存储介质 | |
WO2022256577A1 (en) | A method of speech enhancement and a mobile computing device implementing the method | |
CN113424558A (zh) | 智能个人助理 | |
WO2018234618A1 (en) | AUDIO SIGNAL PROCESSING | |
TW201506913A (zh) | 麥克風系統及其聲音處理方法 | |
KR101602298B1 (ko) | 음량측정기를 이용한 오디오시스템 | |
JP5880753B2 (ja) | ヘッドホン、ヘッドホンのノイズ低減方法、ノイズ低減処理用プログラム | |
JP5224613B2 (ja) | 音場補正システム及び音場補正方法 | |
JP2012095254A (ja) | 音量調整装置、音量調整方法及び音量調整プログラム並びに音響機器 | |
JP2012100117A (ja) | 音響処理装置及び方法 | |
CN116419111A (zh) | 耳机的控制方法、参数生成方法、装置、存储介质及耳机 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15815725 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2016531177 Country of ref document: JP Kind code of ref document: A |
|
REEP | Request for entry into the european phase |
Ref document number: 2015815725 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2015815725 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15321408 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |