WO2014073704A1 - Signal Processing System and Signal Processing Method - Google Patents
Signal Processing System and Signal Processing Method
- Publication number
- WO2014073704A1 (PCT/JP2013/080587)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- unit
- microphone
- host device
- signal
- program
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/005—Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/02—Circuits for transducers, loudspeakers or microphones for preventing acoustic reaction, i.e. acoustic oscillatory feedback
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/04—Circuits for transducers, loudspeakers or microphones for correcting frequency response
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2410/00—Microphones
- H04R2410/01—Noise reduction using microphones having different directional characteristics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2410/00—Microphones
- H04R2410/05—Noise reduction with a separate noise microphone
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2420/00—Details of connection covered by H04R, not provided for in its groups
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/12—Circuits for transducers, loudspeakers or microphones for distributing signals to two or more loudspeakers
Definitions
- the present invention relates to a signal processing system including a microphone unit and a host device connected to the microphone unit.
- an apparatus for storing a plurality of programs has been proposed so that an echo canceling program can be selected according to a communication destination.
- the device of Patent Document 1 is configured to change the tap length according to the communication destination.
- the videophone device of Patent Document 2 reads out a different program for each application by switching a dip switch provided in the main body.
- an object of the present invention is to provide a signal processing system that does not need to store a plurality of programs in advance.
- the signal processing system of the present invention is a signal processing system including microphone units and a host device connected to one of the microphone units.
- the microphone unit includes a microphone that collects sound, a temporary storage memory, and a processing unit that processes sound collected by the microphone.
- the host device includes a non-volatile memory that holds an audio processing program for the microphone unit.
- the host device transmits the audio processing program from the nonvolatile memory to the temporary storage memory of the microphone unit, and the microphone unit stores the audio processing program in the temporary storage memory.
- the processing unit performs processing according to the audio processing program temporarily stored in the temporary storage memory, and transmits the processed audio to the host device.
- the terminal does not have a built-in operation program in advance; it receives the program from the host device and temporarily stores it in the temporary storage memory before operating. Therefore, a large number of programs need not be stored in advance on the microphone unit side.
- the program rewriting process for each microphone unit is unnecessary, and a new function can be realized only by changing the program stored in the nonvolatile memory on the host device side.
- the same program may be executed by all the microphone units, but it is also possible to execute individual programs for each microphone unit.
- the host device can change the program to be transmitted according to the number of connected microphone units.
- the gain of the microphone unit is set high.
- the gain of each microphone unit is set relatively low.
- each microphone unit includes a plurality of microphones, it is possible to execute a program for causing a microphone array to function.
- the host device divides the audio processing program into fixed-length unit bit data, creates serial data in which the unit bit data are arranged in the order received by each microphone unit, and transmits the serial data to each microphone unit.
- Each microphone unit extracts the unit bit data addressed to it from the serial data and temporarily stores it, and the processing unit performs processing according to the audio processing program reassembled from the unit bit data. Thereby, even if the number of microphone units increases and the amount of program data to be transmitted increases, the number of signal lines between the microphone units does not increase.
- each microphone unit divides its processed audio into fixed-length unit bit data and transmits it to the microphone unit on the host side; the microphone units thus cooperate to create serial data for transmission to the host device. Thereby, even if the number of microphone units increases and the number of channels increases, the number of signal lines between the microphone units does not increase.
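The unit-bit-data framing described above can be sketched in Python. This is an illustrative model only, not the patent's actual wire format: the chunk size, zero-padding, and the function names (`make_serial_frames`, `extract_unit_data`) are assumptions.

```python
def make_serial_frames(programs, unit_bytes=4):
    """Interleave per-unit program data into serial frames.

    programs: one byte string per microphone unit, in the order the
    units receive them. Each frame carries one fixed-size chunk
    ("unit bit data") for every unit.
    """
    # Pad every program so all units have the same number of chunks.
    n_chunks = max((len(p) + unit_bytes - 1) // unit_bytes for p in programs)
    padded = [p.ljust(n_chunks * unit_bytes, b"\x00") for p in programs]
    frames = []
    for i in range(n_chunks):
        # One chunk from each unit's program, in unit order.
        frames.append(b"".join(p[i * unit_bytes:(i + 1) * unit_bytes]
                               for p in padded))
    return frames

def extract_unit_data(frames, unit_index, unit_bytes=4):
    """What one microphone unit does: pick its own chunk out of each
    frame and concatenate the chunks back into its program."""
    start = unit_index * unit_bytes
    return b"".join(f[start:start + unit_bytes] for f in frames)
```

Because every frame carries one chunk per unit, the frame width (and hence the number of signal lines) stays constant no matter how long each program is.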
- the microphone unit includes a plurality of microphones having different sound collection directions and a sound level determination unit.
- the host device includes a speaker, and emits a test sound wave from the speaker toward each microphone unit.
- Each microphone unit determines the level of the test sound wave input to its microphones, divides the resulting level data into fixed-length unit bit data, and transmits the unit bit data to the microphone unit on the host side; the microphone units thus cooperate to create level-determination serial data. Thereby, the host device can grasp the level of echo from the speaker to the microphone of each microphone unit.
- the speech processing program includes an echo cancellation program for realizing an echo canceller in which filter coefficients are updated.
- the echo cancellation program includes a filter coefficient setting unit that determines the number of filter coefficients. Based on the level data received from each microphone unit, the host device changes the number of filter coefficients of each microphone unit: it determines a change parameter for each microphone unit, divides the change parameters into fixed-length unit bit data, creates change-parameter serial data in which the unit bit data are arranged in the order received by each microphone unit, and transmits the change-parameter serial data to each microphone unit.
- the number of filter coefficients (number of taps) can be increased for a microphone unit that is close to the host device and has a high echo level, and decreased for a microphone unit that is far from the host device and has a low echo level.
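As a sketch of this tap-count adjustment, the host might map each unit's reported echo level to a tap count as below. The tap range and the linear mapping are illustrative assumptions; the patent gives no numbers.

```python
def tap_counts(levels, min_taps=128, max_taps=1024, full_scale=1.0):
    """Map each unit's measured echo level to an adaptive-filter tap
    count: a louder echo (unit nearer the loudspeaker) gets more taps,
    a quieter one fewer."""
    counts = []
    for level in levels:
        frac = max(0.0, min(1.0, level / full_scale))  # clamp to [0, 1]
        counts.append(int(min_taps + frac * (max_taps - min_taps)))
    return counts
```

The resulting per-unit counts would then be serialized as the change-parameter serial data described above.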
- the audio processing program is either the echo cancellation program or a noise cancellation program that removes noise components, and the host device determines from the level data which of the two programs to transmit to each microphone unit.
- an echo canceller can be executed for a microphone unit that is close to the host device and has a high echo level
- a noise canceller can be executed for a microphone unit that is far from the host device and has a low echo level.
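The host-side choice between the two programs can be sketched as a simple threshold rule on the level data; the threshold value and program names here are assumptions for illustration.

```python
def choose_programs(levels, threshold=0.2):
    """For each microphone unit, pick the program to transmit: units
    whose test-tone echo level is at or above the threshold (near the
    loudspeaker) get the echo canceller, quieter (more distant) units
    get the noise canceller."""
    return ["echo_canceller" if lv >= threshold else "noise_canceller"
            for lv in levels]
```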
- the signal processing method of the present invention is a signal processing method for a signal processing device including a plurality of microphone units connected in series and a host device connected to one of the plurality of microphone units.
- Each microphone unit includes a microphone that collects sound, a temporary storage memory, and a processing unit that processes sound collected by the microphone.
- the host device includes a non-volatile memory that holds an audio processing program for the microphone unit.
- when the activation state of the host device is detected, the audio processing program is read from the nonvolatile memory, transmitted from the host device to the microphone units, and temporarily stored in the temporary storage memory of each microphone unit; each microphone unit then performs processing according to the temporarily stored audio processing program and transmits the processed audio to the host device.
- according to the present invention, it is not necessary to store a plurality of programs in advance, and the terminal program need not be rewritten when adding a new function.
- FIG. 2A is a block diagram showing the configuration of the host device
- FIG. 2B is a block diagram showing the configuration of the microphone unit
- FIG. 3A is a diagram illustrating the configuration of an echo canceller
- FIG. 3B is a diagram illustrating a configuration of a noise canceller.
- FIG. 5A is a diagram showing another connection mode of the signal processing system of the present invention
- FIG. 5B is an external perspective view of the host device
- FIG. 5C is an external perspective view of the microphone unit.
- FIG. 6A is a schematic block diagram showing signal connection
- FIG. 6B is a schematic block diagram showing the configuration of the microphone unit. Another figure is a schematic block diagram showing the structure of the signal processing device in the case of converting between serial data and parallel data.
- FIG. 8A is a conceptual diagram showing conversion between serial data and parallel data
- FIG. 8B is a diagram showing the signal flow of the microphone unit. Further figures show the flow of signals when a signal is transmitted from each microphone unit to the host device, and the flow of signals in the case of transmitting a separate audio …
- FIG. 23A and FIG. 23B are diagrams showing a modified example of the arrangement of the host device and the slave unit.
- FIG. 1 is a diagram showing a connection mode of the signal processing system of the present invention.
- the signal processing system includes a host device 1 and a plurality (five in this example) of microphone units 2A to 2E connected to the host device 1, respectively.
- the microphone units 2A to 2E are arranged in a large conference room, for example.
- the host device 1 receives an audio signal from each microphone unit and performs various processes. For example, the audio signal of each microphone unit is individually transmitted to another host device connected via the network.
- FIG. 2A is a block diagram showing the configuration of the host device 1
- FIG. 2B is a block diagram showing the configuration of the microphone unit 2A.
- the hardware configuration of each microphone unit is the same.
- FIG. 2B the configuration and function of the microphone unit 2A will be described as a representative.
- the configuration of A / D conversion is omitted, and the various signals are assumed to be digital signals unless otherwise specified.
- the host device 1 includes a communication interface (I/F) 11, a CPU 12, a RAM 13, a nonvolatile memory 14, and a speaker 102.
- the CPU 12 performs various operations by reading the application program from the nonvolatile memory 14 and temporarily storing it in the RAM 13. For example, as described above, an audio signal is input from each microphone unit, and each audio signal is individually transmitted to another host device connected via a network.
- the non-volatile memory 14 includes a flash memory, a hard disk drive (HDD), and the like.
- the nonvolatile memory 14 stores a sound processing program (hereinafter referred to as a sound signal processing program in the present embodiment).
- the audio signal processing program is an operation program for each microphone unit, such as a program for realizing an echo canceller function, a program for realizing a noise canceller function, or a program for realizing gain control.
- the CPU 12 reads a predetermined audio signal processing program from the nonvolatile memory 14 and transmits it to each microphone unit via the communication I / F 11. Note that the audio signal processing program may be incorporated in the application program.
- the microphone unit 2A includes a communication I/F 21A, a DSP 22A, and a microphone 25A.
- the DSP 22A includes a volatile memory 23A and an audio signal processing unit 24A.
- the volatile memory 23A is incorporated in the DSP 22A.
- the volatile memory 23A may be provided separately from the DSP 22A.
- the audio signal processing unit 24A corresponds to the processing unit of the present invention, and has a function of outputting the sound collected by the microphone 25A as a digital audio signal.
- the audio signal processing program transmitted from the host device 1 is temporarily stored in the volatile memory 23A via the communication I / F 21A.
- the audio signal processing unit 24A performs processing according to the audio signal processing program temporarily stored in the volatile memory 23A, and transmits a digital audio signal related to the audio collected by the microphone 25A to the host device 1. For example, when an echo canceller program is transmitted from the host device 1, the echo component is removed from the sound collected by the microphone 25A before transmission to the host device 1. Executing the echo canceller program in each microphone unit in this way is preferable when the host device 1 executes an application program for a communication conference.
- the audio signal processing program temporarily stored in the volatile memory 23A is deleted when the power supply to the microphone unit 2A is cut off.
- the microphone unit therefore always operates after receiving the audio signal processing program for operation from the host device 1 each time it is activated. If the microphone unit 2A is supplied with power via the communication I/F 21A (bus power), it receives the operation program from the host device 1 only when connected to the host device 1, and then performs the operation.
- when an application program for a communication conference is executed, the echo canceller audio signal processing program is executed; when an application program for recording is executed, the noise canceller audio signal processing program is executed.
- when the host device 1 executes a loudspeaker application program in order to output the sound collected by each microphone unit from the speaker 102, an aspect in which a howling canceller audio signal processing program is executed is also possible. Note that the speaker 102 is not necessary when the host device 1 executes an application program for recording.
- FIG. 3A is a block diagram showing a configuration when the audio signal processing unit 24A executes an echo canceller program.
- the audio signal processing unit 24A includes a filter coefficient setting unit 241, an adaptive filter 242, and an adding unit 243.
- the filter coefficient setting unit 241 estimates the transfer function of the acoustic transmission system (the acoustic propagation path from the speaker 102 of the host device 1 to the microphone of each microphone unit), and sets the filter coefficient of the adaptive filter 242 using the estimated transfer function.
- the adaptive filter 242 includes a digital filter such as an FIR filter.
- the adaptive filter 242 receives the sound emission signal FE input to the speaker 102 of the host device 1, performs filter processing with the filter coefficient set by the filter coefficient setting unit 241, and generates a pseudo-regression sound signal.
- the adaptive filter 242 outputs the generated pseudo regression sound signal to the adding unit 243.
- the adding unit 243 outputs a sound collection signal NE1 'obtained by subtracting the pseudo regression sound signal input from the adaptive filter 242 from the sound collection signal NE1 of the microphone 25A.
- the filter coefficient setting unit 241 updates the filter coefficient using an adaptive algorithm such as the LMS algorithm, based on the sound collection signal NE1′ output from the adding unit 243 and the sound emission signal FE. Then, the filter coefficient setting unit 241 sets the updated filter coefficient in the adaptive filter 242.
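The patent specifies only "an adaptive algorithm such as an LMS algorithm"; a minimal normalized-LMS (NLMS) sketch of the adaptive filter 242 / adding unit 243 / filter coefficient setting unit 241 loop might look like this. The tap count, step size, and NLMS normalization are illustrative choices, not the patent's.

```python
def nlms_echo_cancel(far_end, mic, n_taps=8, mu=0.5, eps=1e-8):
    """Minimal NLMS adaptive echo canceller sketch.
    far_end: loudspeaker signal FE; mic: microphone signal NE1.
    Returns the error signal NE1' with the echo estimate subtracted."""
    w = [0.0] * n_taps            # adaptive FIR filter coefficients
    buf = [0.0] * n_taps          # most recent far-end samples, newest first
    out = []
    for fe, ne in zip(far_end, mic):
        buf = [fe] + buf[:-1]
        y = sum(wi * xi for wi, xi in zip(w, buf))   # pseudo-regression signal
        e = ne - y                                   # NE1' = NE1 - echo estimate
        norm = sum(x * x for x in buf) + eps         # NLMS power normalization
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, buf)]
        out.append(e)
    return out
```

With a stationary echo path that fits inside the filter (here a one-sample delay), the residual error decays toward zero as the coefficients converge.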
- FIG. 3B is a block diagram showing a configuration when the audio signal processing unit 24A executes a noise canceller program.
- the audio signal processing unit 24A includes an FFT processing unit 245, a noise removal unit 246, an estimation unit 247, and an IFFT processing unit 248.
- the FFT processing unit 245 converts the collected sound signal NE'T into the frequency spectrum NE'N.
- the noise removing unit 246 removes the noise component N′N included in the frequency spectrum NE′N.
- the noise component N′N is estimated by the estimation unit 247 based on the frequency spectrum NE′N.
- the estimation unit 247 performs a process of estimating a noise component N′N included in the frequency spectrum NE′N input from the FFT processing unit 245.
- the estimation unit 247 sequentially acquires and temporarily stores a frequency spectrum (hereinafter referred to as a speech spectrum) S (NE′N) at a certain sample timing of the speech signal NE′N.
- based on the speech spectra S(NE′N) acquired and stored over multiple samplings, the estimation unit 247 estimates the frequency spectrum of the noise component N′N at a certain sample timing (hereinafter referred to as a noise spectrum) S(N′N). Then, the estimation unit 247 outputs the estimated noise spectrum S(N′N) to the noise removal unit 246.
- the noise spectrum S (N′N (T)) can be expressed by the following formula 1.
- noise components such as background noise can be estimated by estimating the noise spectrum S (N′N (T)) based on the speech spectrum.
- the estimation unit 247 performs noise spectrum estimation processing only when the level of the collected sound signal collected by the microphone 25A is low (silent state).
- the noise removing unit 246 removes the noise component N′N from the frequency spectrum NE′N input from the FFT processing unit 245 and outputs the frequency spectrum CO′N after the noise removal to the IFFT processing unit 248. Specifically, the noise removal unit 246 calculates a signal level ratio between the voice spectrum S (NE′N) and the noise spectrum S (N′N) input from the estimation unit 247. When the calculated signal level ratio is equal to or greater than the threshold, the noise removal unit 246 outputs the speech spectrum S (NE′N) linearly. Further, when the calculated signal level ratio is less than the threshold value, the noise removing unit 246 outputs the speech spectrum S (NE′N) nonlinearly.
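A per-bin sketch of the estimation unit 247 and the noise removal unit 246 might look like the following. The recursive-averaging form and the simple floor-scaling used for the "nonlinear" branch are assumptions: the patent's Formula 1 is not reproduced here, and the nonlinear output rule is not defined in this text.

```python
def update_noise_estimate(noise_spec, frame_spec, alpha=0.1):
    """Recursive averaging of the per-bin noise spectrum, intended to
    run only while the collected level is low (silent state). The
    averaging form is a common choice, not necessarily Formula 1."""
    return [(1 - alpha) * n + alpha * s
            for n, s in zip(noise_spec, frame_spec)]

def remove_noise(speech_spec, noise_spec, threshold=2.0, floor=0.1):
    """Per-bin noise removal: where the speech-to-noise level ratio
    meets the threshold the bin passes linearly; below it the bin is
    suppressed nonlinearly (here simply scaled toward a floor)."""
    out = []
    for s, n in zip(speech_spec, noise_spec):
        ratio = s / n if n > 0 else float("inf")
        if ratio >= threshold:
            out.append(s)            # linear pass-through
        else:
            out.append(s * floor)    # nonlinear suppression
    return out
```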
- the IFFT processing unit 248 outputs a sound signal CO′T generated by inversely transforming the frequency spectrum CO′N after removing the noise component N′N to the time axis.
- the audio signal processing program can realize an echo suppressor program as shown in FIG.
- the echo suppressor is provided in the stage subsequent to the echo canceller shown in FIG. 3A and removes echo components that could not be removed by the echo canceller.
- the echo suppressor includes an FFT processing unit 121, an echo removal unit 122, an FFT processing unit 123, a progress calculation unit 124, an echo generation unit 125, an FFT processing unit 126, and an IFFT processing unit 127.
- the FFT processing unit 121 converts the collected sound signal NE1 'output from the echo canceller into a frequency spectrum. This frequency spectrum is output to the echo removing unit 122 and the progress degree calculating unit 124.
- the echo removing unit 122 removes residual echo components (echo components that could not be removed by the echo canceller) included in the input frequency spectrum. The residual echo component is generated by the echo generator 125.
- the echo generation unit 125 generates a residual echo component based on the frequency spectrum of the pseudo regression sound signal input from the FFT processing unit 126.
- the residual echo component is obtained by adding the residual echo component estimated in the past and the frequency spectrum of the input pseudo-regression sound signal multiplied by a predetermined coefficient.
- the predetermined coefficient is set by the progress calculation unit 124.
- the progress calculation unit 124 obtains the power ratio between the sound collection signal NE1 input from the FFT processing unit 123 (the sound collection signal before the echo component is removed by the preceding echo canceller) and the sound collection signal NE1′ input from the FFT processing unit 121 (the sound collection signal after the echo component is removed by the preceding echo canceller).
- the progress calculation unit 124 outputs the predetermined coefficient based on this power ratio. For example, the predetermined coefficient is set to 1 when learning of the adaptive filter 242 has not progressed at all, and is reduced toward 0 as learning of the adaptive filter 242 progresses, so that the residual echo component becomes smaller. The echo removal unit 122 then removes the residual echo component calculated by the echo generation unit 125.
- the IFFT processing unit 127 inversely transforms the frequency spectrum after removal of the echo component back to the time axis and outputs the result.
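The progress coefficient and residual-echo recursion described for the progress calculation unit 124 and echo generation unit 125 can be sketched as follows. The exact mapping from power ratio to coefficient is not given in the text, so the clamped ratio used here is an assumption.

```python
def suppressor_coefficient(power_before, power_after):
    """Learning-progress coefficient: the ratio of echo power after vs
    before the echo canceller. 1.0 when the adaptive filter has learned
    nothing (NE1' == NE1), shrinking toward 0.0 as learning removes
    more of the echo."""
    if power_before <= 0:
        return 0.0
    return min(1.0, power_after / power_before)

def residual_echo(prev_residual, pseudo_echo_spec, coeff):
    """Residual-echo spectrum estimate per the description of echo
    generation unit 125: the previously estimated residual plus the
    pseudo-regression spectrum scaled by the coefficient."""
    return [r + coeff * p for r, p in zip(prev_residual, pseudo_echo_spec)]
```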
- the echo canceller program, the noise canceller program, and the echo suppressor program can be executed by the host device 1.
- the host device can execute the echo suppressor program while each microphone unit executes the echo canceller program.
- the audio signal processing program to be executed can be changed according to the number of connected microphone units. For example, when the number of connected microphone units is one, the gain of the microphone unit is set high, and when the number of microphone units is plural, the gain of each microphone unit is set relatively low.
- when each microphone unit includes a plurality of microphones, different parameters (gain, delay amount, etc.) can be set for each microphone unit according to the order (position) in which it is connected to the host device 1.
- the microphone unit of the present embodiment can realize various functions according to the use of the host device 1. Even when realizing such various functions, the microphone unit 2A does not need to store a program in advance and therefore does not require a non-volatile memory (or requires only a small-capacity one).
- the volatile memory 23A, which is a RAM, is shown as an example of the temporary storage memory.
- however, the temporary storage memory is not limited to a volatile memory; a non-volatile memory such as a flash memory may be used instead.
- in that case, the DSP 22A erases the contents of the flash memory when the power supply to the microphone unit 2A is cut off, and a capacitor or the like is provided to temporarily secure power until the erasure completes.
- the microphone units 2A to 2E all have the same hardware, the user does not need to be aware of which microphone unit is connected to which position.
- different programs can be transmitted to different microphone units (for example, the microphone unit 2A and the microphone unit 2E): the echo canceller program is always executed on the microphone unit closest to the host device 1, and the noise canceller program on the microphone unit farthest from the host device 1.
- each microphone unit may be connected in a star connection mode, directly connected to the host device 1; alternatively, as shown in FIG. 5A, the microphone units may be connected to each other in series, with one of them (the microphone unit 2A) connected in cascade to the host device 1.
- the host device 1 is connected to the microphone unit 2A via the cable 331.
- the microphone unit 2A and the microphone unit 2B are connected via a cable 341.
- the microphone unit 2B and the microphone unit 2C are connected via a cable 351.
- the microphone unit 2C and the microphone unit 2D are connected via a cable 361.
- the microphone unit 2D and the microphone unit 2E are connected via a cable 371.
- FIG. 5B is an external perspective view of the host device 1
- FIG. 5C is an external perspective view of the microphone unit 2A.
- the microphone unit 2A is illustrated and described as a representative, but all the microphone units have the same appearance and configuration.
- the host device 1 has a rectangular parallelepiped housing 101A; the speaker 102 is provided on one side surface (front surface) of the housing 101A, and the communication I/F 11 is provided on another side surface (rear surface).
- the microphone unit 2A has a rectangular parallelepiped housing 201A, a microphone 25A is provided on the side surface of the housing 201A, and a first input / output terminal 33A and a second input / output terminal 34A are provided on the front surface of the housing 201A.
- FIG. 5C shows an example in which the microphone 25A has three sound collection directions on the back surface, the right side surface, and the left side surface.
- the sound collection direction is not limited to this example.
- the three microphones 25A may be arranged at 120-degree intervals in a plan view so as to collect sound in the circumferential direction.
- the microphone unit 2A has the cable 331 connected to the first input/output terminal 33A, and is connected to the communication I/F 11 of the host device 1 via the cable 331.
- the microphone unit 2A is connected to the second input / output terminal 34A with a cable 341, and is connected to the first input / output terminal 33B of the microphone unit 2B via the cable 341.
- the shapes of the housing 101A and the housing 201A are not limited to the rectangular parallelepiped shape.
- the housing 101A of the host device 1 may be an elliptic cylinder
- the housing 201A of the microphone unit 2A may be a columnar shape.
- the signal processing system according to the present embodiment has an appearance of a cascade connection as shown in FIG. 5A, but can electrically realize a star connection. Hereinafter, this point will be described.
- FIG. 6A is a schematic block diagram showing signal connection.
- the hardware configuration of each microphone unit is the same. First, the configuration and function of the microphone unit 2A will be described with reference to FIG. 6B as a representative.
- the microphone unit 2A includes an FPGA 31A, a first input / output terminal 33A, and a second input / output terminal 34A in addition to the DSP 22A shown in FIG.
- the FPGA 31A implements a physical circuit as shown in FIG. That is, the FPGA 31A physically connects the first channel of the first input / output terminal 33A and the DSP 22A.
- the FPGA 31A also physically connects each channel of the first input/output terminal 33A other than the first channel to the channel of the second input/output terminal 34A whose number is one lower.
- that is, the second channel of the first input/output terminal 33A is connected to the first channel of the second input/output terminal 34A, the third channel of the first input/output terminal 33A to the second channel of the second input/output terminal 34A, the fourth channel to the third channel, and the fifth channel to the fourth channel. The fifth channel of the second input/output terminal 34A is not connected anywhere.
- the signal (ch. 1) of the first channel of the host device 1 is input to the DSP 22A of the microphone unit 2A.
- the signal (ch. 2) of the second channel of the host device 1 is transmitted from the second channel of the first input/output terminal 33A of the microphone unit 2A to the first channel of the first input/output terminal 33B of the microphone unit 2B, and is input to the DSP 22B.
- the signal (ch. 3) of the third channel passes from the third channel of the first input/output terminal 33A through the second channel of the first input/output terminal 33B of the microphone unit 2B, is input to the first channel of the first input/output terminal 33C of the microphone unit 2C, and is input to the DSP 22C.
- the fourth-channel audio signal (ch. 4) passes from the fourth channel of the first input/output terminal 33A through the third channel of the first input/output terminal 33B of the microphone unit 2B and the second channel of the first input/output terminal 33C of the microphone unit 2C, is input to the first channel of the first input/output terminal 33D of the microphone unit 2D, and is input to the DSP 22D.
- the fifth-channel audio signal (ch. 5) passes from the fifth channel of the first input/output terminal 33A through the fourth channel of the first input/output terminal 33B of the microphone unit 2B, the third channel of the first input/output terminal 33C of the microphone unit 2C, and the second channel of the first input/output terminal 33D of the microphone unit 2D, is input to the first channel of the first input/output terminal 33E of the microphone unit 2E, and is then input to the DSP 22E.
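The channel-shifting wiring above can be modeled in a few lines: each unit takes channel 1 for its own DSP and forwards the remaining channels shifted down by one, so host channel n reaches the n-th unit's DSP even though the units are cascaded. This is a behavioral sketch, not the FPGA implementation.

```python
def unit_pass(upstream):
    """One microphone unit: ch.1 of the first input/output terminal
    feeds the DSP; ch.k (k >= 2) is re-driven as ch.(k-1) downstream."""
    return upstream[0], upstream[1:]

def cascade(host_channels):
    """Run the host's channels through a chain of units and report
    which host channel each unit's DSP receives."""
    received, bus = [], list(host_channels)
    while bus:
        to_dsp, bus = unit_pass(bus)
        received.append(to_dsp)
    return received
```

The result is an electrically star-shaped connection: unit i receives host channel i, and the cable between any two units never needs more lines than the host's channel count.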
- the microphone units connected in series via the cables are detachable, and there is no need to consider the connection order.
- suppose that the connection positions of the microphone unit 2A and the microphone unit 2E are exchanged. A description will be given of the program transmitted to each microphone unit in that case.
- the first input/output terminal 33E of the microphone unit 2E is connected to the communication I/F 11 of the host device 1 via the cable 331, and the second input/output terminal 34E is connected to the first input/output terminal 33B of the microphone unit 2B via the cable 341.
- the first input / output terminal 33A of the microphone unit 2A is connected to the second input / output terminal 34D of the microphone unit 2D via the cable 371.
- the echo canceller program is transmitted to the microphone unit 2E, and the noise canceller program is transmitted to the microphone unit 2A.
- in this way, even if the connection order is changed, the echo canceller program is always executed on the microphone unit closest to the host device 1, and the noise canceller program is executed on the microphone unit farthest from the host device 1.
- the host device 1 may also recognize the connection order of the microphone units and, based on the connection order and the cable lengths, transmit the echo canceller program to the microphone units within a certain distance from itself and the noise canceller program to the microphone units beyond that distance. For example, when dedicated cables are used, information on the cable lengths is stored in advance in the host device 1. Alternatively, identification information may be set for each cable and stored together with information on that cable's length, so that the host device 1 can learn the length of each cable in use by receiving its identification information.
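- the distance-based selection described above can be sketched as follows (Python; a minimal illustration, not part of the disclosed embodiment — the unit count, cable lengths, and the 3.0 m threshold are assumed values):

```python
def assign_programs(cable_lengths_m, threshold_m=3.0):
    """cable_lengths_m[i] is the length of the cable between unit i-1
    (or the host device, for i == 0) and unit i, in connection order.
    The threshold value is an illustrative assumption."""
    programs = []
    distance = 0.0
    for length in cable_lengths_m:
        distance += length
        # Units within the threshold receive the echo canceller program,
        # the units beyond it the noise canceller program, as described above.
        programs.append("echo_canceller" if distance <= threshold_m
                        else "noise_canceller")
    return programs

print(assign_programs([1.0, 1.0, 1.0, 1.5, 1.5]))
# → ['echo_canceller', 'echo_canceller', 'echo_canceller',
#    'noise_canceller', 'noise_canceller']
```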
- the echo canceller close to the host device 1 can increase the number of filter coefficients (the number of taps) to cope with echoes with long reverberation.
- alternatively, a program that performs non-linear processing (for example, the above-described echo suppressor program) may be sent to the microphone units within a certain distance from the host device 1, so that even if an echo component that cannot be removed by the echo canceller occurs, that echo component can be removed.
- in the above description, each microphone unit selects either the noise canceller or the echo canceller. However, both the noise canceller and echo canceller programs may be transmitted to the microphone units close to the host device 1, and only the noise canceller program may be transmitted to the microphone units far from the host device 1.
- when the microphone units output audio signals to the host device 1, the audio signal of each channel can be output individually from each microphone unit.
- in the example described above, the physical circuit is realized by an FPGA. However, the present invention is not limited to an FPGA as long as the above-described physical circuit can be realized; for example, a dedicated IC may be prepared in advance, or wiring may be provided in advance.
- FIG. 7 is a schematic block diagram showing the configuration of the microphone unit when converting serial data and parallel data.
- the microphone unit 2A is illustrated and described as a representative, but all the microphone units have the same configuration and function.
- the microphone unit 2A includes an FPGA 51A instead of the FPGA 31A shown in FIGS. 6 (A) and 6 (B).
- the FPGA 51A includes a physical circuit 501A corresponding to the above-described FPGA 31A, and a first conversion unit 502A and a second conversion unit 503A that convert between serial data and parallel data.
- the first input / output terminal 33A and the second input / output terminal 34A input / output a plurality of channels of audio signals as serial data.
- the DSP 22A outputs the audio signal of the first channel to the physical circuit 501A as parallel data.
- the physical circuit 501A outputs the first-channel parallel data output from the DSP 22A to the first conversion unit 502A. The physical circuit 501A also outputs to the first conversion unit 502A the second-channel parallel data (corresponding to the output signal of the DSP 22B), the third-channel parallel data (corresponding to the output signal of the DSP 22C), the fourth-channel parallel data (corresponding to the output signal of the DSP 22D), and the fifth-channel parallel data (corresponding to the output signal of the DSP 22E) output from the second conversion unit 503A.
- FIG. 8A is a conceptual diagram showing conversion between serial data and parallel data.
- the parallel data includes a bit clock (BCK) for synchronization, a word clock (WCK), and signals SDO0 to SDO4 for each channel (5 channels).
- Serial data consists of a sync signal and a data part.
- the data portion includes a word clock, signals SDO0 to SDO4 of each channel (5 channels), and an error correction code CRC.
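- the serial frame described above (a sync signal, then a data part carrying the word clock, the channel signals SDO0 to SDO4, and the error correction code CRC) can be sketched as follows in Python. The patent does not specify field widths or the CRC polynomial, so one byte per field and CRC-32 are illustrative assumptions:

```python
import zlib

def pack_frame(sdo, sync=b'\xA5', wck=b'\x01'):
    """Build one serial frame: sync signal, then a data part holding the
    word clock and the five channel signals SDO0..SDO4, followed by an
    error correction/detection code computed over the data part."""
    assert len(sdo) == 5                      # five channels per frame
    data = wck + bytes(sdo)
    crc = zlib.crc32(data).to_bytes(4, 'big')  # illustrative CRC choice
    return sync + data + crc

def unpack_frame(frame):
    """Recover SDO0..SDO4 from a frame, verifying the CRC."""
    data, crc = frame[1:7], frame[7:]
    assert zlib.crc32(data).to_bytes(4, 'big') == crc, "CRC mismatch"
    return list(data[1:])                     # SDO0..SDO4

frame = pack_frame([10, 20, 30, 40, 50])
assert unpack_frame(frame) == [10, 20, 30, 40, 50]
```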
- the first converter 502A receives parallel data as shown in the upper column of FIG. 8A from the physical circuit 501A.
- the first conversion unit 502A converts the parallel data into serial data as shown in the lower column of FIG. 8A.
- the serial data is output to the first input/output terminal 33A and input to the host device 1.
- the host device 1 processes the audio signal of each channel based on the input serial data.
- the second conversion unit 503A receives serial data as shown in the lower column of FIG. 8A from the first conversion unit 502B of the microphone unit 2B, converts it into parallel data as shown in the upper column of FIG. 8A, and outputs the parallel data to the physical circuit 501A.
- the SDO0 signal output from the second conversion unit 503A is output to the first conversion unit 502A as the SDO1 signal by the physical circuit 501A, the SDO1 signal output from the second conversion unit 503A is output to the first conversion unit 502A as the SDO2 signal, the SDO2 signal output from the second conversion unit 503A is output to the first conversion unit 502A as the SDO3 signal, and the SDO3 signal output from the second conversion unit 503A is output to the first conversion unit 502A as the SDO4 signal.
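- the channel re-mapping performed by the physical circuit 501A can be sketched as follows (Python; a minimal model in which signals are list elements rather than parallel bit lines):

```python
def physical_circuit(own_sdo0, upstream_sdo):
    """Model of the re-mapping in the physical circuit 501A: the local DSP
    output becomes SDO0, and each signal received from the downstream unit
    via the second conversion unit is shifted up by one channel (its SDO0
    becomes SDO1, its SDO1 becomes SDO2, and so on)."""
    return [own_sdo0] + upstream_sdo[:-1]   # the last slot is displaced

# The downstream chain supplied ch.2..ch.5 as SDO0..SDO3; SDO4 was unused.
out = physical_circuit('ch1', ['ch2', 'ch3', 'ch4', 'ch5', None])
assert out == ['ch1', 'ch2', 'ch3', 'ch4', 'ch5']
```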
- the first channel audio signal (ch. 1) output from the DSP 22A is input to the host device 1 as the first channel audio signal, the second channel audio signal (ch. 2) output from the DSP 22B is input as the second channel audio signal, the third channel audio signal (ch. 3) output from the DSP 22C is input as the third channel audio signal, the fourth channel audio signal (ch. 4) output from the DSP 22D is input as the fourth channel audio signal, and the fifth channel audio signal (ch. 5) output from the DSP 22E of the microphone unit 2E is input to the host device 1 as the fifth channel audio signal.
- the DSP 22E of the microphone unit 2E processes the sound collected by the microphone 25E of its own device with the audio signal processing unit 24E, divides the processed sound into unit bit data (signal SDO4), and outputs the data to the physical circuit 501E.
- the physical circuit 501E outputs the signal SDO4 to the first conversion unit 502E as parallel data having the first channel signal.
- the first conversion unit 502E converts the parallel data into serial data.
- the serial data includes, in order from the word clock, the first unit bit data (signal SDO4 in the figure), bit data 0 (indicated by a hyphen "-" in the figure), and an error correction code CRC.
- Such serial data is output from the first input / output terminal 33E and input to the microphone unit 2D.
- the second conversion unit 503D of the microphone unit 2D converts the input serial data into parallel data and outputs the parallel data to the physical circuit 501D. The physical circuit 501D then outputs to the first conversion unit 502D the signal SDO4 included in the parallel data as the second channel signal, and the signal SDO3 input from the DSP 22D as the first channel signal. As shown in the third column from the top in FIG. 9, the first conversion unit 502D converts these into serial data in which the signal SDO3 is inserted as the first unit bit data following the word clock and the signal SDO4 as the second unit bit data. In addition, the first conversion unit 502D newly generates an error correction code CRC for this arrangement (the signal SDO3 first and the signal SDO4 second), appends it to the serial data, and outputs the result.
- Such serial data is output from the first input/output terminal 33D and input to the microphone unit 2C, where similar processing is performed. As a result, the microphone unit 2C outputs serial data in which the signal SDO2 is inserted as the first unit bit data following the word clock, the signal SDO3 as the second unit bit data, and the signal SDO4 as the third unit bit data, with a new error correction code CRC.
- the serial data is input to the microphone unit 2B, where similar processing is performed. As a result, the microphone unit 2B outputs serial data in which the signal SDO1 is inserted as the first unit bit data following the word clock, the signal SDO2 as the second unit bit data, the signal SDO3 as the third unit bit data, and the signal SDO4 as the fourth unit bit data, with a new error correction code CRC.
- the serial data is input to the microphone unit 2A, where similar processing is performed. As a result, the microphone unit 2A outputs serial data in which the signal SDO0 is inserted as the first unit bit data following the word clock, the signal SDO1 as the second unit bit data, the signal SDO2 as the third unit bit data, the signal SDO3 as the fourth unit bit data, and the signal SDO4 as the fifth unit bit data, with a new error correction code CRC. Then, the serial data is input to the host device 1.
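- the relaying described above, in which each unit inserts its own signal as the first unit bit data and shifts the received data back by one position, can be sketched as follows (Python; frames are modeled as five-slot lists, with None standing for bit data 0, shown as "-" in the figure):

```python
def relay(frame, own_signal):
    """Model of the relaying in FIG. 9: insert the local DSP output as the
    first unit bit data after the word clock and shift the received unit
    bit data back by one position. A real frame would also carry a newly
    generated CRC, omitted in this sketch."""
    return [own_signal] + frame[:-1]

frame = [None] * 5                  # the chain starts at microphone unit 2E
for signal in ['SDO4', 'SDO3', 'SDO2', 'SDO1', 'SDO0']:   # units 2E → 2A
    frame = relay(frame, signal)

# The frame reaching the host device carries SDO0..SDO4 in channel order.
assert frame == ['SDO0', 'SDO1', 'SDO2', 'SDO3', 'SDO4']
```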
- in this way, the first channel audio signal (ch. 1) output from the DSP 22A is input to the host device 1 as the first channel audio signal, the second channel audio signal (ch. 2) output from the DSP 22B as the second channel audio signal, the third channel audio signal (ch. 3) output from the DSP 22C as the third channel audio signal, the fourth channel audio signal (ch. 4) output from the DSP 22D as the fourth channel audio signal, and the fifth channel audio signal (ch. 5) output from the DSP 22E of the microphone unit 2E as the fifth channel audio signal.
- accordingly, each microphone unit divides the audio signal processed by its DSP into fixed unit bit data and transmits it to the microphone unit connected upstream, so that the microphone units cooperate to create the serial data for transmission.
- FIG. 10 is a diagram showing the signal flow when the host device 1 transmits an individual audio signal processing program to each microphone unit. In this case, processing reverse to the signal flow shown in FIG. 9 is performed.
- the host device 1 reads the audio signal processing programs to be transmitted to the microphone units from the nonvolatile memory 14, divides them into fixed unit bit data, and creates serial data in which the unit bit data are arranged in the order in which the microphone units receive them.
- the serial data includes, following the word clock, the signal SDO0 as the first unit bit data, the signal SDO1 as the second unit bit data, the signal SDO2 as the third unit bit data, the signal SDO3 as the fourth unit bit data, and the signal SDO4 as the fifth unit bit data, with an error correction code CRC appended.
- the serial data is first input to the microphone unit 2A.
- in the microphone unit 2A, the signal SDO0, which is the first unit bit data, is extracted from the serial data, and the extracted unit bit data is input to the DSP 22A and temporarily stored in the volatile memory 23A.
- the microphone unit 2A then outputs serial data in which the signal SDO1 is the first unit bit data, the signal SDO2 the second unit bit data, the signal SDO3 the third unit bit data, and the signal SDO4 the fourth unit bit data, with a new error correction code CRC appended. The fifth unit bit data is set to 0 (the hyphen "-" in the figure).
- the serial data is input to the microphone unit 2B, and the signal SDO1, which is the first unit bit data, is input to the DSP 22B.
- the microphone unit 2B outputs serial data in which the signal SDO2 is the first unit bit data following the word clock, the signal SDO3 the second unit bit data, and the signal SDO4 the third unit bit data, with a new error correction code CRC appended.
- the serial data is input to the microphone unit 2C, and the signal SDO2, which is the first unit bit data, is input to the DSP 22C. The microphone unit 2C outputs serial data in which the signal SDO3 is the first unit bit data following the word clock and the signal SDO4 the second unit bit data, with a new error correction code CRC appended.
- the serial data is input to the microphone unit 2D, and the signal SDO3, which is the first unit bit data, is input to the DSP 22D. The microphone unit 2D outputs serial data in which the signal SDO4 is the first unit bit data following the word clock, with a new error correction code CRC appended.
- the serial data is input to the microphone unit 2E, and the signal SDO4, which is the first unit bit data, is input to the DSP 22E.
- in this way, the first unit bit data (signal SDO0) is always transmitted to the microphone unit connected first from the host device 1, the second unit bit data (signal SDO1) to the second connected microphone unit, the third unit bit data (signal SDO2) to the third connected microphone unit, the fourth unit bit data (signal SDO3) to the fourth connected microphone unit, and the fifth unit bit data (signal SDO4) to the fifth connected microphone unit.
- Each microphone unit performs processing according to the audio signal processing program obtained by combining the unit bit data. In this case as well, the microphone units connected in series via the cables are detachable, and there is no need to consider the connection order. For example, when an echo canceller program is transmitted to the microphone unit 2A closest to the host device 1 and a noise canceller program is transmitted to the microphone unit 2E farthest from the host device 1, if the connection positions of the microphone unit 2A and the microphone unit 2E are exchanged, the echo canceller program is transmitted to the microphone unit 2E and the noise canceller program is transmitted to the microphone unit 2A. Thus, even if the connection order is changed, the echo canceller program is always executed on the microphone unit closest to the host device 1, and the noise canceller program is executed on the microphone unit farthest from the host device 1.
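- the extraction on the downstream path (FIG. 10) can be sketched as follows (Python; the same five-slot list model, with None standing for bit data 0). Each unit keeps the first unit bit data and forwards the rest, so every unit receives its own program data regardless of which physical unit sits at which position:

```python
def receive_program_chunk(frame):
    """Model of the extraction in FIG. 10: the unit keeps the first unit
    bit data for its own DSP, shifts the remaining data forward by one
    position while padding the tail with bit data 0 ('-' in the figure),
    and forwards the result (a real frame would carry a new CRC)."""
    own = frame[0]
    return own, frame[1:] + [None]

frame = ['SDO0', 'SDO1', 'SDO2', 'SDO3', 'SDO4']   # created by the host
received = []
for _ in range(5):                                  # units 2A → 2E
    chunk, frame = receive_program_chunk(frame)
    received.append(chunk)

# Each unit obtained exactly its own program data, in connection order.
assert received == ['SDO0', 'SDO1', 'SDO2', 'SDO3', 'SDO4']
```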
- the operation at the time of starting the host device 1 and each microphone unit will be described.
- the CPU 12 of the host device 1 reads a predetermined audio signal processing program from the nonvolatile memory 14 (S12) and transmits it to each microphone unit via the communication I/F 11 (S13).
- at this time, the CPU 12 of the host device 1 divides the audio signal processing program into fixed unit bit data as described above, creates serial data in which the unit bit data are arranged in the order in which the microphone units receive them, and transmits the serial data to the microphone units.
- Each microphone unit receives the audio signal processing program transmitted from the host device 1 (S21) and temporarily stores it (S22). At this time, each microphone unit extracts the unit bit data addressed to itself from the serial data and temporarily stores the extracted unit bit data. The microphone unit combines the temporarily stored unit bit data and performs processing according to the combined audio signal processing program (S23). Each microphone unit then transmits a digital audio signal of the collected sound to the host device 1 (S24). At this time, the digital audio signal processed by the audio signal processing unit of each microphone unit is divided into fixed unit bit data and transmitted to the microphone unit connected upstream, so that the microphone units cooperate to create the serial data for transmission and transmit it to the host device 1.
- in the example described above, serial data is converted in the minimum bit unit, but the conversion is not limited to the minimum bit unit; for example, conversion may be performed for each word.
- even for an unused channel, the bit data of that channel is not deleted but is included in the serial data and transmitted. For example, when the number of microphone units is four, the signal SDO4 always has bit data 0, but the signal SDO4 is transmitted as a bit-data-0 signal without being deleted. Therefore, there is no need to manage which device corresponds to which channel, the connection relationship, or address information such as which data is transmitted to which device, and even if the connection order of the microphone units is switched, the appropriate channel signal is output from each microphone unit.
- the detection means for detecting the activation state of a microphone unit can detect the activation state by detecting the connection of a cable, or may detect the microphone units connected when the power is turned on. In addition, when a new microphone unit is added during use, its activation state can be detected by detecting the connection of the cable. In this case, the programs of the connected microphone units can be deleted, and the audio signal processing program can be transmitted again from the host device to all the microphone units.
- FIG. 12 is a configuration diagram of a signal processing system according to an application example.
- the signal processing system according to the application example includes slave units 10A to 10E connected in series and a master unit (host device) 1 connected to the slave unit 10A.
- FIG. 13 is an external perspective view of the slave unit 10A.
- FIG. 14 is a block diagram showing the configuration of the slave unit 10A.
- the host device 1 is connected to the slave unit 10A via the cable 331.
- the slave unit 10A and the slave unit 10B are connected via a cable 341.
- the slave unit 10B and the slave unit 10C are connected via a cable 351.
- the slave unit 10C and the slave unit 10D are connected via a cable 361.
- the slave unit 10D and the slave unit 10E are connected via a cable 371.
- the slave units 10A to 10E have the same hardware configuration. Therefore, in the following description of the configuration of the slave units, the slave unit 10A will be described as a representative.
- Slave unit 10A has the same configuration and function as the microphone unit 2A described above, except that slave unit 10A includes a plurality of microphones MICa to MICm instead of the microphone 25A.
- the audio signal processing unit 24A of the DSP 22A includes configurations of an amplifier 11a to an amplifier 11m, a coefficient determination unit 120, a synthesis unit 130, and an AGC 140.
- the number of microphones may be two or more and can be set as appropriate according to the sound collection specifications of one slave unit; the number of amplifiers is set to match the number of microphones. For example, a small number such as three microphones is sufficient to collect sound over the circumferential direction.
- Each microphone MICa to microphone MICm has a different sound collection direction. That is, each of the microphones MICa to MICm has a predetermined sound collection directivity and collects sound with a specific direction as a main sound collection direction, and generates a sound collection signal Sma to a sound collection signal Smm. Specifically, for example, the microphone MICa collects sound with the first specific direction as the main sound collection direction, and generates a sound collection signal Sma. Similarly, the microphone MICb collects sound with the second specific direction as the main sound collection direction, and generates a sound collection signal Smb.
- that is, the microphones MICa to MICm are installed in the slave unit 10A so that their sound collection directivities, that is, their main sound collection directions, are different from one another.
- the sound collection signals Sma to Smm output from the microphones MICa to MICm are input to the amplifiers 11a to 11m, respectively.
- the collected sound signal Sma output from the microphone MICa is input to the amplifier 11a
- the collected sound signal Smb output from the microphone MICb is input to the amplifier 11b.
- the collected sound signal Smm output from the microphone MICm is input to the amplifier 11m.
- each of the collected sound signals Sma to Smm is also input to the coefficient determination unit 120. The collected sound signals Sma to Smm are converted from analog signals to digital signals before being input to the amplifiers 11a to 11m and the coefficient determination unit 120.
- the coefficient determination unit 120 detects the signal power of each of the collected sound signals Sma to Smm, compares them, and detects the collected sound signal having the maximum power. The coefficient determination unit 120 sets the gain coefficient for the collected sound signal detected as having the maximum power to "1", and sets the gain coefficients for the other collected sound signals to "0".
- the coefficient determination unit 120 also detects the signal level of the collected sound signal detected as having the maximum power, and generates level information IFO10A.
- the coefficient determination unit 120 outputs the level information IFO10A to the FPGA 51A.
- the amplifiers 11a to 11m are amplifiers capable of gain adjustment.
- the amplifiers 11a to 11m amplify the collected sound signals Sma to Smm with the gain coefficients given from the coefficient determining unit 120, and generate amplified sound collected signals Smga to Smgm, respectively.
- the amplifier 11a amplifies the collected sound signal Sma with the gain coefficient from the coefficient determination unit 120, and outputs the amplified collected sound signal Smga.
- the amplifier 11b amplifies the sound collection signal Smb with the gain coefficient from the coefficient determination unit 120, and outputs the amplified sound collection signal Smgb.
- the amplifier 11m amplifies the sound collection signal Smm with the gain coefficient from the coefficient determination unit 120, and outputs the amplified sound collection signal Smgm.
- an amplifier given the gain coefficient "0" outputs an amplified collected sound signal having a signal level of "0".
- the amplified sound pickup signals Smga to Smgm are input to the synthesis unit 130.
- the synthesis unit 130 is an adder, and generates the slave unit audio signal Sm10A by adding the amplified collected sound signals Smga to Smgm.
- of the amplified collected sound signals Smga to Smgm, only the one derived from the maximum-power signal among the collected sound signals Sma to Smm has a signal level corresponding to the collected sound; the others have a signal level of "0". Therefore, the slave unit audio signal Sm10A obtained by adding the amplified collected sound signals Smga to Smgm is the collected sound signal detected as having the maximum power itself.
- if the collected sound signal with the maximum power changes, that is, if the sound source moves, the collected sound signal that becomes the slave unit audio signal Sm10A also changes in accordance with this change and movement.
- thereby, the sound source can be tracked based on the collected sound signal of each microphone, and the slave unit audio signal Sm10A that collects the sound from the sound source most efficiently can be output.
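- the first-stage tracking performed by the coefficient determination unit 120 and the synthesis unit 130 can be sketched as follows (Python; signals are modeled as sample lists and power as the mean square, which are illustrative choices not fixed by the patent):

```python
def track_sound_source(picked):
    """Model of units 120 and 130: give gain 1 to the collected sound
    signal with the maximum power and gain 0 to all others, so the sum
    produced by the adder equals the maximum-power signal itself."""
    powers = [sum(s * s for s in sig) / len(sig) for sig in picked]
    imax = powers.index(max(powers))
    gains = [1.0 if i == imax else 0.0 for i in range(len(picked))]
    mixed = [sum(g * sig[n] for g, sig in zip(gains, picked))
             for n in range(len(picked[0]))]
    level_info = powers[imax]       # reported upstream as level information
    return mixed, level_info

mics = [[0.1, -0.1, 0.1], [0.8, -0.7, 0.9], [0.2, 0.1, -0.2]]
mixed, level = track_sound_source(mics)
assert mixed == mics[1]             # the loudest microphone is selected
```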
- the AGC 140 is a so-called auto-gain-control amplifier, which amplifies the slave unit audio signal Sm10A with a predetermined gain and outputs it to the FPGA 51A.
- the gain of the AGC 140 is set as appropriate according to the communication specifications. Specifically, for example, the transmission loss is estimated in advance, and the gain of the AGC 140 is set so as to compensate for that transmission loss.
- by performing this gain control on the slave unit audio signal Sm10A, the slave unit audio signal Sm10A can be transmitted accurately and reliably from the slave unit 10A to the host device 1, and the host device 1 can receive and demodulate it accurately and reliably.
- the slave unit audio signal Sm10A after AGC and the level information IFO10A are input to the FPGA 51A. The FPGA 51A generates slave unit data D10A from the post-AGC slave unit audio signal Sm10A and the level information IFO10A, and transmits it to the host device 1. The level information IFO10A is synchronized with the slave unit audio signal Sm10A assigned to the same slave unit data.
- FIG. 16 is a diagram showing an example of the data format of the slave unit data transmitted from a slave unit to the host device.
- the slave unit data D10A is data in which a header DH that identifies the transmitting slave unit, the slave unit audio signal Sm10A, and the level information IFO10A are each assigned a predetermined number of bits. The slave unit audio signal Sm10A is assigned a predetermined number of bits after the header DH, and the level information IFO10A is assigned a predetermined number of bits after the bit string of the slave unit audio signal Sm10A.
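- the data format of FIG. 16 can be sketched as follows (Python; the field widths of 1, 4, and 2 bytes are illustrative assumptions, since the patent fixes only the field order):

```python
def pack_slave_data(header, audio, level):
    """Model of the slave unit data format: a header DH identifying the
    transmitting slave unit, followed by a fixed number of bits for the
    audio signal and then for the level information. The byte widths
    used here are assumptions for illustration."""
    return (header.to_bytes(1, 'big')
            + audio.to_bytes(4, 'big')
            + level.to_bytes(2, 'big'))

def unpack_slave_data(data):
    """Split a packet back into (header, audio, level)."""
    return (int.from_bytes(data[0:1], 'big'),
            int.from_bytes(data[1:5], 'big'),
            int.from_bytes(data[5:7], 'big'))

packet = pack_slave_data(header=0x0A, audio=123456, level=789)
assert unpack_slave_data(packet) == (0x0A, 123456, 789)
```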
- similarly, the other slave units 10B to 10E generate slave unit data D10B to D10E, each including the slave unit audio signal Sm10B to Sm10E and the level information IFO10B to IFO10E, and output them to the host device 1.
- the slave unit data D10B to D10E are each divided into fixed unit bit data and transmitted to the slave unit connected upstream, so that the slave units cooperate to create the serial data.
- FIG. 17 is a block diagram showing various configurations realized by the CPU 12 of the host device 1 executing a predetermined audio signal processing program.
- the CPU 12 of the host device 1 includes a plurality of amplifiers 21a to 21e, a coefficient determination unit 220, and a synthesis unit 230.
- the communication I/F 11 receives the slave unit data D10A to D10E from the slave units 10A to 10E.
- the communication I/F 11 demodulates the slave unit data D10A to D10E, and acquires the slave unit audio signals Sm10A to Sm10E and the respective level information IFO10A to IFO10E.
- the communication I/F 11 outputs the slave unit audio signals Sm10A to Sm10E to the amplifiers 21a to 21e, respectively. Specifically, the communication I/F 11 outputs the slave unit audio signal Sm10A to the amplifier 21a and the slave unit audio signal Sm10B to the amplifier 21b. Similarly, the communication I/F 11 outputs the slave unit audio signal Sm10E to the amplifier 21e.
- the communication I / F 11 outputs the level information IFO10A to the level information IFO10E to the coefficient determination unit 220.
- the coefficient determination unit 220 compares the level information IFO10A to IFO10E and detects the maximum level information.
- the coefficient determination unit 220 sets the gain coefficient for the slave unit audio signal corresponding to the detected maximum level information to "1", and sets the gain coefficients for the other slave unit audio signals to "0".
- the amplifiers 21a to 21e are amplifiers capable of gain adjustment.
- the amplifiers 21a to 21e amplify the slave unit audio signals Sm10A to Sm10E with the gain coefficients given from the coefficient determination unit 220, and generate the amplified audio signals Smg10A to Smg10E, respectively.
- specifically, the amplifier 21a amplifies the slave unit audio signal Sm10A with the gain coefficient from the coefficient determination unit 220 and outputs the amplified audio signal Smg10A, and the amplifier 21b amplifies the slave unit audio signal Sm10B and outputs the amplified audio signal Smg10B. Similarly, the amplifier 21e amplifies the slave unit audio signal Sm10E and outputs the amplified audio signal Smg10E.
- an amplifier given the gain coefficient "1" outputs the slave unit audio signal while maintaining its signal level; in this case, the amplified audio signal is the slave unit audio signal as it is. An amplifier given the gain coefficient "0" outputs an amplified audio signal having a signal level of "0".
- the amplified audio signal Smg10A to the amplified audio signal Smg10E are input to the synthesis unit 230.
- the synthesis unit 230 is an adder, and generates the tracking audio signal by adding the amplified audio signals Smg10A to Smg10E.
- of the amplified audio signals Smg10A to Smg10E, only the one derived from the maximum-level signal among the slave unit audio signals Sm10A to Sm10E has a signal level corresponding to the collected sound; the others have a signal level of "0". Therefore, the tracking audio signal obtained by adding the amplified audio signals Smg10A to Smg10E is the slave unit audio signal detected as having the maximum level itself.
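- the second-stage tracking in the host device 1 can be sketched as follows (Python; a minimal model of the coefficient determination unit 220 and the synthesis unit 230, with signals as sample lists):

```python
def host_select(slave_signals, levels):
    """Model of units 220 and 230 in the host device: compare the level
    information from the slave units, give gain 1 to the slave unit audio
    signal with the maximum level and gain 0 to the others, and add the
    amplified signals to obtain the tracking audio signal."""
    imax = levels.index(max(levels))
    gains = [1.0 if i == imax else 0.0 for i in range(len(levels))]
    n = len(slave_signals[0])
    return [sum(g * sig[k] for g, sig in zip(gains, slave_signals))
            for k in range(n)]

signals = [[0.1, 0.2], [0.9, -0.8], [0.3, 0.3]]   # from three slave units
tracked = host_select(signals, levels=[0.02, 0.65, 0.09])
assert tracked == signals[1]        # the loudest slave unit is selected
```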
- in this way, the slave units 10A to 10E perform first-stage sound source tracking based on the collected sound signals of their microphones, and the host device 1 performs second-stage sound source tracking using the slave unit audio signals of the slave units 10A to 10E.
- two-stage sound source tracking can thus be realized by the plurality of microphones MICa to MICm of the plurality of slave units 10A to 10E. Therefore, by appropriately setting the number and arrangement pattern of the slave units 10A to 10E, sound source tracking can be performed reliably without being affected by the size of the sound collection range or the position of a sound source such as a speaker. For this reason, the sound from the sound source can be collected with high quality regardless of the position of the sound source.
- furthermore, the number of audio signals transmitted from each of the slave units 10A to 10E is one, regardless of the number of microphones attached to the slave unit. Therefore, the amount of communication data can be reduced compared with the case where the collected sound signals of all the microphones are transmitted to the host device. For example, when the number of microphones attached to each slave unit is m, the amount of audio data transmitted from each slave unit to the host device is 1/m of that when all the collected sound signals are transmitted to the host device.
- FIG. 18 is a flowchart of the sound source tracking process of the slave unit according to the embodiment of the present invention.
- the processing flow of one slave unit will be described, but a plurality of slave units execute the same flow of processing.
- since the details of the processing have been described above, detailed description is omitted below.
- the slave unit collects sound with each microphone and generates a collected sound signal (S101).
- the slave unit detects the level of the collected sound signal of each microphone (S102).
- the slave unit detects the maximum-power collected sound signal and generates level information for it (S103).
- the slave unit determines a gain coefficient for each collected sound signal (S104). Specifically, the slave unit sets the gain of the collected sound signal with the maximum power to “1”, and sets the gains of the other collected sound signals to “0”.
- the slave unit amplifies each sound collection signal with the determined gain coefficient (S105).
- the slave unit synthesizes the amplified sound pickup signals to generate a slave unit voice signal (S106).
- the slave unit performs AGC processing on the slave unit voice signal (S107), generates slave unit data including the slave unit voice signal and level information after the AGC process, and outputs the slave unit data to the host device (S108).
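Steps S101 to S106 above can be sketched as follows. This is an illustration that assumes simple block power as the level measure; the patent does not specify the exact power computation, and the function name `slave_unit_tracking` is invented:

```python
def slave_unit_tracking(mic_signals):
    """First-stage sound source tracking in one slave unit (sketch).

    mic_signals: list of per-microphone sample blocks (lists of floats).
    Returns (slave_unit_signal, level): the maximum-power microphone's block
    passed through with gain "1" (all others get gain "0"), plus its level.
    """
    # S102/S103: detect each microphone's power and find the maximum
    powers = [sum(s * s for s in sig) for sig in mic_signals]
    max_idx = max(range(len(mic_signals)), key=lambda i: powers[i])

    # S104: gain "1" for the maximum-power signal, "0" for the rest
    gains = [1.0 if i == max_idx else 0.0 for i in range(len(mic_signals))]

    # S105/S106: amplify and synthesize -- the sum equals the max-power signal
    n = len(mic_signals[0])
    mixed = [sum(g * sig[k] for g, sig in zip(gains, mic_signals))
             for k in range(n)]
    return mixed, powers[max_idx]
```

The host-side second stage (S203 to S205) applies the same select-amplify-sum pattern to the slave-unit audio signals instead of the microphone signals.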
- FIG. 19 is a flowchart of the sound source tracking process of the host device according to the embodiment of the present invention. In addition, since the details of the processing are described above, detailed description thereof will be omitted below.
- the host device 1 receives slave unit data from each slave unit and obtains the slave-unit audio signals and level information (S201). The host device 1 compares the level information from each slave unit and detects the maximum-level slave-unit audio signal (S202).
- the host device 1 determines a gain coefficient for each slave-unit audio signal (S203). Specifically, the host device 1 sets the gain of the maximum-level slave-unit audio signal to “1”, and sets the gains of the other slave-unit audio signals to “0”.
- the host device 1 amplifies each slave unit audio signal with the determined gain coefficient (S204).
- the host device 1 synthesizes the amplified slave-unit audio signals and generates a tracking audio signal (S205).
- at the timing when the maximum-power collected sound signal switches, the gain coefficient of the original maximum-power collected sound signal is changed from “1” to “0”, and the gain coefficient of the new maximum-power collected sound signal is changed from “0” to “1”.
- these gain coefficients may instead be changed in finer steps. For example, the gain coefficient of the original maximum-power collected sound signal may be gradually decreased from “1” to “0” while the gain coefficient of the new maximum-power collected sound signal is gradually increased from “0” to “1”. That is, a cross-fade may be performed from the original maximum-power collected sound signal to the new one. During the cross-fade, the sum of these gain coefficients is kept at “1”.
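The cross-fade rule above (gains always summing to “1”) can be sketched with a hypothetical linear ramp; the patent does not fix the ramp shape or step count:

```python
def crossfade_gains(step: int, total_steps: int):
    """Gain pair (old_signal_gain, new_signal_gain) at a given cross-fade
    step. The two gains always sum to 1, as required above."""
    g_new = min(max(step / total_steps, 0.0), 1.0)  # linear ramp 0 -> 1
    return 1.0 - g_new, g_new
```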
- Such a cross-fade process may be applied not only to the synthesis of the collected sound signal performed in the slave unit but also to the synthesis of the slave unit audio signal performed in the host device 1.
- the AGC may be provided in the host device 1.
- AGC may be performed by the communication I / F 11 of the host device 1.
- the host device 1 can emit a test sound wave from the speaker 102 toward each slave unit and cause each slave unit to determine the level of the test sound wave.
- the host device 1 detects the activation state of each slave unit (S51).
- the host device 1 reads the level determination program from the nonvolatile memory 14 (S52) and transmits it to each slave unit via the communication I/F 11 (S53).
- the CPU 12 of the host device 1 divides the level determination program into fixed unit bit data, creates serial data in which the unit bit data are arranged in the order in which the slave units receive them, and transmits the serial data to the slave units.
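The serialization scheme above can be sketched as follows. This assumes a hypothetical frame layout in which each serial frame carries one fixed-size unit of data per slave unit, ordered by chain position; the actual frame format is not specified in the text, and both function names are invented:

```python
def build_serial_frames(per_unit_payloads, unit_size):
    """Divide each slave unit's payload into fixed-size units and arrange
    them so every frame holds one unit per slave unit in chain order."""
    n_frames = max((len(p) + unit_size - 1) // unit_size
                   for p in per_unit_payloads)
    # zero-pad every payload to the common frame count
    padded = [p.ljust(n_frames * unit_size, b"\x00")
              for p in per_unit_payloads]
    return [b"".join(p[f * unit_size:(f + 1) * unit_size] for p in padded)
            for f in range(n_frames)]

def extract_unit_data(frames, chain_index, unit_size):
    """What the slave unit at position `chain_index` extracts and recombines
    from the serial data (cf. S71-S73)."""
    return b"".join(fr[chain_index * unit_size:(chain_index + 1) * unit_size]
                    for fr in frames)
```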
- Each slave unit receives the level determination program transmitted from the host device 1 (S71).
- the level determination program is temporarily stored in the volatile memory 23A (S72).
- each slave unit extracts from the serial data the unit bit data addressed to it and temporarily stores the extracted unit bit data.
- Each slave unit combines the unit bit data temporarily stored and executes the combined level determination program (S73).
- the audio signal processing unit 24 realizes the configuration shown in FIG.
- since the level determination program only performs level determination and does not require generation or transmission of the slave-unit audio signal Sm10A, the configuration of the amplifiers 11a to 11m, the coefficient determination unit 120, the synthesis unit 130, and the AGC 140 is unnecessary.
- the coefficient determination unit 220 of each slave unit functions as a sound level determination unit, and determines the level of the test sound wave input to the plurality of microphones MICa to MICm (S74).
- the coefficient determination unit 220 transmits level information (level data) as a determination result to the host device 1 (S75).
- the level data may be transmitted for each of the plurality of microphones MICa to MICm, or only the level data indicating the maximum level may be transmitted for each slave unit.
- the level data is divided into fixed unit bit data and transmitted to the slave unit connected upstream, whereby the slave units cooperate to create serial data for level determination.
- the host device 1 receives level data from each slave unit (S55).
- the host device 1 selects, based on the received level data, the audio signal processing program to be transmitted to each slave unit, and reads these programs from the nonvolatile memory 14 (S56). For example, a slave unit with a high test sound wave level is judged to have a high echo level, and the echo canceller program is selected for it; a slave unit with a low test sound wave level is judged to have a low echo level, and the noise canceller program is selected. The host device 1 then transmits the read audio signal processing programs to the slave units (S57). The subsequent processing is the same as in the flowchart shown in FIG.
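The selection in S56 reduces to a threshold test on the measured test-tone level. A minimal sketch; the threshold value and function name are assumptions, since the patent gives no numeric criterion:

```python
# Hypothetical normalized threshold; the patent specifies no numeric value.
ECHO_LEVEL_THRESHOLD = 0.5

def select_program(test_tone_level: float) -> str:
    """Choose which audio processing program the host sends to a slave unit
    based on that unit's measured test sound wave level (cf. S56)."""
    if test_tone_level >= ECHO_LEVEL_THRESHOLD:
        return "echo_canceller"   # strong speaker coupling -> high echo level
    return "noise_canceller"      # weak coupling -> address noise instead
```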
- the host device 1 may change the number of filter coefficients in each slave unit's echo canceller program based on the received level data, and may define, for each slave unit, a change parameter for changing the number of filter coefficients. For example, the number of taps is increased for a slave unit with a high test sound wave level and decreased for a slave unit with a low test sound wave level. In this case, the host device 1 divides the change parameter into fixed unit bit data, creates change-parameter serial data in which the unit bit data are arranged in the order in which the slave units receive them, and transmits it to each slave unit.
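One possible mapping from test-tone level to tap count, as the paragraph above suggests; the linear scaling law and the bounds are assumptions for illustration, not from the patent:

```python
def tap_count_for_level(level: float, min_taps: int = 128,
                        max_taps: int = 1024) -> int:
    """Map a slave unit's normalized test-tone level (0..1) to an echo
    canceller FIR tap count: higher level -> more taps."""
    level = min(max(level, 0.0), 1.0)  # clamp to the normalized range
    return int(min_taps + level * (max_taps - min_taps))
```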
- the echo canceller can be provided for each of the plurality of microphones MICa to MICm in each slave unit.
- the coefficient determination unit 220 of each slave unit transmits level data for each of the plurality of microphones MICa to MICm.
- the level information IFO10A to level information IFO10E may include microphone identification information in each slave unit.
- when the slave unit detects the maximum-power sound pickup signal and generates its level information (S801), the slave unit includes the identification information of the microphone that detected the maximum power in the level information and transmits it (S802).
- the host device 1 receives the level information from each slave unit (S901), selects the level information with the maximum level, and identifies the echo canceller in use based on the microphone identification information included in the selected level information (S902).
- the host device 1 requests the slave unit using the identified echo canceller to transmit the signals related to that echo canceller (S903).
- when the slave unit receives the transmission request (S803), it transmits to the host device 1 the pseudo-regression sound signal of the designated echo canceller, the collected sound signal NE1 (the collected sound signal before the echo component is removed by the echo canceller), and the collected sound signal NE1′ (the collected sound signal after the echo component is removed by the echo canceller) (S804).
- the host device 1 receives these signals (S904) and inputs them to the echo suppressor (S905). Since a coefficient according to the learning progress of the identified echo canceller is thereby set in the echo generation unit 125 of the echo suppressor, an appropriate residual echo component can be generated.
- the progress degree calculation unit 124 may be provided on the audio signal processing unit 24A side.
- in S903 of FIG., the host device 1 requests the slave unit using the identified echo canceller to transmit a coefficient that changes in accordance with the learning progress.
- the slave unit reads the coefficient calculated by the progress degree calculation unit 124 and transmits the coefficient to the host device 1.
- the echo generator 125 generates a residual echo component according to the received coefficient and the pseudo regression sound signal.
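The residual-echo generation described above can be sketched as follows; the coefficient law (linear in the learning progress) is an assumption for illustration, as the patent only states that the coefficient varies with the learning progress:

```python
def residual_echo(pseudo_return_signal, learning_progress):
    """Scale the pseudo-regression (echo-replica) signal by a coefficient
    that shrinks as the echo canceller's adaptive learning progresses.

    learning_progress: 0.0 (untrained) .. 1.0 (fully converged).
    """
    coeff = max(0.0, 1.0 - learning_progress)  # assumed linear law
    return [coeff * x for x in pseudo_return_signal]
```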
- FIG. 23 is a diagram showing a modified example related to the arrangement of the host device and the slave unit.
- FIG. 23A shows an example in which, although the connection mode is the same as that shown in FIG., the slave unit 10C is the farthest from the host device 1 and the slave unit 10E is the closest to the host device 1. That is, the cable 361 connecting the slave unit 10C and the slave unit 10D is bent so that the slave unit 10D and the slave unit 10E come closer to the host device 1.
- the slave unit 10C is connected to the host device 1 via the cable 331.
- the slave unit 10C branches the data transmitted from the host device 1 and transmits it to the slave unit 10B and the slave unit 10D. Further, the slave unit 10C transmits the data from the slave unit 10B, the data from the slave unit 10D, and its own data together to the host device 1.
- the host device is connected to any one of the plurality of slave units connected in series.
- the present application is based on a Japanese patent application filed on November 12, 2012 (Japanese Patent Application No. 2012-248158), a Japanese patent application filed on November 13, 2012 (Japanese Patent Application No. 2012-249607), and a Japanese patent application filed on November 13, 2012 (Japanese Patent Application No. 2012-249609), the contents of which are incorporated herein by reference.
- the terminal does not have a built-in operation program in advance; it receives the program from the host device, temporarily stores it in the temporary storage memory, and then operates. Therefore, a large number of programs need not be stored in advance on the microphone unit side.
- furthermore, no program rewriting process is needed for each microphone unit; a new function can be realized merely by changing the program stored in the nonvolatile memory on the host device side.
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Otolaryngology (AREA)
- Circuit For Audible Band Transducer (AREA)
- Cable Transmission Systems, Equalization Of Radio And Reduction Of Echo (AREA)
- Telephone Function (AREA)
- Obtaining Desirable Characteristics In Audible-Bandwidth Transducers (AREA)
Abstract
Description
In this way, by estimating the noise spectrum S(N'N(T)) based on the speech spectrum, noise components such as background noise can be estimated. Note that the estimation unit 247 performs the noise spectrum estimation process only when the level of the sound signal collected by the microphone 25A is low (a silent state).
2A, 2B, 2C, 2D, 2E…Microphone unit
11…Communication I/F
12…CPU
13…RAM
14…Nonvolatile memory
21A…Communication I/F
22A…DSP
23A…Volatile memory
24A…Audio signal processing unit
25A…Microphone
Claims (7)
- A signal processing system comprising: a plurality of microphone units connected in series; and a host device connected to one of the plurality of microphone units, wherein
each microphone unit comprises a microphone that collects sound, a temporary storage memory, and a processing unit that processes the sound collected by the microphone,
the host device comprises a nonvolatile memory storing an audio processing program for the microphone units,
the host device transmits the audio processing program read from the nonvolatile memory to each microphone unit,
each microphone unit temporarily stores the audio processing program in the temporary storage memory, and
the processing unit performs processing according to the audio processing program temporarily stored in the temporary storage memory and transmits the processed sound to the host device. - The signal processing system according to claim 1, wherein the host device divides the audio processing program into fixed unit bit data, creates serial data in which the unit bit data are arranged in the order in which the microphone units receive them, and transmits the serial data to each microphone unit,
each microphone unit extracts from the serial data the unit bit data it is to receive and temporarily stores the extracted unit bit data, and
the processing unit performs processing according to the audio processing program obtained by combining the unit bit data. - The signal processing system according to claim 1 or claim 2, wherein each microphone unit divides the processed sound into fixed unit bit data and transmits it to the microphone unit connected upstream, and the microphone units cooperate to create serial data for transmission and transmit it to the host device.
- The signal processing system according to any one of claims 1 to 3, wherein the microphone unit has a plurality of microphones with different sound collection directions and sound level determination means,
the host device has a speaker,
a test sound wave is emitted from the speaker toward each microphone unit, and
each microphone unit determines the level of the test sound wave input to the plurality of microphones, divides level data representing the determination result into fixed unit bit data, and transmits it to the microphone unit connected upstream, the microphone units cooperating to create serial data for level determination. - The signal processing system according to any one of claims 1 to 4, wherein the audio processing program comprises an echo cancel program for realizing an echo canceller whose filter coefficients are updated, the echo cancel program having a filter coefficient setting unit that determines the number of filter coefficients, and
the host device changes the number of filter coefficients of each microphone unit based on the level data received from the microphone units, defines for each microphone unit a change parameter for changing the number of filter coefficients, divides the change parameter into fixed unit bit data, creates change-parameter serial data in which the unit bit data are arranged in the order in which the microphone units receive them, and transmits the change-parameter serial data to each microphone unit. - The signal processing system according to claim 5, wherein the audio processing program is the echo cancel program or a noise cancel program that removes noise components, and
the host device determines, based on the level data, whether the program to be transmitted to each microphone unit is the echo cancel program or the noise cancel program. - A signal processing method for a signal processing apparatus comprising a plurality of microphone units connected in series and a host device connected to one of the plurality of microphone units, each microphone unit comprising a microphone that collects sound, a temporary storage memory, and a processing unit that processes the sound collected by the microphone, and the host device comprising a nonvolatile memory holding an audio processing program for the microphone units,
the signal processing method comprising:
upon detecting a startup state of the host device, reading the audio processing program from the nonvolatile memory and transmitting the audio processing program from the host device to each microphone unit;
temporarily storing the audio processing program in the temporary storage memory of each microphone unit; and
performing processing according to the audio processing program temporarily stored in the temporary storage memory, and transmitting the processed sound to the host device.
A signal processing method characterized by the above.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020157001712A KR101706133B1 (ko) | 2012-11-12 | 2013-11-12 | 신호 처리 시스템 및 신호 처리 방법 |
EP19177298.7A EP3557880B1 (en) | 2012-11-12 | 2013-11-12 | Signal processing system and signal processing method |
AU2013342412A AU2013342412B2 (en) | 2012-11-12 | 2013-11-12 | Signal processing system and signal processing method |
EP13853867.3A EP2882202B1 (en) | 2012-11-12 | 2013-11-12 | Sound signal processing host device, signal processing system, and signal processing method |
KR1020177002958A KR20170017000A (ko) | 2012-11-12 | 2013-11-12 | 신호 처리 시스템 및 신호 처리 방법 |
EP21185333.8A EP3917161B1 (en) | 2012-11-12 | 2013-11-12 | Signal processing system and signal processing method |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012-248158 | 2012-11-12 | ||
JP2012248158 | 2012-11-12 | ||
JP2012-249607 | 2012-11-13 | ||
JP2012249609 | 2012-11-13 | ||
JP2012-249609 | 2012-11-13 | ||
JP2012249607 | 2012-11-13 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014073704A1 true WO2014073704A1 (ja) | 2014-05-15 |
Family
ID=50681709
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/080587 WO2014073704A1 (ja) | 2012-11-12 | 2013-11-12 | 信号処理システムおよび信号処理方法 |
Country Status (8)
Country | Link |
---|---|
US (3) | US9497542B2 (ja) |
EP (3) | EP2882202B1 (ja) |
JP (5) | JP6090121B2 (ja) |
KR (2) | KR20170017000A (ja) |
CN (2) | CN107172538B (ja) |
AU (1) | AU2013342412B2 (ja) |
CA (1) | CA2832848A1 (ja) |
WO (1) | WO2014073704A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220201125A1 (en) * | 2017-09-29 | 2022-06-23 | Dolby Laboratories Licensing Corporation | Howl detection in conference systems |
Families Citing this family (38)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9699550B2 (en) | 2014-11-12 | 2017-07-04 | Qualcomm Incorporated | Reduced microphone power-up latency |
US9407989B1 (en) | 2015-06-30 | 2016-08-02 | Arthur Woodrow | Closed audio circuit |
JP6443554B2 (ja) * | 2015-08-24 | 2018-12-26 | ヤマハ株式会社 | 収音装置および収音方法 |
US10014137B2 (en) | 2015-10-03 | 2018-07-03 | At&T Intellectual Property I, L.P. | Acoustical electrical switch |
US9704489B2 (en) * | 2015-11-20 | 2017-07-11 | At&T Intellectual Property I, L.P. | Portable acoustical unit for voice recognition |
WO2017132958A1 (en) * | 2016-02-04 | 2017-08-10 | Zeng Xinxiao | Methods, systems, and media for voice communication |
DE102016113831A1 (de) * | 2016-07-27 | 2018-02-01 | Neutrik Ag | Verkabelungsanordnung |
US10387108B2 (en) * | 2016-09-12 | 2019-08-20 | Nureva, Inc. | Method, apparatus and computer-readable media utilizing positional information to derive AGC output parameters |
US10362412B2 (en) * | 2016-12-22 | 2019-07-23 | Oticon A/S | Hearing device comprising a dynamic compressive amplification system and a method of operating a hearing device |
CN106782584B (zh) * | 2016-12-28 | 2023-11-07 | 北京地平线信息技术有限公司 | 音频信号处理设备、方法和电子设备 |
KR101898798B1 (ko) * | 2017-01-10 | 2018-09-13 | 순천향대학교 산학협력단 | 다이버시티 기술을 적용한 주차보조용 초음파센서 시스템 |
CN106937009B (zh) * | 2017-01-18 | 2020-02-07 | 苏州科达科技股份有限公司 | 一种级联回声抵消系统及其控制方法及装置 |
JP7051876B6 (ja) * | 2017-01-27 | 2023-08-18 | シュアー アクイジッション ホールディングス インコーポレイテッド | アレイマイクロホンモジュール及びシステム |
EP3641141A4 (en) * | 2017-06-12 | 2021-02-17 | Audio-Technica Corporation | VOICE SIGNAL PROCESSING DEVICE, VOICE SIGNAL PROCESSING METHOD AND VOICE SIGNAL PROCESSING PROGRAM |
JP2019047148A (ja) * | 2017-08-29 | 2019-03-22 | 沖電気工業株式会社 | 多重化装置、多重化方法およびプログラム |
JP6983583B2 (ja) * | 2017-08-30 | 2021-12-17 | キヤノン株式会社 | 音響処理装置、音響処理システム、音響処理方法、及びプログラム |
CN107818793A (zh) * | 2017-11-07 | 2018-03-20 | 北京云知声信息技术有限公司 | 一种可减少无用语音识别的语音采集处理方法及装置 |
CN107750038B (zh) * | 2017-11-09 | 2020-11-10 | 广州视源电子科技股份有限公司 | 音量调节方法、装置、设备及存储介质 |
CN107898457B (zh) * | 2017-12-05 | 2020-09-22 | 江苏易格生物科技有限公司 | 一种团体无线脑电采集装置间时钟同步的方法 |
US11336999B2 (en) | 2018-03-29 | 2022-05-17 | Sony Corporation | Sound processing device, sound processing method, and program |
CN110611537A (zh) * | 2018-06-15 | 2019-12-24 | 杜旭昇 | 利用声波传送数据的广播系统 |
JP7158480B2 (ja) * | 2018-07-20 | 2022-10-21 | 株式会社ソニー・インタラクティブエンタテインメント | 音声信号処理システム、及び音声信号処理装置 |
CN111114475A (zh) * | 2018-10-30 | 2020-05-08 | 北京轩辕联科技有限公司 | 用于车辆的mic切换装置及方法 |
JP7373947B2 (ja) * | 2018-12-12 | 2023-11-06 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | 音響エコーキャンセル装置、音響エコーキャンセル方法及び音響エコーキャンセルプログラム |
CN109803059A (zh) * | 2018-12-17 | 2019-05-24 | 百度在线网络技术(北京)有限公司 | 音频处理方法和装置 |
KR102602942B1 (ko) * | 2019-01-07 | 2023-11-16 | 삼성전자 주식회사 | 오디오 정보 처리 장치의 위치에 기반하여 오디오 처리 알고리즘을 결정하는 전자 장치 및 방법 |
US11190871B2 (en) | 2019-01-29 | 2021-11-30 | Nureva, Inc. | Method, apparatus and computer-readable media to create audio focus regions dissociated from the microphone system for the purpose of optimizing audio processing at precise spatial locations in a 3D space |
CN110035372B (zh) * | 2019-04-24 | 2021-01-26 | 广州视源电子科技股份有限公司 | 扩声系统的输出控制方法、装置、扩声系统及计算机设备 |
JP7484105B2 (ja) | 2019-08-26 | 2024-05-16 | 大日本印刷株式会社 | チャック付き紙容器、その製造方法 |
CN110677777B (zh) * | 2019-09-27 | 2020-12-08 | 深圳市航顺芯片技术研发有限公司 | 一种音频数据处理方法、终端及存储介质 |
CN110830749A (zh) * | 2019-12-27 | 2020-02-21 | 深圳市创维群欣安防科技股份有限公司 | 一种视频通话回音消除电路、方法及会议平板 |
JP7365642B2 (ja) * | 2020-03-18 | 2023-10-20 | パナソニックIpマネジメント株式会社 | 音声処理システム、音声処理装置及び音声処理方法 |
CN111741404B (zh) * | 2020-07-24 | 2021-01-22 | 支付宝(杭州)信息技术有限公司 | 拾音设备、拾音系统和声音信号采集的方法 |
CN113068103B (zh) * | 2021-02-07 | 2022-09-06 | 厦门亿联网络技术股份有限公司 | 一种音频配件级联系统 |
EP4231663A4 (en) | 2021-03-12 | 2024-05-08 | Samsung Electronics Co., Ltd. | ELECTRONIC AUDIO INPUT DEVICE AND OPERATING METHOD THEREFOR |
CN114257908A (zh) * | 2021-04-06 | 2022-03-29 | 北京安声科技有限公司 | 耳机的通话降噪方法及装置、计算机可读存储介质及耳机 |
CN114257921A (zh) * | 2021-04-06 | 2022-03-29 | 北京安声科技有限公司 | 拾音方法及装置、计算机可读存储介质及耳机 |
CN113411719B (zh) * | 2021-06-17 | 2022-03-04 | 杭州海康威视数字技术股份有限公司 | 一种麦克风级联系统、麦克风及终端 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0983988A (ja) * | 1995-09-11 | 1997-03-28 | Nec Eng Ltd | テレビ会議システム |
JPH10276415A (ja) | 1997-01-28 | 1998-10-13 | Casio Comput Co Ltd | テレビ電話装置 |
JP2002190870A (ja) * | 2000-12-20 | 2002-07-05 | Audio Technica Corp | 赤外線双方向通信システム |
JP2004242207A (ja) | 2003-02-07 | 2004-08-26 | Matsushita Electric Works Ltd | インターホンシステム |
WO2006054778A1 (ja) * | 2004-11-17 | 2006-05-26 | Nec Corporation | 通信システム、通信端末装置、サーバ装置及びそれらに用いる通信方法並びにそのプログラム |
JP2006140930A (ja) * | 2004-11-15 | 2006-06-01 | Sony Corp | マイクシステムおよびマイク装置 |
JP2008147823A (ja) * | 2006-12-07 | 2008-06-26 | Yamaha Corp | 音声会議装置、音声会議システムおよび放収音ユニット |
Family Cites Families (61)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS596394U (ja) | 1982-07-06 | 1984-01-17 | 株式会社東芝 | 会議用マイクロホン装置 |
JPH0657031B2 (ja) | 1986-04-18 | 1994-07-27 | 日本電信電話株式会社 | 会議通話装置 |
EP0310456A3 (en) * | 1987-10-01 | 1990-12-05 | Sony Magnescale, Inc. | Improvements in audio mixing |
JPH0262606A (ja) * | 1988-08-29 | 1990-03-02 | Fanuc Ltd | Cncの診断方式 |
JP2562703B2 (ja) | 1989-12-27 | 1996-12-11 | 株式会社小松製作所 | 直列制御装置のデータ入力制御装置 |
JPH04291873A (ja) | 1991-03-20 | 1992-10-15 | Fujitsu Ltd | 電話会議システム |
US5664021A (en) * | 1993-10-05 | 1997-09-02 | Picturetel Corporation | Microphone system for teleconferencing system |
US5966639A (en) * | 1997-04-04 | 1999-10-12 | Etymotic Research, Inc. | System and method for enhancing speech intelligibility utilizing wireless communication |
JP2000115373A (ja) * | 1998-10-05 | 2000-04-21 | Nippon Telegr & Teleph Corp <Ntt> | 電話装置 |
US6785394B1 (en) * | 2000-06-20 | 2004-08-31 | Gn Resound A/S | Time controlled hearing aid |
JP2002043985A (ja) * | 2000-07-25 | 2002-02-08 | Matsushita Electric Ind Co Ltd | 音響エコーキャンセラー装置 |
JP3075809U (ja) * | 2000-08-23 | 2001-03-06 | 新世代株式会社 | カラオケ用マイク |
US20030120367A1 (en) * | 2001-12-21 | 2003-06-26 | Chang Matthew C.T. | System and method of monitoring audio signals |
JP2004128707A (ja) * | 2002-08-02 | 2004-04-22 | Sony Corp | 指向性を備えた音声受信装置およびその方法 |
JP4104626B2 (ja) | 2003-02-07 | 2008-06-18 | 日本電信電話株式会社 | 収音方法及び収音装置 |
EP1482763A3 (en) * | 2003-05-26 | 2008-08-13 | Matsushita Electric Industrial Co., Ltd. | Sound field measurement device |
US7496205B2 (en) * | 2003-12-09 | 2009-02-24 | Phonak Ag | Method for adjusting a hearing device as well as an apparatus to perform the method |
KR100662187B1 (ko) | 2004-03-15 | 2006-12-27 | 오므론 가부시키가이샤 | 센서 컨트롤러 |
JP2006048632A (ja) * | 2004-03-15 | 2006-02-16 | Omron Corp | センサコントローラ |
JP3972921B2 (ja) | 2004-05-11 | 2007-09-05 | ソニー株式会社 | 音声集音装置とエコーキャンセル処理方法 |
CN1780495A (zh) * | 2004-10-25 | 2006-05-31 | 宝利通公司 | 顶蓬麦克风组件 |
JP4258472B2 (ja) * | 2005-01-27 | 2009-04-30 | ヤマハ株式会社 | 拡声システム |
US7995768B2 (en) | 2005-01-27 | 2011-08-09 | Yamaha Corporation | Sound reinforcement system |
JP4818014B2 (ja) | 2005-07-28 | 2011-11-16 | 株式会社東芝 | 信号処理装置 |
US8335311B2 (en) | 2005-07-28 | 2012-12-18 | Kabushiki Kaisha Toshiba | Communication apparatus capable of echo cancellation |
US8577048B2 (en) * | 2005-09-02 | 2013-11-05 | Harman International Industries, Incorporated | Self-calibrating loudspeaker system |
JP4701931B2 (ja) * | 2005-09-02 | 2011-06-15 | 日本電気株式会社 | 信号処理の方法及び装置並びにコンピュータプログラム |
JP2007174011A (ja) | 2005-12-20 | 2007-07-05 | Yamaha Corp | 収音装置 |
JP4929740B2 (ja) | 2006-01-31 | 2012-05-09 | ヤマハ株式会社 | 音声会議装置 |
US20070195979A1 (en) * | 2006-02-17 | 2007-08-23 | Zounds, Inc. | Method for testing using hearing aid |
US8381103B2 (en) | 2006-03-01 | 2013-02-19 | Yamaha Corporation | Electronic device |
JP4844170B2 (ja) | 2006-03-01 | 2011-12-28 | ヤマハ株式会社 | 電子装置 |
CN1822709B (zh) * | 2006-03-24 | 2011-11-23 | 北京中星微电子有限公司 | 一种麦克风回声消除系统 |
JP4816221B2 (ja) | 2006-04-21 | 2011-11-16 | ヤマハ株式会社 | 収音装置および音声会議装置 |
JP2007334809A (ja) * | 2006-06-19 | 2007-12-27 | Mitsubishi Electric Corp | モジュール型電子機器 |
JP5012387B2 (ja) | 2007-10-05 | 2012-08-29 | ヤマハ株式会社 | 音声処理システム |
JP2009188858A (ja) * | 2008-02-08 | 2009-08-20 | National Institute Of Information & Communication Technology | 音声出力装置、音声出力方法、及びプログラム |
JP4508249B2 (ja) * | 2008-03-04 | 2010-07-21 | ソニー株式会社 | 受信装置および受信方法 |
EP2327015B1 (en) * | 2008-09-26 | 2018-09-19 | Sonova AG | Wireless updating of hearing devices |
JP5251731B2 (ja) | 2009-05-29 | 2013-07-31 | ヤマハ株式会社 | ミキシングコンソールおよびプログラム |
US20110013786A1 (en) | 2009-06-19 | 2011-01-20 | PreSonus Audio Electronics Inc. | Multichannel mixer having multipurpose controls and meters |
US8204198B2 (en) * | 2009-06-19 | 2012-06-19 | Magor Communications Corporation | Method and apparatus for selecting an audio stream |
JP5452158B2 (ja) * | 2009-10-07 | 2014-03-26 | 株式会社日立製作所 | 音響監視システム、及び音声集音システム |
US8792661B2 (en) * | 2010-01-20 | 2014-07-29 | Audiotoniq, Inc. | Hearing aids, computing devices, and methods for hearing aid profile update |
US8615091B2 (en) * | 2010-09-23 | 2013-12-24 | Bose Corporation | System for accomplishing bi-directional audio data and control communications |
EP2442587A1 (en) * | 2010-10-14 | 2012-04-18 | Harman Becker Automotive Systems GmbH | Microphone link system |
US8670853B2 (en) * | 2010-11-19 | 2014-03-11 | Fortemedia, Inc. | Analog-to-digital converter, sound processing device, and analog-to-digital conversion method |
JP2012129800A (ja) * | 2010-12-15 | 2012-07-05 | Sony Corp | 情報理装置および方法、プログラム、並びに情報処理システム |
JP2012234150A (ja) * | 2011-04-18 | 2012-11-29 | Sony Corp | 音信号処理装置、および音信号処理方法、並びにプログラム |
CN102324237B (zh) * | 2011-05-30 | 2013-01-02 | 深圳市华新微声学技术有限公司 | 麦克风阵列语音波束形成方法、语音信号处理装置及系统 |
JP5789130B2 (ja) | 2011-05-31 | 2015-10-07 | 株式会社コナミデジタルエンタテインメント | 管理装置 |
JP5701692B2 (ja) | 2011-06-06 | 2015-04-15 | 株式会社前川製作所 | 食鳥屠体の首皮取り装置及び方法 |
JP2012249609A (ja) | 2011-06-06 | 2012-12-20 | Kahuka 21:Kk | 害獣類侵入防止具 |
JP2013102370A (ja) * | 2011-11-09 | 2013-05-23 | Sony Corp | ヘッドホン装置、端末装置、情報送信方法、プログラム、ヘッドホンシステム |
JP2013110585A (ja) | 2011-11-21 | 2013-06-06 | Yamaha Corp | 音響機器 |
WO2013079993A1 (en) * | 2011-11-30 | 2013-06-06 | Nokia Corporation | Signal processing for audio scene rendering |
US20130177188A1 (en) * | 2012-01-06 | 2013-07-11 | Audiotoniq, Inc. | System and method for remote hearing aid adjustment and hearing testing by a hearing health professional |
US9204174B2 (en) * | 2012-06-25 | 2015-12-01 | Sonos, Inc. | Collecting and providing local playback system information |
US20140126740A1 (en) * | 2012-11-05 | 2014-05-08 | Joel Charles | Wireless Earpiece Device and Recording System |
US9391580B2 (en) * | 2012-12-31 | 2016-07-12 | Cellco Paternership | Ambient audio injection |
US9356567B2 (en) * | 2013-03-08 | 2016-05-31 | Invensense, Inc. | Integrated audio amplification circuit with multi-functional external terminals |
-
2013
- 2013-11-12 CN CN201710447232.5A patent/CN107172538B/zh active Active
- 2013-11-12 US US14/077,496 patent/US9497542B2/en active Active
- 2013-11-12 JP JP2013233694A patent/JP6090121B2/ja active Active
- 2013-11-12 WO PCT/JP2013/080587 patent/WO2014073704A1/ja active Application Filing
- 2013-11-12 JP JP2013233693A patent/JP2014116931A/ja active Pending
- 2013-11-12 EP EP13853867.3A patent/EP2882202B1/en active Active
- 2013-11-12 EP EP19177298.7A patent/EP3557880B1/en active Active
- 2013-11-12 EP EP21185333.8A patent/EP3917161B1/en active Active
- 2013-11-12 KR KR1020177002958A patent/KR20170017000A/ko not_active Application Discontinuation
- 2013-11-12 CN CN201310560237.0A patent/CN103813239B/zh active Active
- 2013-11-12 CA CA2832848A patent/CA2832848A1/en not_active Abandoned
- 2013-11-12 JP JP2013233692A patent/JP6090120B2/ja active Active
- 2013-11-12 AU AU2013342412A patent/AU2013342412B2/en active Active
- 2013-11-12 KR KR1020157001712A patent/KR101706133B1/ko active IP Right Grant
-
2016
- 2016-09-13 US US15/263,860 patent/US10250974B2/en active Active
-
2017
- 2017-02-09 JP JP2017021878A patent/JP6330936B2/ja active Active
- 2017-02-09 JP JP2017021872A patent/JP6299895B2/ja active Active
-
2019
- 2019-02-05 US US16/267,445 patent/US11190872B2/en active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0983988A (ja) * | 1995-09-11 | 1997-03-28 | Nec Eng Ltd | テレビ会議システム |
JPH10276415A (ja) | 1997-01-28 | 1998-10-13 | Casio Comput Co Ltd | テレビ電話装置 |
JP2002190870A (ja) * | 2000-12-20 | 2002-07-05 | Audio Technica Corp | 赤外線双方向通信システム |
JP2004242207A (ja) | 2003-02-07 | 2004-08-26 | Matsushita Electric Works Ltd | インターホンシステム |
JP2006140930A (ja) * | 2004-11-15 | 2006-06-01 | Sony Corp | マイクシステムおよびマイク装置 |
WO2006054778A1 (ja) * | 2004-11-17 | 2006-05-26 | Nec Corporation | 通信システム、通信端末装置、サーバ装置及びそれらに用いる通信方法並びにそのプログラム |
JP2008147823A (ja) * | 2006-12-07 | 2008-06-26 | Yamaha Corp | 音声会議装置、音声会議システムおよび放収音ユニット |
Non-Patent Citations (1)
Title |
---|
See also references of EP2882202A4 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220201125A1 (en) * | 2017-09-29 | 2022-06-23 | Dolby Laboratories Licensing Corporation | Howl detection in conference systems |
US11677879B2 (en) * | 2017-09-29 | 2023-06-13 | Dolby Laboratories Licensing Corporation | Howl detection in conference systems |
Also Published As
Publication number | Publication date |
---|---|
JP6299895B2 (ja) | 2018-03-28 |
JP6330936B2 (ja) | 2018-05-30 |
CN103813239B (zh) | 2017-07-11 |
CN103813239A (zh) | 2014-05-21 |
JP6090120B2 (ja) | 2017-03-08 |
EP2882202A1 (en) | 2015-06-10 |
KR20170017000A (ko) | 2017-02-14 |
CN107172538B (zh) | 2020-09-04 |
US11190872B2 (en) | 2021-11-30 |
EP3557880B1 (en) | 2021-09-22 |
AU2013342412B2 (en) | 2015-12-10 |
CA2832848A1 (en) | 2014-05-12 |
JP2014116931A (ja) | 2014-06-26 |
JP2014116932A (ja) | 2014-06-26 |
US20160381457A1 (en) | 2016-12-29 |
US9497542B2 (en) | 2016-11-15 |
AU2013342412A1 (en) | 2015-01-22 |
JP2014116930A (ja) | 2014-06-26 |
KR101706133B1 (ko) | 2017-02-13 |
EP2882202B1 (en) | 2019-07-17 |
US20140133666A1 (en) | 2014-05-15 |
EP3917161B1 (en) | 2024-01-31 |
US10250974B2 (en) | 2019-04-02 |
JP6090121B2 (ja) | 2017-03-08 |
EP3917161A1 (en) | 2021-12-01 |
EP3557880A1 (en) | 2019-10-23 |
JP2017139767A (ja) | 2017-08-10 |
EP2882202A4 (en) | 2016-03-16 |
JP2017108441A (ja) | 2017-06-15 |
CN107172538A (zh) | 2017-09-15 |
US20190174227A1 (en) | 2019-06-06 |
KR20150022013A (ko) | 2015-03-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6299895B2 (ja) | マイクユニット、ホスト装置、および信号処理システム | |
JP5003531B2 (ja) | 音声会議システム | |
JP4946090B2 (ja) | 収音放音一体型装置 | |
KR101248971B1 (ko) | 방향성 마이크 어레이를 이용한 신호 분리시스템 및 그 제공방법 | |
JP4701962B2 (ja) | 回帰音除去装置 | |
CN100525101C (zh) | 使用波束形成算法来记录信号的方法和设备 | |
KR20120101457A (ko) | 오디오 줌 | |
WO2005125272A1 (ja) | ハウリング抑圧装置、プログラム、集積回路、およびハウリング抑圧方法 | |
JP2010268129A (ja) | 電話装置、エコーキャンセラ及びエコーキャンセルプログラム | |
US9318096B2 (en) | Method and system for active noise cancellation based on remote noise measurement and supersonic transport | |
US6996240B1 (en) | Loudspeaker unit adapted to environment | |
KR20200007793A (ko) | 음성 출력 제어 장치, 음성 출력 제어 방법, 그리고 프로그램 | |
CN112509595A (zh) | 音频数据处理方法、系统及存储介质 | |
JP4655905B2 (ja) | 回帰音除去装置 | |
JP2010221945A (ja) | 信号処理方法、装置及びプログラム | |
KR20230057333A (ko) | 휴대용 노래방을 위한 저복잡도 하울링 억제 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13853867 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2013853867 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2013342412 Country of ref document: AU Date of ref document: 20131112 Kind code of ref document: A Ref document number: 20157001712 Country of ref document: KR Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |