WO2017135350A1 - Recording medium, acoustic processing device, and acoustic processing method - Google Patents

Recording medium, acoustic processing device, and acoustic processing method Download PDF

Info

Publication number
WO2017135350A1
Authority
WO
WIPO (PCT)
Prior art keywords
acoustic signal
type
acoustic
musical instrument
specified
Prior art date
Application number
PCT/JP2017/003715
Other languages
French (fr)
Japanese (ja)
Inventor
賀文 水野
Original Assignee
Yamaha Corporation (ヤマハ株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yamaha Corporation
Publication of WO2017135350A1 publication Critical patent/WO2017135350A1/en

Links

Images

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 Details of electrophonic musical instruments
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 Circuits for transducers, loudspeakers or microphones

Definitions

  • the present invention relates to a technique for processing an acoustic signal.
  • Patent Document 1 discloses a technique for processing an acoustic signal by a user specifying parameters for selecting an algorithm and parameters for selecting a waveform of an operator (FM synthesizer).
  • an object of the present invention is to appropriately adjust an acoustic signal without requiring specialized knowledge regarding adjustment contents and acoustic characteristics.
  • To solve this problem, the recording medium of the present invention records a program for causing a computer to execute a specifying procedure for specifying the type of musical instrument corresponding to the performance sound represented by an acoustic signal, and an adjustment procedure for adjusting the acoustic signal according to the specified type of musical instrument.
  • An acoustic processing device includes a specifying unit that specifies the type of musical instrument corresponding to the performance sound represented by an acoustic signal, and an adjustment unit that adjusts the acoustic signal according to the type of instrument specified by the specifying unit.
  • the acoustic processing method identifies the type of musical instrument corresponding to the performance sound represented by the acoustic signal, and adjusts the acoustic signal according to the identified type of musical instrument.
  • Brief description of the drawings: FIG. 1 is a configuration diagram of a sound processing apparatus according to the first embodiment of the present invention. FIG. 2 is an explanatory diagram of the first process. FIG. 3 is a configuration diagram of the first processing unit. FIG. 4 is an explanatory diagram of the processing of the characteristic imparting unit. FIG. 5 is a flowchart of the processing of the entire sound processing apparatus. FIG. 6 is a configuration diagram of a sound processing apparatus according to the second embodiment of the present invention. FIG. 7 is an explanatory diagram of the band information. FIG. 8 is an explanatory diagram of the priority information. FIG. 9 is a configuration diagram of the second processing unit. FIG. 10 is an explanatory diagram of the second process. FIG. 11 is a display example of parameters adjusted by the second process.
  • FIG. 1 is a configuration diagram of a sound processing apparatus 100 according to the first embodiment of the present invention.
  • a plurality (N) of signal supply devices 12_1 to 12_N and a sound emission device 14 are connected to the sound processing device 100 of the first embodiment.
  • the acoustic signal x_n is a time domain signal indicating the waveform of the performance sound of the musical instrument 10_n.
  • the signal supply device 12_n is a sound collection device that collects a performance sound of the musical instrument 10_n and generates an acoustic signal x_n. It is also possible to use, as the instrument 10_n, an electric musical instrument in which a pickup for detecting vibration of a sound source such as a string is incorporated as the signal supply device 12.
  • Alternatively, a playback device that acquires an acoustic signal x_n corresponding to the performance sound of the musical instrument 10_n from a portable or built-in recording medium and supplies it to the acoustic processing device 100, or a communication device that receives the acoustic signal x_n corresponding to the performance sound of the musical instrument 10_n from a communication network and supplies it to the sound processing device 100, may be employed as the signal supply device 12.
  • the illustration of the A / D converter that converts the acoustic signal x_n from analog to digital is omitted for convenience.
  • The acoustic processing apparatus 100 generates N acoustic signals y_1 to y_N by adjusting the N (N-channel) acoustic signals x_1 to x_N supplied from the N signal supply apparatuses 12_1 to 12_N, and generates an acoustic signal z by mixing the acoustic signals y_1 to y_N. That is, the sound processing apparatus 100 according to the first embodiment functions as an N-channel mixing system.
  • the sound emitting device 14 (for example, a speaker or headphones) emits sound waves according to the acoustic signal z generated by the sound processing device 100.
  • Illustration of a D / A converter that converts the acoustic signal z from digital to analog, an amplifier that amplifies the acoustic signal z, and the like is omitted for convenience.
  • In this embodiment, the acoustic signal z is supplied to the sound emitting device 14 and reproduced, but it is also possible to store the acoustic signal z in the storage device 24 or to transmit the acoustic signal z to a playback device or the like separate from the acoustic processing device 100.
  • the sound processing device 100 is realized by a computer system including an arithmetic processing device 22 and a storage device 24.
  • the storage device 24 stores a program executed by the arithmetic processing device 22 and various data used by the arithmetic processing device 22.
  • a known recording medium such as a semiconductor recording medium and a magnetic recording medium or a combination of a plurality of types of recording media can be arbitrarily employed as the storage device 24.
  • a configuration in which the acoustic signal x_n is stored in the storage device 24 (therefore, the signal supply device 12_n is omitted) is also preferable.
  • the storage device 24 of the first embodiment stores a plurality of target characteristics R respectively corresponding to different instrument types.
  • the type of instrument is, for example, the name of the sound source (specifically, instrument name or performance part name).
  • instrument names such as guitar and bass drum are exemplified as the types of instruments.
  • the target characteristic R of the first embodiment is a frequency characteristic.
  • the target characteristic R corresponding to any one type of musical instrument is a suitable acoustic characteristic that is a target as the characteristic of the performance sound of that type of musical instrument.
  • the generation method of the target characteristic R is arbitrary.
  • the target characteristic R is generated by analyzing existing acoustic data recorded on a recording medium such as a music CD. It is also possible for a producer such as an acoustic engineer to manually generate the target characteristic R.
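  • As a concrete illustration of the generation method described above, the following sketch builds a target characteristic R for one instrument type by averaging the time-averaged magnitude spectra of existing reference recordings (for example, commercially mixed tracks). The use of scipy and the simple averaging scheme are assumptions for illustration; the patent does not prescribe a specific procedure.

```python
import numpy as np
from scipy.signal import stft

def target_characteristic(reference_signals, sr, n_fft=2048):
    """Average magnitude spectrum over reference recordings of one instrument type."""
    spectra = []
    for s in reference_signals:
        _, _, S = stft(s, fs=sr, nperseg=n_fft)   # complex STFT, shape (bins, frames)
        spectra.append(np.abs(S).mean(axis=1))    # time-averaged magnitude spectrum
    return np.mean(np.stack(spectra), axis=0)     # target characteristic R (bins,)
```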
  • The arithmetic processing device 22 executes a program stored in the storage device 24, thereby realizing a plurality of functions (the specifying unit 32 and the adjustment unit 34) for generating the acoustic signal z from the N acoustic signals x_1 to x_N.
  • a configuration in which each function of the arithmetic processing device 22 is distributed to a plurality of devices, or a configuration in which a dedicated electronic circuit (for example, DSP) realizes a part of the functions of the arithmetic processing device 22 may be employed.
  • The specifying unit 32 analyzes each of the N acoustic signals x_1 to x_N supplied from the N signal supply devices 12_1 to 12_N, thereby identifying the musical instrument type A_n corresponding to the performance sound represented by each acoustic signal x_n. Specifically, the specifying unit 32 compares a feature amount extracted from the acoustic signal x_n with each of a plurality of reference feature amounts prepared in advance for different instrument types, and specifies, as the type A_n of the musical instrument 10_n, the instrument type corresponding to the reference feature amount most similar to the feature amount of the acoustic signal x_n.
  • For example, MFCC (Mel-Frequency Cepstrum Coefficient) is suitable as the feature amount.
  • An SVM (Support Vector Machine) or another known classification technique can be used for the identification.
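  • A minimal sketch of the specifying unit 32, assuming MFCC features (via librosa) and an SVM classifier (via scikit-learn) trained on labelled reference recordings; the library choices, feature dimensions, and training data are assumptions not fixed by the text.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_feature(signal, sr):
    """Compact feature amount: MFCCs averaged over the excerpt."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=20)   # shape (20, frames)
    return mfcc.mean(axis=1)                                  # shape (20,)

def train_identifier(reference_signals, instrument_labels, sr):
    """Fit a classifier on labelled reference recordings of the candidate instrument types."""
    feats = np.stack([mfcc_feature(s, sr) for s in reference_signals])
    clf = SVC(kernel="rbf")
    return clf.fit(feats, instrument_labels)

def identify_instrument(clf, x_n, sr):
    """Return the estimated instrument type A_n for the acoustic signal x_n."""
    return clf.predict(mfcc_feature(x_n, sr).reshape(1, -1))[0]
```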
  • the adjusting unit 34 generates the acoustic signal z by adjusting each acoustic signal x_n according to the instrument type A_n specified by the specifying unit 32.
  • the adjustment unit 34 of the first embodiment includes N first processing units 341_1 to 341_N corresponding to the N acoustic signals x_1 to x_N, respectively, and a mixing unit 343.
  • The first processing unit 341_n performs, on the acoustic signal x_n, a first process that brings the frequency characteristic X_n (solid line) of the acoustic signal x_n closer to the target characteristic R (dotted line) corresponding to the instrument type A_n specified by the specifying unit 32.
  • FIG. 3 is a configuration diagram of an arbitrary first processing unit 341_n.
  • the first processing unit 341_n includes a frequency analysis unit 40, a characteristic imparting unit 50, and a waveform generation unit 60, and generates the acoustic signal y_n by executing a first process on the acoustic signal x_n. That is, the acoustic signal y_n has an acoustic characteristic close to the target characteristic R corresponding to the musical instrument type A_n.
  • The frequency analysis unit 40 in FIG. 3 analyzes the acoustic signal x_n supplied from the signal supply device 12_n, thereby sequentially generating the frequency characteristic (frequency spectrum) X_n of the acoustic signal x_n for each unit section (frame) on the time axis.
  • a known frequency analysis such as a short-time Fourier transform can be arbitrarily employed. Note that it is also possible to use a series (filter bank) of a plurality of bandpass filters having different passbands as the frequency analysis unit 40.
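  • A sketch of the frequency analysis of unit 40 and, for later reference, the inverse operation used by the waveform generation unit 60, based on the short-time Fourier transform mentioned above; the window and hop sizes are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft, istft

def analyse(x_n, sr, n_fft=2048, hop=512):
    """Frequency characteristic X_n per unit section (frame) on the time axis."""
    freqs, _, X = stft(x_n, fs=sr, nperseg=n_fft, noverlap=n_fft - hop)
    return freqs, X                       # X has shape (freq_bins, frames)

def resynthesise(Y, sr, n_fft=2048, hop=512):
    """Time-domain signal y_n from the adjusted frequency characteristic Y_n."""
    _, y_n = istft(Y, fs=sr, nperseg=n_fft, noverlap=n_fft - hop)
    return y_n
```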
  • The characteristic imparting unit 50 brings the frequency characteristic X_n of the acoustic signal x_n closer to the target characteristic R corresponding to the instrument type A_n identified by the specifying unit 32, thereby sequentially generating the frequency characteristic (frequency spectrum) Y_n of the acoustic signal y_n for each unit section.
  • FIG. 4 is an explanatory diagram of the processing of the characteristic assigning unit 50.
  • the characteristic assigning unit 50 executes, as a first process, an equalizing process that brings the frequency characteristic X_n of the acoustic signal x_n closer to the target characteristic R using a plurality of bandpass filters having different characteristics.
  • the characteristics of each bandpass filter are defined by parameters such as a frequency point (for example, center frequency), a gain, and a Q value.
  • The characteristic imparting unit 50 calculates each parameter of the plurality of bandpass filters so that the frequency characteristic X_n approaches the target characteristic R, and then applies each bandpass filter to the frequency characteristic X_n, thereby generating the frequency characteristic Y_n of the acoustic signal y_n.
  • The total number of bandpass filters is arbitrary. Increasing the total number improves the ability to adjust the acoustic signal x_n, but also increases the processing load of the adjustment unit 34 (characteristic imparting unit 50) and, consequently, of the entire acoustic processing apparatus 100. It is also possible to variably control the total number of bandpass filters.
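  • The sketch below illustrates the idea of the first process in a simplified form: instead of deriving bandpass-filter parameters (frequency point, gain, Q value) as described above, it applies per-band gains directly in the frequency domain so that the magnitude of X_n moves toward the target characteristic R. The band layout, the gain formula, and the `strength` parameter are assumptions added for illustration.

```python
import numpy as np

def first_process(X, R, freqs, band_edges, strength=1.0):
    """X: complex spectrum of one frame; R: target magnitude characteristic (same bins);
    band_edges: band boundaries in Hz; strength in [0, 1] scales the adjustment."""
    Y = X.copy()
    mag = np.abs(X) + 1e-12
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        band = (freqs >= lo) & (freqs < hi)
        if not band.any():
            continue
        # Per-band gain that moves the band's average level toward the target level.
        gain = (R[band].mean() + 1e-12) / mag[band].mean()
        Y[band] *= gain ** strength
    return Y                               # frequency characteristic Y_n
```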
  • the sound emitting device 14 emits sound according to the acoustic signal z, that is, sound according to the performance sounds of the N musical instruments 10_1 to 10_N.
  • FIG. 5 is a flowchart of processing of the entire sound processing apparatus 100.
  • the process in FIG. 5 is started when the performer starts playing a musical instrument (for example, a bass drum).
  • The first processing unit 341_n acquires the acoustic signal x_n generated by collecting the performance sound of the corresponding musical instrument 10_n (for example, the bass drum) (SA1).
  • the identifying unit 32 identifies, for each of the N acoustic signals x_1 to x_N, the musical instrument type A_n corresponding to the performance sound represented by the acoustic signal x_n (SA2).
  • For example, the type "bass drum" is specified by analyzing the acoustic signal x_2 generated from the musical instrument 10_2, and the type "bass" is specified by analyzing the acoustic signal x_3 generated from the musical instrument 10_3.
  • the first processing unit 341_n (frequency analysis unit 40) sequentially generates a frequency characteristic (frequency spectrum) X_n of each acoustic signal x_n for each unit section (frame) on the time axis by analyzing each acoustic signal x_n ( SA3).
  • The first processing unit 341_n (characteristic imparting unit 50) executes the first process that brings the frequency characteristic X_n of each acoustic signal x_n closer to the target characteristic R corresponding to the instrument type A_n (for example, bass drum) identified by the specifying unit 32, thereby sequentially generating the frequency characteristic (frequency spectrum) Y_n of each acoustic signal y_n for each unit section (SA4).
  • the first processing unit 341_n (waveform generating unit 60) generates a time domain acoustic signal y_n from each frequency characteristic Y_n generated by the characteristic providing unit 50 (SA5).
  • an acoustic signal y_n having a frequency characteristic Y_n close to the frequency characteristic represented by the target characteristic R of the instrument type A_n (bass drum) is generated.
  • the adjustment unit 34 (mixing unit 343) generates the acoustic signal z by mixing the acoustic signals y_n generated by the first processing units 341_n (SA6).
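  • A minimal sketch of the mixing of step SA6 performed by the mixing unit 343; the peak safeguard is an assumption, since the text only states that the adjusted signals are mixed.

```python
import numpy as np

def mix(adjusted_signals):
    """Mixing unit 343: sum equal-length adjusted signals y_1..y_N into z."""
    z = np.sum(np.stack(adjusted_signals), axis=0)
    peak = np.max(np.abs(z))
    return z / peak if peak > 1.0 else z   # simple peak safeguard (assumption)
```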
  • the sound emitting device 14 emits sound according to the acoustic signal z, that is, sound according to the performance sounds of the N musical instruments 10_1 to 10_N (SA7).
  • the acoustic signal x_n is adjusted according to the musical instrument type A_n specified by the specifying unit 32. Therefore, it is possible to appropriately adjust the acoustic signal x_n without requiring specialized knowledge regarding the adjustment content and the acoustic characteristics as compared with the configuration in which the user specifies the adjustment content for the acoustic signal x_n.
  • the characteristics of each bandpass filter are controlled so that the frequency characteristic X_n of the acoustic signal x_n approaches the target characteristic R (according to the difference between the two).
  • Therefore, compared with a configuration in which the characteristics of the filter that processes the acoustic signal x_n are fixed (a configuration that does not depend on the frequency characteristic X_n of the acoustic signal x_n), the degree of adjustment is reduced when the acoustic signal x_n is already close to the target characteristic R. That is, it is possible to bring the acoustic signal x_n closer to the target characteristic R while making use of the original frequency characteristic X_n of the acoustic signal x_n.
  • <Second Embodiment> A second embodiment of the present invention will be described. In each of the embodiments exemplified below, elements whose operations and functions are the same as those of the first embodiment are denoted by the same reference symbols used in the description of the first embodiment, and their detailed descriptions are omitted as appropriate.
  • FIG. 6 is a configuration diagram of the sound processing apparatus 100 according to the second embodiment.
  • the adjustment unit 34 of the second embodiment has a configuration in which a second processing unit 345 is added to a plurality (N) of first processing units 341_1 to 341_N and a mixing unit 343 similar to those of the first embodiment.
  • the first processing unit 341_n generates the acoustic signal y_n through the first processing on the acoustic signal x_n.
  • the second processing unit 345 generates an acoustic signal w_n from the acoustic signal y_n processed by each first processing unit 341_n.
  • the mixing unit 343 generates the acoustic signal z by mixing the N acoustic signals w_1 to w_N processed by the second processing unit 345.
  • the storage device 24 of the second embodiment stores target characteristics R, band information T, and priority information P1.
  • the target characteristic R is the same as that in the first embodiment.
  • FIG. 7 is an explanatory diagram of the band information T.
  • The band information T designates the frequency band (hereinafter referred to as "sound generation band") of the acoustic component that is predominantly included in the performance sound of the musical instrument of type A_n. Specifically, as illustrated in FIG. 7, the band information T specifies one of "low range", "middle range", and "high range" as the sound generation band. For example, when the instrument type A_n is "bass", the sound generation band corresponding to "bass" is "low range".
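  • A sketch of one way to represent the band information T, as a simple mapping from instrument type to sound generation band; all entries other than "bass" (given in the text) and "bass drum" (taken from the second embodiment's example) are illustrative assumptions.

```python
# Illustrative band information T mapping an instrument type A_n to its sound
# generation band. "bass" -> "low range" is stated in the text, and
# "bass drum" -> "low range" follows from the second embodiment's example;
# the remaining entries are assumptions added for illustration.
BAND_INFO_T = {
    "bass": "low range",
    "bass drum": "low range",
    "guitar": "middle range",   # assumption
    "vocal": "middle range",    # assumption
    "cymbal": "high range",     # assumption
}

def sound_generation_band(instrument_type):
    return BAND_INFO_T[instrument_type]
```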
  • FIG. 8 is an explanatory diagram of the priority information P1.
  • The priority information P1 specifies a priority for each instrument type for each of a plurality of bands B1 to BM on the frequency axis. The bandwidth and the total number M of the bands B1 to BM are arbitrary. For example, the plurality of bands B1 to BM can be set to three bands B1 to B3, with each of the bands B1 to B3 corresponding to one of the sound generation bands "low range", "middle range", and "high range" indicated by the band information T.
  • The second processing unit 345 handles, among the N acoustic signals y_1 to y_N, an acoustic signal y_n1 (an example of a first acoustic signal) and an acoustic signal y_n2 (an example of a second acoustic signal) whose sound generation bands, corresponding to the instrument types A_n identified by the specifying unit 32, overlap each other.
  • FIG. 9 is a configuration diagram of the second processing unit 345.
  • the second processing unit 345 includes a frequency analysis unit 70, a suppression unit 80, and a waveform generation unit 90.
  • The frequency analysis unit 70 analyzes the acoustic signal y_n generated by each first processing unit 341_n, thereby sequentially generating the frequency characteristic Y_n of the acoustic signal y_n for each unit section on the time axis, in the same manner as the frequency analysis unit 40 generates the frequency characteristic X_n.
  • It is also possible to supply the frequency characteristic Y_n generated by the characteristic imparting unit 50 of each first processing unit 341_n directly to the suppression unit 80 (in which case the waveform generation unit 60 of each first processing unit 341_n and the frequency analysis unit 70 of the second processing unit 345 are omitted).
  • The suppression unit 80 performs the second process on the acoustic signal y_n1 or the acoustic signal y_n2, thereby sequentially generating the frequency characteristic W_n1 of the acoustic signal w_n1 or the frequency characteristic W_n2 of the acoustic signal w_n2 for each unit section. In the following, a configuration is illustrated in which the frequency characteristic W_n1 of the acoustic signal w_n1 is generated by executing the second process on the acoustic signal y_n1, which is one of the two acoustic signals y_n1 and y_n2 whose sound generation bands overlap each other. The acoustic signal y_n2 and the acoustic signals y_n of musical instruments whose sound generation bands do not overlap with those of other musical instruments are supplied as they are from the second processing unit 345 to the mixing unit 343 as the acoustic signals w_n.
  • Specifically, the suppression unit 80 performs, on the frequency characteristic Y_n1 of the acoustic signal y_n1, for each of the bands B1 to BM, a second process that suppresses the acoustic component corresponding to the instrument type with the lower priority, among the acoustic signals y_n1 and y_n2 whose sound generation bands overlap each other, relative to the acoustic component corresponding to the instrument type with the higher priority.
  • a known technique can be arbitrarily employed for the process of adjusting the frequency characteristic Y_n1 of the acoustic signal y_n1.
  • the characteristics of each bandpass filter are defined by parameters such as a frequency point (for example, center frequency), a gain, and a Q value.
  • The suppression unit 80 calculates each parameter of the plurality of bandpass filters so that the frequency characteristic Y_n1 is suppressed, and generates the frequency characteristic W_n1 of the acoustic signal w_n1 by applying each bandpass filter to the frequency characteristic Y_n1.
  • The frequency characteristic W_n1 generated by the second process from the frequency characteristic Y_n1 of the acoustic signal y_n1, whose sound generation band overlaps with that of another musical instrument, is output to the waveform generation unit 90, while the frequency characteristics Y_n of the other acoustic signals y_n are output to the waveform generation unit 90 as they are as the frequency characteristics W_n.
  • the waveform generation unit 90 generates a time domain acoustic signal w_n from the frequency characteristic W_n generated by the suppression unit 80 for each unit section. Short-time inverse Fourier transform is preferably used for generating the acoustic signal w_n.
  • FIG. 10 is an explanatory diagram of the second process.
  • The suppression unit 80 performs the second process with reference to the priority information P1. As illustrated in FIG. 10, the second process suppresses or emphasizes the acoustic component of each of the bands B1 to BM of the acoustic signal y_n1 according to whether the priority of the acoustic signal y_n1 is lower or higher than the priority of the acoustic signal y_n2 in that band.
  • Specifically, in a band where the priority of the acoustic signal y_n1 is lower than that of the acoustic signal y_n2, the acoustic component of the acoustic signal y_n1 is suppressed by adjusting the frequency characteristic Y_n1. Conversely, in a band where the priority of the acoustic signal y_n1 is higher, the acoustic component of the acoustic signal y_n1 is emphasized by adjusting the frequency characteristic Y_n1; that is, the frequency characteristic Y_n2 is relatively suppressed with respect to the frequency characteristic Y_n1.
  • As understood from the above description, the relative suppression of an acoustic component includes both a case where one acoustic component is suppressed with respect to the other acoustic component and a case where the other acoustic component is emphasized with respect to the one acoustic component.
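  • A sketch of the per-band relative suppression performed by the suppression unit 80, assuming that the priority information P1 holds one priority per instrument type per band B1..BM. Attenuating the lower-priority spectrum by a fixed number of decibels stands in for the bandpass-filter parameter calculation described above, and the attenuation amount is an assumption. (In the embodiment only y_n1 is processed; suppressing whichever side has the lower priority in each band is a generalisation permitted by the modifications described later.)

```python
import numpy as np

def second_process(Y1, Y2, type1, type2, priority_p1, band_masks, atten_db=6.0):
    """Y1, Y2: complex spectra of one frame of y_n1 and y_n2 (same shape).
    priority_p1: {instrument_type: [priority for B1..BM]} (priority information P1).
    band_masks: list of boolean arrays selecting the bins of bands B1..BM."""
    W1, W2 = Y1.copy(), Y2.copy()
    g = 10.0 ** (-atten_db / 20.0)               # attenuation factor (assumption)
    for m, band in enumerate(band_masks):
        if priority_p1[type1][m] < priority_p1[type2][m]:
            W1[band] *= g                        # suppress the lower-priority component
        elif priority_p1[type2][m] < priority_p1[type1][m]:
            W2[band] *= g
    return W1, W2                                # frequency characteristics W_n1, W_n2
```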
  • That is, the acoustic component in one of the frequency characteristic Y_n1 corresponding to the instrument type A_n1 and the frequency characteristic Y_n2 corresponding to the instrument type A_n2 is relatively suppressed with respect to the other.
  • Before the execution of the second process, the peak of the frequency characteristic Y_n1 and the peak of the frequency characteristic Y_n2 are close to each other on the frequency axis; as a result of the second process, the two peaks move to positions shifted from each other on the frequency axis.
  • The gain corresponding to each of the bands B1 to BM is adjusted according to the priority information P1, and, as illustrated in FIG. 11, the gain corresponding to each of the bands B1 to BM of the acoustic signal y_n1 can be displayed on a display device (not shown) of the sound processing device 100 so that the user can visually grasp it. Specifically, the arithmetic processing device 22 executes the program stored in the storage device 24, thereby realizing a function of displaying, on the display device, the parameters (for example, the gain or the Q value) of the respective bands B1 to BM adjusted by the second process according to the priority information P1.
  • the mixing unit 343 in FIG. 6 generates the acoustic signal z by mixing the N acoustic signals w_1 to w_N generated by the second processing unit 345 (the suppressing unit 80). That is, an acoustic signal z representing a mixed sound of performance sounds of N different types of N musical instruments 10_1 to 10_N is generated.
  • the sound emitting device 14 emits sound according to the acoustic signal z, that is, sound according to the performance sounds of the N musical instruments 10_1 to 10_N.
  • FIG. 12 is a flowchart of processing of the entire sound processing apparatus 100 according to the second embodiment.
  • the process shown in FIG. 12 is started when each performer starts playing a musical instrument (for example, bass drum and bass).
  • the process from the process (SA1) for acquiring the acoustic signal x_n by the first processing unit 341_n to the process (SA5) for generating the acoustic signal y_n is the same as in the first embodiment.
  • For example, the type "bass drum" is specified by analyzing the acoustic signal x_2 generated from the musical instrument 10_2, and the type "bass" is specified by analyzing the acoustic signal x_3 generated from the musical instrument 10_3. Note that the sound generation band of the musical instrument "bass drum" indicated by the type A_2 and the sound generation band of the musical instrument "bass" indicated by the type A_3 are both "low range", and the musical instruments indicated by the other (N−2) types A_na (na ≠ 2, 3) are assumed not to include instruments whose sound generation bands coincide with each other.
  • the second processing unit 345 (frequency analysis unit 70) sequentially generates the frequency characteristic Y_n of each acoustic signal y_n generated by the first processing unit 341_n for each unit section (SB1).
  • The second processing unit 345 (suppression unit 80) performs the second process, with reference to the priority information P1, on the frequency characteristic Y_2 corresponding to one of the types A_2 and A_3 of the musical instruments whose sound generation bands coincide, thereby sequentially generating the frequency characteristic W_2 for each unit section, and outputs the frequency characteristic Y_3 and the frequency characteristics Y_n of the acoustic signals y_n of the musical instruments whose sound generation bands do not overlap with those of other musical instruments as they are as the frequency characteristics W_n (SB2).
  • the second processing unit 345 (waveform generation unit 90) generates a time domain acoustic signal w_n from the frequency characteristic W_n generated by the suppression unit 80 (SB3).
  • the adjustment unit 34 (mixing unit 343) generates the acoustic signal z by mixing the N acoustic signals w_1 to w_N generated by the second processing unit 345 (SA6).
  • The sound emitting device 14 emits sound according to the acoustic signal z, that is, sound according to the performance sounds of the N musical instruments 10_1 to 10_N (SA7). Note that, for the acoustic signal y_na corresponding to a type A_na indicating a musical instrument other than the musical instruments (bass drum and bass) whose sound generation bands coincide, the processing of steps SB1 to SB3 can be omitted and the acoustic signal y_na can be supplied as it is to the mixing of step SA6 as the acoustic signal w_na.
  • the same effect as in the first embodiment can be obtained.
  • In the second embodiment, when the frequency band corresponding to the instrument type specified by the specifying unit 32 overlaps between the acoustic signal y_n1 and the acoustic signal y_n2, the acoustic signal w_n1 is generated so that the acoustic component of that frequency band in one of the acoustic signal y_n1 and the acoustic signal y_n2 is suppressed relative to the other. Therefore, there is an advantage that the performance sound corresponding to the other acoustic component can be heard more easily than the performance sound corresponding to the relatively suppressed acoustic component.
  • Further, since the second process is performed with reference to the priority information P1, it is possible to appropriately adjust the acoustic signal y_n1, that is, to appropriately generate the acoustic signal w_n1.
  • <Third Embodiment> FIG. 13 is a configuration diagram of the sound processing apparatus 100 according to the third embodiment of the present invention.
  • the adjustment unit 34 of the third embodiment has a configuration in which a third processing unit 347 is added to a plurality (N) of first processing units 341_1 to 341_N and a mixing unit 343 similar to those of the first embodiment.
  • the first processing unit 341_n generates the acoustic signal y_n through the first processing on the acoustic signal x_n.
  • the third processing unit 347 generates an acoustic signal v_n from the acoustic signal y_n processed by each first processing unit 341_n.
  • the mixing unit 343 generates the acoustic signal z by mixing the N acoustic signals v_1 to v_N processed by the third processing unit 347.
  • the storage device 24 of the third embodiment stores target characteristics R, band information T, and priority information P2.
  • the target characteristic R is the same as in the first embodiment
  • the band information T is the same as in the second embodiment.
  • FIG. 14 is an explanatory diagram of the priority information P2.
  • As illustrated in FIG. 14, the priority information P2 specifies a priority for each instrument type A_n for each of the plurality of sound generation bands (specifically, "low range", "middle range", and "high range") indicated by the band information T.
  • the priority information P2 specifies the priority with an integer, for example, as in the case of the priority information P1 of the second embodiment.
  • The third processing unit 347 selects, from the N acoustic signals y_1 to y_N, an acoustic signal y_n1 (first acoustic signal) and an acoustic signal y_n2 (second acoustic signal) (n1 ≠ n2) whose sound generation bands corresponding to the instrument types A_n specified by the specifying unit 32 overlap each other (for example, both are "low range"), and generates the acoustic signal v_n1 or the acoustic signal v_n2 by executing a third process that relatively suppresses, in one of the acoustic signal y_n1 and the acoustic signal y_n2, the acoustic component that overlaps the other on the time axis.
  • the acoustic signal y_n1 and the acoustic signal y_n2 correspond to different channels.
  • Specifically, the third processing unit 347 performs, with reference to the priority information P2, the third process on the acoustic signal y_n1 or the acoustic signal y_n2 so that the acoustic signal corresponding to the instrument type with the lower priority is suppressed relative to the acoustic signal corresponding to the instrument type with the higher priority. That is, in the third embodiment, of the acoustic signals y_n1 and y_n2, the acoustic signal whose priority indicated by the priority information P2 is lower is suppressed as the target of the third process.
  • For example, when the sound generation bands of the instrument types A_n1 and A_n2 are both "low range", the priorities assigned to the types A_n1 and A_n2 in the sound generation band "low range" of the priority information P2 in FIG. 14 are referred to. In the following, a configuration is illustrated in which the acoustic signal v_n1 is generated by performing the third process on the acoustic signal y_n1, which has the lower priority.
  • the acoustic signal y_n2 and the acoustic signals y_n of the musical instruments whose sound generation bands do not overlap with other musical instruments are supplied as they are to the mixing unit 343 as the acoustic signals v_n.
  • FIG. 15 is an explanatory diagram of the third process.
  • The third process is a process of relatively suppressing the acoustic component of the acoustic signal y_n1 that overlaps the acoustic signal y_n2 on the time axis.
  • A typical situation in which the third process is performed is a case where, as illustrated in FIG. 15, the peak of the time waveform of the acoustic signal y_n1 and the peak of the time waveform of the acoustic signal y_n2 overlap each other on the time axis.
  • the case where the peaks overlap includes, for example, a case where both peaks are coincident on the time axis and a case where both peaks are located within a predetermined period on the time axis.
  • When the peaks of the time waveforms overlap each other, the performance sound represented by the acoustic signal y_n1 and the performance sound represented by the acoustic signal y_n2 are sounded at the same time, and it tends to be difficult for the listener to hear both performance sounds distinctly.
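  • A sketch of the overlap test implied above: peaks are considered to overlap when they coincide or fall within a predetermined period of each other on the time axis. The peak-picking heuristic (relative height threshold) and the 20 ms window are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def peaks_overlap(y1, y2, sr, window_s=0.02):
    """True if an amplitude peak of y1 falls within window_s seconds of a peak of y2."""
    p1, _ = find_peaks(np.abs(y1), height=0.5 * np.max(np.abs(y1)))
    p2, _ = find_peaks(np.abs(y2), height=0.5 * np.max(np.abs(y2)))
    if len(p1) == 0 or len(p2) == 0:
        return False
    window = int(window_s * sr)
    return bool(np.any(np.abs(p1[:, None] - p2[None, :]) <= window))
```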
  • As a situation in which the performance sound represented by the acoustic signal y_n1 and the performance sound represented by the acoustic signal y_n2 are sounded simultaneously, assume, for example, that the performance sound of a stringed instrument that plays continuously throughout the music (that is, whose sound generation period is long) and the performance sound of a percussion instrument that plays only at specific points in the music (that is, whose sound generation period is short) are sounding at the same time.
  • That is, the third process is performed when the sound generation band of the type A_n1 corresponding to the acoustic signal y_n1 and the sound generation band of the type A_n2 corresponding to the acoustic signal y_n2 coincide, and the peak of the time waveform of the acoustic signal y_n1 and the peak of the time waveform of the acoustic signal y_n2 overlap each other on the time axis.
  • a well-known technique can be arbitrarily employed for the third process for suppressing the acoustic signal y_n1.
  • a compressor process that compresses the signal level of the acoustic signal y_n1 is a good example of the third process. As illustrated in FIG. 15, by compressing a portion where the signal level of the acoustic signal y_n1 exceeds the threshold Z, the acoustic component of the acoustic signal y_n1 is relatively suppressed with respect to the acoustic signal y_n2.
  • The compression ratio applied to the signal level of the acoustic signal y_n1 in the third process is arbitrary; for example, a configuration that compresses the signal level down to the threshold Z (that is, a compression ratio of ∞:1), or a configuration in which the compression ratio is set according to the type A_n1 corresponding to the acoustic signal y_n1, may be employed.
  • the threshold value Z is selected experimentally or statistically. Note that in a section where the signal level of the acoustic signal y_n1 is lower than the threshold Z, the acoustic signal y_n1 is supplied to the mixing unit 343 as the acoustic signal v_n1.
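  • A sketch of the compressor used as the third process: the portion of the acoustic signal y_n1 whose level exceeds the threshold Z is compressed by a fixed ratio. The threshold and ratio values are illustrative assumptions.

```python
import numpy as np

def third_process(y_n1, threshold_z=0.5, ratio=4.0):
    """Static compression: samples of y_n1 whose magnitude exceeds threshold_z are compressed."""
    v_n1 = y_n1.copy()
    over = np.abs(v_n1) > threshold_z
    v_n1[over] = np.sign(v_n1[over]) * (
        threshold_z + (np.abs(v_n1[over]) - threshold_z) / ratio
    )
    return v_n1
```

  • As the ratio tends to infinity, this sketch reduces to limiting at the threshold Z, corresponding to the configuration that compresses the signal level down to the threshold Z mentioned above.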
  • a configuration in which the third process is performed on the acoustic signal y_n1 when the signal level of the acoustic signal y_n1 exceeds the signal level of the acoustic signal y_n2 can be suitably employed.
  • That is, the third processing unit 347 performs the compressor process (third process) on the acoustic signal y_n1, thereby generating the acoustic signal v_n1 by compressing the acoustic component of the acoustic signal y_n1 that exceeds the threshold, as illustrated in FIG. 15. As illustrated in FIG. 16, the threshold (Threshold) Z and the compression ratio (Ratio) used in the compressor process for the acoustic signal y_n1 can be displayed on a display device (not shown) of the acoustic processing device 100 so that the user can visually grasp them. In FIG. 16, in addition to the numerical values of the threshold Z and the compression ratio, a graph showing the input/output relationship before and after the compressor process is displayed.
  • Specifically, the arithmetic processing device 22 executes the program stored in the storage device 24, thereby realizing a function of displaying on the display device the parameters (for example, the threshold Z and the compression ratio) applied to the third process.
  • The mixing unit 343 in FIG. 13 generates the acoustic signal z by mixing the N acoustic signals v_1 to v_N generated by the third processing unit 347. That is, an acoustic signal z representing a mixed sound of the performance sounds of the N musical instruments 10_1 to 10_N of different types is generated.
  • the sound emitting device 14 emits sound according to the acoustic signal z, that is, sound according to the performance sounds of the N musical instruments 10_1 to 10_N.
  • FIG. 17 is a flowchart of processing of the entire sound processing apparatus 100 according to the third embodiment.
  • the process shown in FIG. 17 is started when each player performs a musical instrument (for example, bass drum and bass).
  • the process from the process (SA1) for acquiring the acoustic signal x_n by the first processing unit 341_n to the process (SA5) for generating the acoustic signal y_n is the same as in the first embodiment.
  • In step SA2 for specifying the instrument type A_n, for example, the type "bass drum" is specified by analyzing the acoustic signal x_2 generated from the musical instrument 10_2, and the type "bass" is specified by analyzing the acoustic signal x_3 generated from the musical instrument 10_3.
  • The third processing unit 347 generates the acoustic signal v_2 by performing the third process on the acoustic signal y_2 corresponding to one of the types A_2 and A_3 of the musical instruments whose sound generation bands coincide, and outputs the acoustic signal y_3 and the acoustic signals y_na of the musical instruments whose sound generation bands do not overlap with those of other musical instruments as they are as the acoustic signals v_3 and v_na (SC1).
  • The adjustment unit 34 (mixing unit 343) generates the acoustic signal z by mixing the N acoustic signals v_1 to v_N generated by the third processing unit 347 (SA6).
  • the sound emitting device 14 emits sound according to the acoustic signal z, that is, sound according to the performance sounds of the N musical instruments 10_1 to 10_N (SA7).
  • the same effect as in the first embodiment can be obtained.
  • In the third embodiment, when the frequency band corresponding to the instrument type A_n specified by the specifying unit 32 overlaps between the acoustic signal y_n1 and the acoustic signal y_n2, the acoustic component of the acoustic signal y_n1 that overlaps the acoustic signal y_n2 on the time axis is relatively suppressed. Therefore, there is an advantage that the performance sound corresponding to the acoustic signal y_n2, which contains the other acoustic component, can be heard more easily than the performance sound corresponding to the acoustic signal y_n1, which contains the relatively suppressed acoustic component. Further, since the third process is performed with reference to the priority information P2, it is possible to appropriately adjust the acoustic signal y_n1, that is, to appropriately generate the acoustic signal v_n1.
  • the musical instrument type A_n is identified by analyzing the acoustic signal x_n corresponding to the musical instrument 10_n, but the identifying method of the musical instrument type A_n is not limited to the analysis of the acoustic signal x_n.
  • For example, the specifying unit 32 may detect an operation by which the user designates a musical instrument using an input device (for example, an operation of selecting a musical instrument from a plurality of candidates), and specify the instrument type A_n indicated by that operation. In other words, the input device is an operating device that allows the user to instruct the acoustic processing device 100 of the instrument type A_n. On the other hand, according to the configuration in which the instrument type A_n is specified by analyzing the acoustic signal x_n as in the above embodiments, the burden on the user can be reduced compared with the configuration in which the instrument type A_n is directly specified by a user instruction. Further, there is an advantage that the instrument type A_n can be specified even when the user does not recognize the type of instrument corresponding to the performance sound.
  • In the above embodiments, the first process, the second process, and the third process are exemplified as processes according to the instrument type A_n specified by the specifying unit 32, but the processes by which the adjustment unit 34 adjusts the acoustic signal are not limited to these examples. In other words, the content of the process is arbitrary as long as it adjusts the acoustic signal according to the instrument type A_n specified by the specifying unit 32. Therefore, for example, a configuration in which the adjustment unit 34 performs both the second process and the third process after the first process, or a configuration in which the second process and the third process are each performed independently, may be employed.
  • the configuration in which the instrument name is the type of the instrument is illustrated, but the instrument type is not limited to the instrument name.
  • The type of musical instrument is arbitrary as long as it is information indicating what kind of sound the performance sound represented by the acoustic signal is; for example, the presence or absence of harmonicity (whether the performance sound has a harmonic structure) or information indicating the sound generation band (one of "low range", "middle range", and "high range") may be used as the instrument type.
  • It is also possible to take the reliability of the instrument type A_n into account when adjusting the acoustic signal x_n. The reliability is the accuracy of the instrument type A_n specified by the specifying unit 32 (the likelihood that the identification result is correct). For example, the similarity (for example, distance or correlation) to the feature amount of the acoustic signal x_n is calculated for each of a plurality of reference feature amounts prepared in advance for different instrument types, and the instrument type whose similarity is high (whose distance is small or whose correlation is high) is specified as the type A_n; the specifying unit 32 then sets the reliability of the identification result according to that similarity. For example, a configuration that uses the similarity itself as the reliability, or a configuration that calculates the reliability by a predetermined calculation using the similarity, may be employed.
  • the adjusting unit 34 controls the degree of adjustment of the acoustic signal x_n according to the reliability of the instrument type A_n specified by the specifying unit 32.
  • Specifically, the adjustment unit 34 controls the degree of adjustment of the acoustic signal x_n so that the higher the reliability of the type A_n specified for the acoustic signal x_n, the higher the degree of adjustment of the acoustic signal x_n (and the lower the reliability, the lower the degree of adjustment). In this configuration, the adjustment of the acoustic signal x_n is moderated according to the reliability of the result of specifying the instrument type A_n. That is, when the reliability of the instrument type A_n is high, priority is given to imparting the acoustic characteristic suitable for that instrument, whereas when the reliability is low (when the identification result of the type A_n may be erroneous), the imparting of the acoustic characteristic is suppressed because the acoustic characteristic prepared for that instrument is not necessarily valid for the acoustic signal x_n. An appropriate adjustment according to the result of specifying the instrument type A_n can thus be realized. Similarly, in the second and third embodiments, the reliability can be taken into account not only for the adjustment of the acoustic signal x_n but also for the adjustment of the acoustic signal y_n.
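  • A sketch of one way to realise the reliability-dependent control described above: the fully adjusted spectrum is blended with the original in proportion to the reliability, so a low-confidence identification changes the signal less. The linear blend is an assumption; the text only requires that a higher reliability yield a higher degree of adjustment.

```python
import numpy as np

def adjust_with_reliability(X, Y_full, reliability):
    """Blend the fully adjusted spectrum with the original according to reliability in [0, 1]."""
    r = float(np.clip(reliability, 0.0, 1.0))
    return (1.0 - r) * X + r * Y_full     # low reliability -> smaller adjustment
```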
  • In the second embodiment, the second process is performed on the acoustic signal y_n1, which is one of the acoustic signals y_n1 and y_n2 whose sound generation bands overlap each other, but the target of the second process is not limited to the acoustic signal y_n1. For example, the second process may be performed on the acoustic signal y_n2, or on both the acoustic signal y_n1 and the acoustic signal y_n2. It is also possible to change, for each of the bands B1 to B3, which acoustic signal y_n the second process is performed on.
  • That is, as long as the process suppresses the acoustic component corresponding to the instrument type with the lower priority relative to the acoustic component corresponding to the instrument type with the higher priority, the target of the second process is arbitrary. Likewise, the target of the third process is arbitrary.
  • As understood from the above description, the process of suppressing one acoustic component A of the acoustic signal y_n1 and the acoustic signal y_n2 relative to the other acoustic component B includes both a process of suppressing the acoustic component A itself and a process of enhancing the acoustic component B with respect to the acoustic component A.
  • It is also possible for the suppression unit 80 to perform the second process on one or more of three acoustic signals y_n (an acoustic signal y_n1, an acoustic signal y_n2, and an acoustic signal y_n3) whose sound generation bands overlap. For example, the suppression unit 80 performs the second process so that, among the acoustic signal y_n1, the acoustic signal y_n2, and the acoustic signal y_n3, the acoustic components corresponding to the other two instrument types are relatively suppressed with respect to the acoustic component corresponding to the instrument type with the highest priority.
  • the sound processing device 100 exemplified in each of the above-described embodiments is suitably realized by the cooperation of the arithmetic processing device 22 and the program as described above.
  • The program according to a preferred aspect of the present invention causes a computer to execute a specifying procedure for specifying the instrument type A_n corresponding to the performance sound represented by an acoustic signal, and an adjustment procedure for adjusting the acoustic signal according to the specified instrument type A_n.
  • This program can be provided in a form stored in a computer-readable recording medium and installed in the computer.
  • The recording medium is, for example, a non-transitory recording medium; an optical recording medium (optical disc) such as a CD-ROM is a good example, but any known recording medium such as a semiconductor recording medium or a magnetic recording medium can be included.
  • The non-transitory recording medium includes any recording medium other than a transitory propagating signal, and does not exclude a volatile recording medium. The program exemplified above can also be provided in the form of distribution via a communication network and installed in a computer.
  • the present invention is also specified as an operation method (acoustic processing method) of the acoustic processing device 100 according to each of the above-described embodiments.
  • the acoustic processing method according to a preferred aspect of the present invention specifies the musical instrument type A_n corresponding to the performance sound represented by the acoustic signal, and adjusts the acoustic signal according to the identified musical instrument type A_n.
  • A recording medium according to a preferred aspect (Aspect 1) of the present invention records a program for causing a computer to execute a specifying procedure for specifying the type of musical instrument corresponding to the performance sound represented by an acoustic signal, and an adjustment procedure for adjusting the acoustic signal according to the specified type of musical instrument. In the above configuration, the acoustic signal is adjusted according to the specified type of musical instrument. Therefore, compared with a method in which the user instructs the adjustment content for the acoustic signal, it is possible to appropriately adjust the acoustic signal without requiring specialized knowledge regarding the adjustment content and acoustic characteristics.
  • Aspect 2 In a preferred example (Aspect 2) of Aspect 1, in the adjustment procedure, the first process of bringing the frequency characteristic of the acoustic signal close to the target characteristic corresponding to the specified type of musical instrument is performed on the acoustic signal.
  • In the above configuration, the characteristics of each bandpass filter are controlled so that the frequency characteristic of the acoustic signal approaches the target characteristic (according to the difference between the two). Therefore, compared with a method in which the characteristics of the filter that processes the acoustic signal are fixed (a method that does not depend on the frequency characteristic of the acoustic signal), the degree of adjustment becomes small when the acoustic signal is already close to the target characteristic. In other words, it is possible to bring the acoustic signal closer to the target characteristic while making use of the original frequency characteristic of the acoustic signal.
  • Aspect 3 In a preferred example (Aspect 3) of Aspect 1 or Aspect 2, in the specifying procedure, the type of musical instrument is specified for each of a first acoustic signal and a second acoustic signal corresponding to different channels, and in the adjustment procedure, when the frequency band corresponding to the specified type of musical instrument overlaps between the first acoustic signal and the second acoustic signal, a second process is performed that suppresses the acoustic component of that frequency band in one of the first acoustic signal and the second acoustic signal relative to the other.
  • Aspect 4 In a preferred example (Aspect 4) of Aspect 1 or Aspect 2, in the specifying procedure, the type of musical instrument is specified for each of a first acoustic signal and a second acoustic signal corresponding to different channels, and in the adjustment procedure, when the frequency band corresponding to the specified type of musical instrument overlaps between the first acoustic signal and the second acoustic signal, a third process is performed that relatively suppresses, in one of the first acoustic signal and the second acoustic signal, the acoustic component that overlaps the other on the time axis.
  • In a preferred example of Aspect 3 or Aspect 4, the adjustment procedure refers to priority information that designates a priority for each instrument type, and suppresses, of the first acoustic signal and the second acoustic signal, the acoustic component whose priority designated by the priority information for the specified instrument type is lower, relative to the other.
  • In the above configuration, it is possible to adjust the acoustic signal more appropriately than in a configuration in which the acoustic component in one of the first acoustic signal and the second acoustic signal is relatively suppressed without referring to the priority information.
  • In a further preferred example, the priority information specifies a priority for each of a plurality of bands on the frequency axis. Therefore, the acoustic signal can be adjusted more precisely.
  • ⁇ Aspect 8> In a preferred example (Aspect 8) of Aspect 1, in the specifying procedure, the type of musical instrument is specified by analyzing an acoustic signal.
  • In the above configuration, since the type of musical instrument is specified by analyzing the acoustic signal, the burden on the user can be reduced compared with, for example, a configuration in which the type of musical instrument is specified according to an instruction from the user. Further, there is an advantage that the type of musical instrument can be specified even when the user does not recognize the type of musical instrument corresponding to the performance sound.
  • ⁇ Aspect 9> In a preferred example (aspect 9) of aspect 8, in the adjustment procedure, the adjustment of the acoustic signal is controlled according to the reliability of the result of specifying the type of instrument.
  • In the above configuration, the adjustment of the acoustic signal is controlled according to the reliability of the result of specifying the type of musical instrument. Therefore, for example, when the reliability of the instrument type is high, priority is given to imparting the acoustic characteristic suitable for that instrument, whereas when the reliability is low (when the identification result may be erroneous), the imparting of the acoustic characteristic is suppressed because the acoustic characteristic prepared for that instrument is not necessarily valid for the acoustic signal. An appropriate adjustment according to the result of specifying the type of musical instrument can thus be realized.
  • <Aspect 10> In a preferred example (Aspect 10), in the adjustment procedure, the degree of adjustment is controlled so that the higher the reliability, the higher the degree of adjustment (and the lower the reliability, the lower the degree of adjustment).
  • An acoustic processing device according to a preferred aspect of the present invention includes a specifying unit that specifies the type of musical instrument corresponding to the performance sound represented by an acoustic signal, and an adjustment unit that adjusts the acoustic signal according to the type of instrument specified by the specifying unit.
  • the acoustic processing method according to a preferred aspect (aspect 12) of the present invention identifies the type of musical instrument corresponding to the performance sound represented by the acoustic signal, and adjusts the acoustic signal according to the identified musical instrument type.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Auxiliary Devices For Music (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Electrophonic Musical Instruments (AREA)

Abstract

Provided is a computer-readable recording medium having recorded thereon a program for causing a computer to execute: an identifying procedure for identifying a kind of musical instrument corresponding to sound to be played, represented by an acoustic signal; and an adjusting procedure for adjusting the acoustic signal in accordance with the identified kind of musical instrument.

Description

Recording medium, sound processing apparatus, and sound processing method
 The present invention relates to a technique for processing an acoustic signal.
 Various techniques for processing acoustic signals have been proposed. For example, Patent Document 1 discloses a technique for processing an acoustic signal by a user specifying parameters for selecting an algorithm and parameters for selecting a waveform of an operator (FM synthesizer).
JP 2011-197429 A
 However, in the technique of Patent Document 1, in order to specify parameters for processing an acoustic signal, the user needs specialized knowledge and experience regarding the adjustment contents and acoustic characteristics of the acoustic signal, and it is difficult for users without such knowledge and experience to appropriately set the parameters for processing the acoustic signal. In view of the above circumstances, an object of the present invention is to appropriately adjust an acoustic signal without requiring specialized knowledge regarding adjustment contents and acoustic characteristics.
 以上の課題を解決するために、本発明の記録媒体は、コンピュータに、音響信号が表す演奏音に対応する楽器の種類を特定する特定手順と、特定した楽器の種類に応じて音響信号を調整する調整手順とを実行させるためのプログラムを記録している。 In order to solve the above problems, the recording medium of the present invention adjusts an acoustic signal according to a specified procedure for specifying the type of musical instrument corresponding to the performance sound represented by the acoustic signal, and the specified type of musical instrument. The program for executing the adjustment procedure is recorded.
 本発明の好適な態様に係る音響処理装置は、音響信号が表す演奏音に対応する楽器の種類を特定する特定部と、特定部が特定した楽器の種類に応じて音響信号を調整する調整部とを具備する。 An acoustic processing device according to a preferred aspect of the present invention includes a specifying unit that specifies a type of musical instrument corresponding to a performance sound represented by an acoustic signal, and an adjustment unit that adjusts the acoustic signal according to the type of instrument specified by the specifying unit It comprises.
 本発明の好適な態様に係る音響処理方法は、音響信号が表す演奏音に対応する楽器の種類を特定し、特定した楽器の種類に応じて音響信号を調整する。 The acoustic processing method according to a preferred aspect of the present invention identifies the type of musical instrument corresponding to the performance sound represented by the acoustic signal, and adjusts the acoustic signal according to the identified type of musical instrument.
FIG. 1 is a configuration diagram of an acoustic processing apparatus according to a first embodiment of the present invention.
FIG. 2 is an explanatory diagram of a first process.
FIG. 3 is a configuration diagram of a first processing unit.
FIG. 4 is an explanatory diagram of processing of a characteristic imparting unit.
FIG. 5 is a flowchart of processing of the entire acoustic processing apparatus.
FIG. 6 is a configuration diagram of an acoustic processing apparatus according to a second embodiment of the present invention.
FIG. 7 is an explanatory diagram of band information.
FIG. 8 is an explanatory diagram of priority information.
FIG. 9 is a configuration diagram of a second processing unit.
FIG. 10 is an explanatory diagram of a second process.
FIG. 11 is a display example of parameters adjusted by the second process.
FIG. 12 is a flowchart of processing of the entire acoustic processing apparatus.
FIG. 13 is a configuration diagram of an acoustic processing apparatus according to a third embodiment of the present invention.
FIG. 14 is an explanatory diagram of priority information.
FIG. 15 is an explanatory diagram of a third process.
FIG. 16 is a display example of parameters applied to the third process.
FIG. 17 is a flowchart of processing of the entire acoustic processing apparatus.
<First Embodiment>
FIG. 1 is a configuration diagram of an acoustic processing apparatus 100 according to the first embodiment of the present invention. As illustrated in FIG. 1, a plurality (N) of signal supply devices 12_1 to 12_N and a sound emitting device 14 are connected to the acoustic processing apparatus 100 of the first embodiment. The signal supply device 12_n (n = 1 to N) supplies an acoustic signal x_n to the acoustic processing apparatus 100. The acoustic signal x_n is a time-domain signal representing the waveform of the performance sound of an instrument 10_n. Specifically, the signal supply device 12_n is a sound collection device that picks up the performance sound of the instrument 10_n and generates the acoustic signal x_n. It is also possible to use, as the instrument 10_n, an electric musical instrument that incorporates, as the signal supply device 12_n, a pickup for detecting the vibration of a sound source such as a string. Alternatively, a playback device that acquires an acoustic signal x_n corresponding to the performance sound of the instrument 10_n from a portable or built-in recording medium and supplies it to the acoustic processing apparatus 100, or a communication device that receives an acoustic signal x_n corresponding to the performance sound of the instrument 10_n over a communication network and supplies it to the acoustic processing apparatus 100, may be employed as the signal supply device 12_n. Illustration of the A/D converter that converts the acoustic signal x_n from analog to digital is omitted for convenience.
The acoustic processing apparatus 100 of the first embodiment generates an acoustic signal z by mixing N acoustic signals y_1 to y_N, each of which is produced by adjusting one of the N (N-channel) acoustic signals x_1 to x_N supplied from the N signal supply devices 12_1 to 12_N. That is, the acoustic processing apparatus 100 of the first embodiment is an N-channel mixing system. The sound emitting device 14 (for example, a speaker or headphones) emits sound waves corresponding to the acoustic signal z generated by the acoustic processing apparatus 100. Illustration of a D/A converter that converts the acoustic signal z from digital to analog, an amplifier that amplifies the acoustic signal z, and the like is omitted for convenience. Although the first embodiment illustrates the case where the acoustic signal z is supplied to the sound emitting device 14 and reproduced, it is also possible to store the acoustic signal z in a storage device 24 or to transmit it to a reproduction device or the like separate from the acoustic processing apparatus 100.
As illustrated in FIG. 1, the acoustic processing apparatus 100 is realized by a computer system including an arithmetic processing device 22 and the storage device 24. The storage device 24 stores a program executed by the arithmetic processing device 22 and various data used by the arithmetic processing device 22. A known recording medium such as a semiconductor recording medium or a magnetic recording medium, or a combination of plural types of recording media, can be arbitrarily employed as the storage device 24. A configuration in which the acoustic signals x_n are stored in the storage device 24 (in which case the signal supply devices 12_n are omitted) is also suitable.
The storage device 24 of the first embodiment stores a plurality of target characteristics R respectively corresponding to different instrument types. The instrument type is, for example, the name of the sound source (specifically, an instrument name or a performance part name). In the first embodiment, instrument names such as guitar and bass drum are exemplified as instrument types. The target characteristic R of the first embodiment is a frequency characteristic. Specifically, the target characteristic R corresponding to any one instrument type is a suitable acoustic characteristic that serves as a target for the performance sound of that type of instrument. The method of generating the target characteristic R is arbitrary; for example, the target characteristic R may be generated by analyzing existing acoustic data recorded on a recording medium such as a music CD. It is also possible for a producer such as an acoustic engineer to generate the target characteristic R manually.
The arithmetic processing device 22 executes the program stored in the storage device 24, thereby realizing a plurality of functions (an identifying unit 32 and an adjusting unit 34) for generating the acoustic signal z from the N acoustic signals x_1 to x_N. A configuration in which the functions of the arithmetic processing device 22 are distributed across a plurality of devices, or a configuration in which a dedicated electronic circuit (for example, a DSP) realizes some of the functions of the arithmetic processing device 22, may also be employed.
The identifying unit 32 analyzes each of the N acoustic signals x_1 to x_N supplied from the N signal supply devices 12_1 to 12_N, thereby identifying the instrument type A_n corresponding to the performance sound represented by each acoustic signal x_n. Specifically, the identifying unit 32 compares a feature amount analyzed from the acoustic signal x_n with each of a plurality of reference feature amounts prepared in advance for different instrument types, and identifies, as the type A_n of the instrument 10_n, the instrument type corresponding to the reference feature amount most similar to the feature amount of the acoustic signal x_n. As the feature amount used for identifying the type A_n, for example, MFCC (Mel-Frequency Cepstrum Coefficients), which represents the timbral characteristics of the performance sound of an instrument, is suitable. Techniques described in Japanese Patent Applications No. 2015-191026 and No. 2015-191928 may be employed for identifying the instrument type A_n. For example, a pattern recognition algorithm such as an SVM (Support Vector Machine) can be used to identify the type A_n.
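As one hedged illustration of this kind of identification, the sketch below extracts averaged MFCC features with librosa and classifies them with an SVM from scikit-learn. The library choices, the 13-coefficient feature averaging, and the label set are assumptions made for illustration and are not specified by this disclosure.

```python
# Hypothetical sketch of instrument-type identification from an acoustic signal x_n.
# librosa/scikit-learn and the label set are illustrative assumptions, not the disclosed method.
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_feature(signal: np.ndarray, sr: int) -> np.ndarray:
    """Average MFCCs over time to obtain one timbre feature vector per signal."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)  # shape: (13, frames)
    return mfcc.mean(axis=1)

def train_identifier(train_signals, sr: int) -> SVC:
    """train_signals: list of (waveform, label) pairs, e.g. labels like 'bass drum', 'bass'."""
    X = np.stack([mfcc_feature(s, sr) for s, _ in train_signals])
    y = [label for _, label in train_signals]
    return SVC(kernel="rbf").fit(X, y)

def identify_instrument(model: SVC, x_n: np.ndarray, sr: int) -> str:
    """Return the instrument type A_n estimated for the performance sound in x_n."""
    return model.predict(mfcc_feature(x_n, sr)[None, :])[0]
```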
The adjusting unit 34 generates the acoustic signal z by adjusting each acoustic signal x_n according to the instrument type A_n identified by the identifying unit 32. The adjusting unit 34 of the first embodiment includes N first processing units 341_1 to 341_N corresponding respectively to the N acoustic signals x_1 to x_N, and a mixing unit 343. As illustrated in FIG. 2, the first processing unit 341_n executes, on the acoustic signal x_n, a first process that brings the frequency characteristic X_n (solid line) of the acoustic signal x_n closer to the target characteristic R (dotted line) corresponding to the instrument type A_n identified by the identifying unit 32.
FIG. 3 is a configuration diagram of an arbitrary one of the first processing units 341_n. The first processing unit 341_n includes a frequency analysis unit 40, a characteristic imparting unit 50, and a waveform generation unit 60, and generates the acoustic signal y_n by executing the first process on the acoustic signal x_n. That is, the acoustic signal y_n has an acoustic characteristic close to the target characteristic R corresponding to the instrument type A_n.
The frequency analysis unit 40 in FIG. 3 analyzes the acoustic signal x_n supplied from the signal supply device 12_n, thereby sequentially generating the frequency characteristic (frequency spectrum) X_n of the acoustic signal x_n for each unit section (frame) on the time axis. A known frequency analysis such as the short-time Fourier transform can be arbitrarily employed to calculate the frequency characteristic X_n. It is also possible to use a series of band-pass filters with different pass bands (a filter bank) as the frequency analysis unit 40.
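As a minimal sketch of such frame-wise analysis, assuming scipy and an arbitrarily chosen window length and hop, the unit-section spectra X_n could be obtained as follows.

```python
# Illustrative frame-wise frequency analysis (unit-section spectra X_n) via STFT.
# scipy and the chosen window/hop sizes are assumptions of this sketch.
import numpy as np
from scipy.signal import stft

def frequency_characteristics(x_n: np.ndarray, sr: int):
    """Return frequencies, frame times, and the complex spectrum X_n per unit section."""
    freqs, times, X_n = stft(x_n, fs=sr, nperseg=2048, noverlap=1536)
    return freqs, times, X_n
```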
The characteristic imparting unit 50 brings the frequency characteristic X_n of the acoustic signal x_n closer to the target characteristic R corresponding to the instrument type A_n identified by the identifying unit 32, thereby sequentially generating the frequency characteristic (frequency spectrum) Y_n of the acoustic signal y_n for each unit section.
FIG. 4 is an explanatory diagram of the processing of the characteristic imparting unit 50. Specifically, as illustrated in FIG. 4, the characteristic imparting unit 50 executes, as the first process, an equalizing process that brings the frequency characteristic X_n of the acoustic signal x_n closer to the target characteristic R using a plurality of band-pass filters with different characteristics. The characteristic of each band-pass filter is defined by parameters such as a frequency point (for example, a center frequency), a gain, and a Q value. The characteristic imparting unit 50 calculates the parameters of each of the band-pass filters so that the frequency characteristic X_n approaches the target characteristic R, and then applies the band-pass filters to the frequency characteristic X_n to generate the frequency characteristic Y_n of the acoustic signal y_n. The total number of band-pass filters is arbitrary. Increasing the total number improves the ability to adjust the acoustic signal x_n, but also increases the processing load of the adjusting unit 34 (characteristic imparting unit 50) and hence of the acoustic processing apparatus 100 as a whole; four filters are therefore used in the first embodiment. It is also possible to control the total number of band-pass filters variably.
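One way to picture this target-matching equalization is the sketch below, which fits a per-band gain to the difference between the target characteristic R and the measured spectrum X_n over four bands. The band layout, the dB-domain fitting, and the simple multiplicative band gains are assumptions for illustration, not the disclosed filter design.

```python
# Hedged sketch: move X_n toward a target characteristic R with a small number of bands.
# The four-band layout and the dB-domain gain fitting are illustrative assumptions.
import numpy as np

def equalize_toward_target(X_n: np.ndarray, R: np.ndarray, freqs: np.ndarray,
                           band_edges=(20.0, 200.0, 1000.0, 5000.0, 20000.0)) -> np.ndarray:
    """Return Y_n whose per-band level is moved toward the target characteristic R.

    X_n, R: magnitude spectra sampled at `freqs` (same shape).
    band_edges: edges of the bands handled by the band filters (four bands here).
    """
    eps = 1e-12
    Y_n = X_n.copy()
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        band = (freqs >= lo) & (freqs < hi)
        if not np.any(band):
            continue
        # Gain (in dB) that brings the mean level of this band onto the target.
        gain_db = 20.0 * np.log10((R[band].mean() + eps) / (X_n[band].mean() + eps))
        Y_n[band] = X_n[band] * (10.0 ** (gain_db / 20.0))
    return Y_n
```

Because the gain is derived from the difference between X_n and R, a signal that is already close to the target is barely altered, which matches the behavior described for the first process.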
The waveform generation unit 60 in FIG. 3 generates the time-domain acoustic signal y_n from the frequency characteristics Y_n generated by the characteristic imparting unit 50 for each unit section. The short-time inverse Fourier transform is suitably used for generating the acoustic signal y_n.
The mixing unit 343 in FIG. 1 generates the acoustic signal z by mixing the acoustic signals y_n generated by the first processing units 341_n (waveform generation units 60). That is, an acoustic signal z representing a mixture of the performance sounds of the N instruments 10_1 to 10_N of different types is generated. The sound emitting device 14 emits sound corresponding to the acoustic signal z, that is, sound corresponding to the performance sounds of the N instruments 10_1 to 10_N.
FIG. 5 is a flowchart of the processing of the entire acoustic processing apparatus 100. The processing in FIG. 5 starts when a performer begins playing an instrument (for example, a bass drum). The first processing unit 341_n acquires the acoustic signal x_n generated by picking up the performance sound of the bass drum (SA1). The identifying unit 32 identifies, for each of the N acoustic signals x_1 to x_N, the instrument type A_n corresponding to the performance sound represented by the acoustic signal x_n (SA2). For example, the type "bass drum" is identified from the analysis of the acoustic signal x_2 generated for the instrument 10_2, and the type "bass" is identified from the analysis of the acoustic signal x_3 generated for the instrument 10_3. The first processing unit 341_n (frequency analysis unit 40) analyzes each acoustic signal x_n and sequentially generates its frequency characteristic (frequency spectrum) X_n for each unit section (frame) on the time axis (SA3). The first processing unit 341_n (characteristic imparting unit 50) executes the first process, which brings each frequency characteristic X_n of the acoustic signal x_n closer to the target characteristic R corresponding to the instrument type A_n (bass drum) identified by the identifying unit 32, thereby sequentially generating the frequency characteristic (frequency spectrum) Y_n of each acoustic signal y_n for each unit section (SA4). The first processing unit 341_n (waveform generation unit 60) generates the time-domain acoustic signal y_n from each frequency characteristic Y_n generated by the characteristic imparting unit 50 (SA5). That is, an acoustic signal y_n having a frequency characteristic Y_n close to the frequency characteristic represented by the target characteristic R of the instrument type A_n (bass drum) is generated. The adjusting unit 34 (mixing unit 343) generates the acoustic signal z by mixing the acoustic signals y_n generated by the first processing units 341_n (SA6). The sound emitting device 14 emits sound corresponding to the acoustic signal z, that is, sound corresponding to the performance sounds of the N instruments 10_1 to 10_N (SA7).
As understood from the above description, the acoustic signal x_n is adjusted according to the instrument type A_n identified by the identifying unit 32. Therefore, compared with a configuration in which the user specifies the adjustment contents for the acoustic signal x_n, the acoustic signal x_n can be adjusted appropriately without requiring specialized knowledge regarding adjustment contents and acoustic characteristics. In the first embodiment in particular, the characteristic of each band-pass filter is controlled so that the frequency characteristic X_n of the acoustic signal x_n approaches the target characteristic R (that is, according to the difference between the two). Therefore, compared with a configuration in which the characteristics of the filters that process the acoustic signal x_n are fixed (a configuration that does not depend on the frequency characteristic X_n), the degree to which the acoustic signal x_n is adjusted is small when the acoustic signal x_n is already close to the target characteristic R. In other words, the acoustic signal x_n can be brought closer to the target characteristic R while preserving its original frequency characteristic X_n.
<Second Embodiment>
A second embodiment of the present invention will be described. In each of the embodiments exemplified below, elements whose operations and functions are the same as those of the first embodiment are denoted by the reference signs used in the description of the first embodiment, and detailed descriptions thereof are omitted as appropriate.
FIG. 6 is a configuration diagram of the acoustic processing apparatus 100 according to the second embodiment. The adjusting unit 34 of the second embodiment has a configuration in which a second processing unit 345 is added to the plural (N) first processing units 341_1 to 341_N and the mixing unit 343 of the first embodiment. As in the first embodiment, the first processing unit 341_n generates the acoustic signal y_n by the first process on the acoustic signal x_n. The second processing unit 345 generates an acoustic signal w_n from the acoustic signal y_n processed by each first processing unit 341_n. The mixing unit 343 generates the acoustic signal z by mixing the N acoustic signals w_1 to w_N processed by the second processing unit 345.
The storage device 24 of the second embodiment stores the target characteristics R, band information T, and priority information P1. The target characteristics R are the same as in the first embodiment. FIG. 7 is an explanatory diagram of the band information T. The band information T designates, for each instrument type A_n, the frequency band in which the acoustic components of the performance sound of that instrument are dominant (hereinafter referred to as the "sounding band"). Specifically, as illustrated in FIG. 7, the band information T designates one of "low range", "middle range", and "high range" as the sounding band. For example, when the instrument type A_n is "bass", the sounding band corresponding to "bass" is "low range".
FIG. 8 is an explanatory diagram of the priority information P1. The priority information P1 designates a priority for each instrument type for each of a plurality of bands B1 to BM on the frequency axis. Specifically, as illustrated in FIG. 8, the priority information P1 designates the priorities with integers, for example. In band B1, for example, the priorities are designated as "keyboard = 1, bass drum = 3, bass = 4, guitar = 2, ...", so the priorities descend in the order "... > bass > bass drum > guitar > keyboard ...". The bandwidth and total number of the bands B1 to BM are arbitrary. For example, the plurality of bands B1 to BM may be reduced to three bands B1 to B3, with each of the bands B1 to B3 corresponding to one of the sounding bands "low range", "middle range", and "high range" indicated by the band information T.
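The band information T and the priority information P1 can be pictured as simple lookup tables. The sketch below uses Python dictionaries with the band-B1 values quoted above; all other entries are placeholder assumptions added only for illustration.

```python
# Illustrative in-memory form of the band information T and priority information P1.
# Entries other than those quoted in the text are placeholder assumptions.
BAND_INFO_T = {
    "bass": "low",        # sounding band of each instrument type A_n
    "bass drum": "low",
    "guitar": "mid",      # assumed
    "keyboard": "mid",    # assumed
}

# Priority per frequency band B1..BM (a larger integer means a higher priority).
PRIORITY_P1 = {
    "B1": {"keyboard": 1, "bass drum": 3, "bass": 4, "guitar": 2},
    "B2": {"keyboard": 2, "bass drum": 1, "bass": 3, "guitar": 4},  # assumed values
    "B3": {"keyboard": 4, "bass drum": 1, "bass": 2, "guitar": 3},  # assumed values
}

def higher_priority(band: str, type_a: str, type_b: str) -> str:
    """Return whichever of the two instrument types has priority in the given band."""
    p = PRIORITY_P1[band]
    return type_a if p[type_a] >= p[type_b] else type_b
```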
Among the N acoustic signals y_1 to y_N, when the sounding bands corresponding to the instrument types A_n identified by the identifying unit 32 overlap between an acoustic signal y_n1 (an example of a first acoustic signal) and an acoustic signal y_n2 (an example of a second acoustic signal) (n1 ≠ n2), the second processing unit 345 executes, on one of the two signals (the acoustic signal y_n1 or the acoustic signal y_n2), a second process that suppresses the acoustic components of one of the acoustic signals y_n1 and y_n2 relative to the other, thereby generating the acoustic signal w_n1 or the acoustic signal w_n2. The acoustic signals y_n1 and y_n2 correspond to different channels.
FIG. 9 is a configuration diagram of the second processing unit 345. The second processing unit 345 includes a frequency analysis unit 70, a suppression unit 80, and a waveform generation unit 90. The frequency analysis unit 70 analyzes the acoustic signal y_n generated by each first processing unit 341_n, thereby sequentially generating the frequency characteristic Y_n of the acoustic signal y_n for each unit section on the time axis, in the same way that the frequency analysis unit 40 generates the frequency characteristic X_n. It is also possible to supply the frequency characteristic Y_n generated by the characteristic imparting unit 50 of each first processing unit 341_n directly to the suppression unit 80 (in which case the waveform generation unit 60 of each first processing unit 341_n and the frequency analysis unit 70 of the second processing unit 345 are omitted).
When the sounding bands corresponding to the instrument types A_n identified by the identifying unit 32 overlap between the acoustic signal y_n1 and the acoustic signal y_n2 (for example, when both are "low range"), the suppression unit 80 executes the second process on the acoustic signal y_n1 or the acoustic signal y_n2, thereby sequentially generating the frequency characteristic W_n1 of the acoustic signal w_n1 or the frequency characteristic W_n2 of the acoustic signal w_n2 for each unit section. The second embodiment exemplifies a configuration in which the second process is executed on the acoustic signal y_n1, which is one of the acoustic signals y_n1 and y_n2 whose sounding bands overlap, to generate the frequency characteristic W_n1 of the acoustic signal w_n1. The acoustic signal y_n2, and the acoustic signals y_n of instruments whose sounding bands do not overlap those of other instruments, are supplied as they are from the second processing unit 345 to the mixing unit 343 as the acoustic signals w_n.
Specifically, for the acoustic signals y_n1 and y_n2 whose sounding bands overlap, the suppression unit 80 performs, for each of the bands B1 to BM, the second process on the frequency characteristic Y_n1 of the acoustic signal y_n1 so as to suppress the acoustic components corresponding to the lower-priority instrument type relative to the acoustic components corresponding to the higher-priority instrument type. For the part of the second process that adjusts the frequency characteristic Y_n1, a known technique can be arbitrarily employed, for example an equalizing process that adjusts the gain of the frequency characteristic Y_n1 for each of the bands B1 to BM with a plurality of band-pass filters having different characteristics. The characteristic of each band-pass filter is defined by parameters such as a frequency point (for example, a center frequency), a gain, and a Q value. The suppression unit 80 calculates the parameters of each of the band-pass filters so that the frequency characteristic Y_n1 is suppressed, and then applies the band-pass filters to the frequency characteristic Y_n1 to generate the frequency characteristic W_n1 of the acoustic signal w_n1.
The frequency characteristic W_n1 generated by the second process from the frequency characteristic Y_n1 of the acoustic signal y_n1 whose sounding band overlaps that of another instrument is output to the waveform generation unit 90, and the frequency characteristics Y_n of the other acoustic signals y_n are output to the waveform generation unit 90 as the frequency characteristics W_n. The waveform generation unit 90 generates the time-domain acoustic signal w_n from the frequency characteristic W_n generated by the suppression unit 80 for each unit section. The short-time inverse Fourier transform is suitably used for generating the acoustic signal w_n.
FIG. 10 is an explanatory diagram of the second process. When the sounding bands of the acoustic signal y_n1 and the acoustic signal y_n2 overlap (that is, when the band information T_n1 of the instrument type A_n1 and the band information T_n2 of the instrument type A_n2 are the same), the suppression unit 80 performs the second process with reference to the priority information P1. As illustrated in FIG. 10, the second process suppresses or emphasizes the acoustic components of each of the bands B1 to BM of the acoustic signal y_n1 according to whether the priority of the acoustic signal y_n1 is lower or higher than that of the acoustic signal y_n2. Specifically, in band B1, where the priority of the instrument type A_n1 is lower than that of the instrument type A_n2, the acoustic components of the acoustic signal y_n1 are suppressed by adjusting the frequency characteristic Y_n1. On the other hand, in bands B2 and B3, where the priority of the instrument type A_n1 exceeds that of the instrument type A_n2, the acoustic components of the acoustic signal y_n1 are emphasized by adjusting the frequency characteristic Y_n1; that is, the frequency characteristic Y_n2 is suppressed relative to the frequency characteristic Y_n1. In other words, relative suppression of an acoustic component (frequency characteristic) includes both the case where one acoustic component is suppressed with respect to the other and the case where the other acoustic component is emphasized with respect to the one.
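A rough sketch of this per-band relative suppression is shown below, assuming the dictionary-style priority table sketched earlier: each band of Y_n1 is attenuated when its instrument type has the lower priority and emphasized otherwise. The fixed ±3 dB step and the band table passed in are illustrative assumptions, not the disclosed parameters.

```python
# Hedged sketch of the second process: per-band relative suppression of Y_n1
# according to the priorities of the two instrument types in each band B1..BM.
# The fixed ±3 dB step and the band-edge table are illustrative assumptions.
import numpy as np

def second_process(Y_n1: np.ndarray, freqs: np.ndarray,
                   type_n1: str, type_n2: str,
                   priority_p1: dict, band_edges: dict,
                   step_db: float = 3.0) -> np.ndarray:
    """Return W_n1: Y_n1 with each band attenuated or emphasized according to priority."""
    W_n1 = Y_n1.copy()
    for band, (lo, hi) in band_edges.items():            # e.g. {"B1": (20.0, 200.0), ...}
        sel = (freqs >= lo) & (freqs < hi)
        p1, p2 = priority_p1[band][type_n1], priority_p1[band][type_n2]
        gain_db = -step_db if p1 < p2 else step_db        # suppress if lower priority
        W_n1[sel] *= 10.0 ** (gain_db / 20.0)
    return W_n1
```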
As understood from the above description, even when the sounding bands of the instrument type A_n1 and the instrument type A_n2 overlap, one of the frequency characteristic W_n1 (generated from the frequency characteristic Y_n1 by the second process and corresponding to the instrument type A_n1) and the frequency characteristic Y_n2 (corresponding to the instrument type A_n2) is suppressed relative to the other. Before the second process is executed, the peaks of the frequency characteristic Y_n1 and the frequency characteristic Y_n2 are close to each other on the frequency axis, but as a result of the second process the two peaks move to mutually shifted positions on the frequency axis. For example, when the second process is performed by an equalizer (for example, a graphic equalizer or a parametric equalizer), the gains corresponding to the bands B1 to BM are adjusted according to the priority information P1, and, as illustrated in FIG. 11, the gains corresponding to the bands B1 to BM of the acoustic signal y_n1 can be displayed on a display device (not shown) of the acoustic processing apparatus 100 so that the user can grasp them visually. As understood from the above description, the arithmetic processing device 22 executes the program stored in the storage device 24 to realize a function of displaying, on the display device, the parameters (for example, gain or Q value) of the bands B1 to BM adjusted by the second process according to the priority information P1.
The mixing unit 343 in FIG. 6 generates the acoustic signal z by mixing the N acoustic signals w_1 to w_N generated by the second processing unit 345 (suppression unit 80). That is, an acoustic signal z representing a mixture of the performance sounds of the N instruments 10_1 to 10_N of different types is generated. The sound emitting device 14 emits sound corresponding to the acoustic signal z, that is, sound corresponding to the performance sounds of the N instruments 10_1 to 10_N.
FIG. 12 is a flowchart of the processing of the entire acoustic processing apparatus 100 in the second embodiment. The processing in FIG. 12 starts when the performers begin playing their instruments (including, for example, a bass drum and a bass). The processing from the acquisition of the acoustic signal x_n by the first processing unit 341_n (SA1) to the generation of the acoustic signal y_n (SA5) is the same as in the first embodiment. In the process of identifying the instrument type A_n (SA2), for example, the type "bass drum" is identified from the analysis of the acoustic signal x_2 generated for the instrument 10_2, and the type "bass" is identified from the analysis of the acoustic signal x_3 generated for the instrument 10_3. It is assumed here that the sounding band of the instrument "bass drum" indicated by type A_2 and the sounding band of the instrument "bass" indicated by type A_3 are both "low range", and that the instruments indicated by the other (N-2) types A_na (na ≠ 2, 3) do not include instruments with the same sounding band.
The second processing unit 345 (frequency analysis unit 70) sequentially generates, for each unit section, the frequency characteristic Y_n of each acoustic signal y_n generated by the first processing units 341_n (SB1). The second processing unit 345 (suppression unit 80) executes the second process, with reference to the priority information P1, on the frequency characteristic Y_2 corresponding to type A_2, which is one of the types A_2 and A_3 of the instruments having the same sounding band, thereby sequentially generating the frequency characteristic W_2 for each unit section, and outputs as they are, as the frequency characteristics W_n, the frequency characteristic Y_3 and the frequency characteristics Y_n of the acoustic signals y_n of instruments whose sounding bands do not overlap those of other instruments (SB2). Accordingly, even though the sounding bands of the instrument "bass drum" indicated by type A_2 and the instrument "bass" indicated by type A_3 overlap, one of the frequency characteristic W_2 corresponding to type A_2 (bass drum) and the frequency characteristic Y_3 corresponding to type A_3 (bass) is suppressed relative to the other. The second processing unit 345 (waveform generation unit 90) generates the time-domain acoustic signals w_n from the frequency characteristics W_n generated by the suppression unit 80 (SB3). The adjusting unit 34 (mixing unit 343) generates the acoustic signal z by mixing the N acoustic signals w_1 to w_N generated by the second processing unit 345 (SA6). The sound emitting device 14 emits sound corresponding to the acoustic signal z, that is, sound corresponding to the performance sounds of the N instruments 10_1 to 10_N (SA7). Note that, for the acoustic signals y_na corresponding to the types A_na indicating instruments other than those with the same sounding band (the bass drum and the bass), the processing of steps SB1 to SB3 can be omitted and the signals can be supplied as they are as the acoustic signals w_na (SA6).
The second embodiment also provides the same effects as the first embodiment. In the second embodiment in particular, when the frequency band corresponding to the instrument types identified by the identifying unit 32 overlaps between the acoustic signal y_n1 and the acoustic signal y_n2, the acoustic signal w_n1 is generated so that the acoustic components of that frequency band in one of the acoustic signals y_n1 and y_n2 are suppressed relative to the other. There is therefore an advantage that the performance sound corresponding to the non-suppressed acoustic components becomes easier to hear against the performance sound corresponding to the relatively suppressed acoustic components. In addition, since the second process is performed with reference to the priority information P1, the adjustment of the acoustic signal y_n1, that is, the generation of the acoustic signal w_n1, can be performed appropriately.
<Third Embodiment>
FIG. 13 is a configuration diagram of the acoustic processing apparatus 100 according to the third embodiment. The adjusting unit 34 of the third embodiment has a configuration in which a third processing unit 347 is added to the plural (N) first processing units 341_1 to 341_N and the mixing unit 343 of the first embodiment. As in the first embodiment, the first processing unit 341_n generates the acoustic signal y_n by the first process on the acoustic signal x_n. The third processing unit 347 generates an acoustic signal v_n from the acoustic signal y_n processed by each first processing unit 341_n. The mixing unit 343 generates the acoustic signal z by mixing the N acoustic signals v_1 to v_N processed by the third processing unit 347.
The storage device 24 of the third embodiment stores the target characteristics R, the band information T, and priority information P2. The target characteristics R are the same as in the first embodiment, and the band information T is the same as in the second embodiment. FIG. 14 is an explanatory diagram of the priority information P2. As illustrated in FIG. 14, the priority information P2 designates a priority for each instrument type A_n for each of the sounding bands indicated by the band information T (specifically, "low range", "middle range", and "high range"). As with the priority information P1 of the second embodiment, the priority information P2 designates the priorities with integers, for example.
Among the N acoustic signals y_1 to y_N, when the sounding bands corresponding to the instrument types A_n identified by the identifying unit 32 overlap between an acoustic signal y_n1 (an example of a first acoustic signal) and an acoustic signal y_n2 (an example of a second acoustic signal) (n1 ≠ n2) (for example, when both are "low range"), the third processing unit 347 executes a third process that relatively suppresses, in one of the acoustic signals y_n1 and y_n2, the acoustic components that overlap the other on the time axis, thereby generating the acoustic signal v_n1 or the acoustic signal v_n2. The acoustic signals y_n1 and y_n2 correspond to different channels.
Specifically, the third processing unit 347 performs, on the acoustic signal y_n1 or the acoustic signal y_n2 and with reference to the priority information P2, the third process for suppressing the acoustic signal corresponding to the lower-priority instrument type relative to the acoustic signal corresponding to the higher-priority instrument type. In the third embodiment, of the acoustic signals y_n1 and y_n2, the acoustic signal whose priority indicated by the priority information P2 is lower is suppressed as the target of the third process. The relative priority of an acoustic signal y_n is determined by referring to the priorities of the types A_n1 and A_n2 in the priority information P2 for the relevant sounding band; for example, when the sounding band of the type A_n1 corresponding to the acoustic signal y_n1 and the sounding band of the type A_n2 corresponding to the acoustic signal y_n2 are both "low range", the priorities of the types A_n1 and A_n2 in the sounding band "low range" of FIG. 14 are referred to. The third embodiment exemplifies a configuration in which the third process is executed on the lower-priority acoustic signal y_n1 to generate the acoustic signal v_n1. The acoustic signal y_n2, and the acoustic signals y_n of instruments whose sounding bands do not overlap those of other instruments, are supplied as they are to the mixing unit 343 as the acoustic signals v_n.
FIG. 15 is an explanatory diagram of the third process. As described above, the third process relatively suppresses, in one of the acoustic signals y_n1 and y_n2, the acoustic signal y_n1 that overlaps the other on the time axis. The case where one of the acoustic signals y_n1 and y_n2 overlaps the other on the time axis is, as illustrated in FIG. 15, the case where a peak of the time waveform of the acoustic signal y_n1 and a peak of the time waveform of the acoustic signal y_n2 overlap each other on the time axis. Overlapping peaks include, for example, not only the case where both peaks coincide on the time axis but also the case where both peaks are located within a predetermined period on the time axis. When the peaks of the time waveforms overlap, the performance sound represented by the acoustic signal y_n1 and the performance sound represented by the acoustic signal y_n2 are sounding simultaneously, and the listener tends to have difficulty hearing both performance sounds. A case where the performance sounds represented by the acoustic signals y_n1 and y_n2 sound simultaneously is assumed to be, for example, a case where the performance sound of a stringed instrument that plays continuously throughout a piece of music (that is, with long sounding periods) and the performance sound of a percussion instrument that plays discretely at specific points in the piece (that is, with short sounding periods) sound at the same time. As understood from the above description, the third process is executed when the sounding band of the type A_n1 corresponding to the acoustic signal y_n1 and the sounding band of the type A_n2 corresponding to the acoustic signal y_n2 are the same and a peak of the time waveform of the acoustic signal y_n1 and a peak of the time waveform of the acoustic signal y_n2 overlap on the time axis.
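A minimal sketch of this overlap test, assuming scipy peak picking and an arbitrarily chosen "predetermined period", is shown below; the peak-picking threshold and window length are illustrative assumptions, not values given by this disclosure.

```python
# Hedged sketch of the trigger condition for the third process: do peaks of the two
# time waveforms fall within a predetermined period of each other on the time axis?
# The peak-picking method and the window length are illustrative assumptions.
import numpy as np
from scipy.signal import find_peaks

def peaks_overlap(y_n1: np.ndarray, y_n2: np.ndarray, sr: int,
                  window_s: float = 0.05) -> bool:
    """Return True if any waveform peak of y_n1 lies within window_s seconds of a peak of y_n2."""
    p1, _ = find_peaks(np.abs(y_n1), height=0.1)   # sample indices of waveform peaks
    p2, _ = find_peaks(np.abs(y_n2), height=0.1)
    if len(p1) == 0 or len(p2) == 0:
        return False
    window = int(window_s * sr)
    # For each peak of y_n1, check whether the nearest peak of y_n2 is within the window.
    nearest = np.min(np.abs(p1[:, None] - p2[None, :]), axis=1)
    return bool(np.any(nearest <= window))
```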
A known technique can be arbitrarily employed for the third process that suppresses the acoustic signal y_n1. For example, compressor processing that compresses the signal level of the acoustic signal y_n1 is a good example of the third process. As illustrated in FIG. 15, the portion where the signal level of the acoustic signal y_n1 exceeds a threshold Z is compressed, so that the acoustic components of the acoustic signal y_n1 are suppressed relative to the acoustic signal y_n2. The compression ratio applied to the signal level of the acoustic signal y_n1 in the third process is arbitrary; for example, a configuration that compresses the signal level down to the threshold Z (that is, a compression ratio of ∞:1), or a configuration that sets the compression ratio according to the type A_n1 corresponding to the acoustic signal y_n1, may be employed. The threshold Z is selected experimentally or statistically. In sections where the signal level of the acoustic signal y_n1 is below the threshold Z, the acoustic signal y_n1 is supplied to the mixing unit 343 as the acoustic signal v_n1. A configuration in which the third process is performed on the acoustic signal y_n1 when, for example, the signal level of the acoustic signal y_n1 exceeds that of the acoustic signal y_n2 can also be suitably employed.
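As a hedged illustration of such compressor processing, the sketch below applies a threshold Z and a ratio to the samples of y_n1. Sample-wise gain computation without attack/release smoothing is a simplifying assumption, and the default ratio is illustrative only.

```python
# Hedged sketch of the third process as a simple compressor on y_n1.
# Sample-wise compression without attack/release smoothing is a simplification;
# the threshold Z and the ratio value are illustrative assumptions.
import numpy as np

def third_process_compress(y_n1: np.ndarray, threshold_z: float, ratio: float = 4.0) -> np.ndarray:
    """Compress the portions of y_n1 whose level exceeds the threshold Z."""
    v_n1 = y_n1.copy()
    level = np.abs(y_n1)
    over = level > threshold_z
    # Above the threshold, the excess level grows only by 1/ratio
    # (a very large ratio effectively clamps the level to Z).
    compressed = threshold_z + (level[over] - threshold_z) / ratio
    v_n1[over] = np.sign(y_n1[over]) * compressed
    return v_n1
```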
As understood from the above description, even when the sounding bands of the instrument type A_n1 and the instrument type A_n2 overlap, the acoustic signal y_n1 that overlaps the acoustic signal y_n2 on the time axis is suppressed relative to the acoustic signal y_n2. By performing the compressor processing (third process) on the acoustic signal y_n1, the third processing unit 347 generates, as illustrated in FIG. 15, an acoustic signal v_n1 in which the acoustic components of the acoustic signal y_n1 exceeding the threshold have been compressed. As illustrated in FIG. 16, the threshold Z and the compression ratio used in the compressor processing of the acoustic signal y_n1 can be displayed on a display device (not shown) of the acoustic processing apparatus 100 so that the user can grasp them visually. In FIG. 16, in addition to the numerical values of the threshold Z and the compression ratio, a graph showing the input/output relationship before and after the compressor processing is illustrated. As understood from the above description, the arithmetic processing device 22 executes the program stored in the storage device 24 to realize a function of displaying, on the display device, the parameters applied to the third process (for example, the threshold Z and the compression ratio).
The mixing unit 343 in FIG. 13 generates the acoustic signal z by mixing the N acoustic signals v_1 to v_N generated by the third processing unit 347. That is, an acoustic signal z representing a mixture of the performance sounds of the N instruments 10_1 to 10_N of different types is generated. The sound emitting device 14 emits sound corresponding to the acoustic signal z, that is, sound corresponding to the performance sounds of the N instruments 10_1 to 10_N.
FIG. 17 is a flowchart of the processing of the entire acoustic processing apparatus 100 in the third embodiment. The processing in FIG. 17 starts when the performers begin playing their instruments (including, for example, a bass drum and a bass). The processing from the acquisition of the acoustic signal x_n by the first processing unit 341_n (SA1) to the generation of the acoustic signal y_n (SA5) is the same as in the first embodiment. In the process of identifying the instrument type A_n (SA2), for example, the type "bass drum" is identified from the analysis of the acoustic signal x_2 generated for the instrument 10_2, and the type "bass" is identified from the analysis of the acoustic signal x_3 generated for the instrument 10_3. It is assumed that the sounding band of the instrument "bass drum" indicated by type A_2 and the sounding band of the instrument "bass" indicated by type A_3 are both "low range", and that, in the "low range" priorities indicated by the priority information P2, the priority of "bass" indicated by type A_3 exceeds that of "bass drum" indicated by type A_2. The instruments indicated by the other (N-2) types A_na (na ≠ 2, 3) are assumed not to include instruments with the same sounding band.
The third processing unit 347 executes the third process on the acoustic signal y_2 corresponding to type A_2, which is one of the types A_2 and A_3 of the instruments having the same sounding band, thereby generating the acoustic signal v_2, and outputs as they are, as the acoustic signals v_na, the acoustic signal y_3 and the acoustic signals y_na of instruments whose sounding bands do not overlap those of other instruments (SC1). As understood from the above description, even though the sounding bands of the instrument "bass drum" indicated by type A_2 and the instrument "bass" indicated by type A_3 overlap, the acoustic signal y_2, which overlaps the acoustic signal y_3 on the time axis, is suppressed relative to the acoustic signal y_3. The adjusting unit 34 (mixing unit 343) generates the acoustic signal z by mixing the N acoustic signals v_1 to v_N generated by the third processing unit 347 (SA6). The sound emitting device 14 emits sound corresponding to the acoustic signal z, that is, sound corresponding to the performance sounds of the N instruments 10_1 to 10_N (SA7).
The third embodiment also provides the same effects as the first embodiment. In the third embodiment in particular, when the frequency band corresponding to the instrument types A_n identified by the identifying unit 32 overlaps between the acoustic signal y_n1 and the acoustic signal y_n2, the acoustic components of the acoustic signal y_n1 that overlap the acoustic signal y_n2 on the time axis are relatively suppressed. There is therefore an advantage that the performance sound corresponding to the acoustic signal y_n2, which contains the non-suppressed acoustic components, becomes easier to hear against the performance sound corresponding to the acoustic signal y_n1, which contains the relatively suppressed acoustic components. In addition, since the third process is performed with reference to the priority information P2, the adjustment of the acoustic signal y_n1, that is, the generation of the acoustic signal v_n1, can be performed appropriately.
<Modifications>
 The embodiments exemplified above can be modified in various ways. Specific modifications are exemplified below. Two or more modes arbitrarily selected from the following examples may be combined as appropriate.
(1) In each of the embodiments described above, the instrument type A_n is specified by analyzing the acoustic signal x_n corresponding to the instrument 10_n, but the method of specifying the instrument type A_n is not limited to analysis of the acoustic signal x_n. For example, the specifying unit 32 may detect an operation in which the user designates an instrument with an input device (for example, an operation of selecting an instrument from a plurality of candidates) and specify the instrument type A_n designated by that operation. In this case, the specifying unit 32 is an operating device through which the user can instruct the acoustic processing apparatus 100 as to the instrument type A_n. According to the embodiments in which the instrument type A_n is specified by analyzing the acoustic signal x_n, the burden on the user is reduced compared with a configuration in which the instrument type A_n is specified directly by the user's instruction. There is also the advantage that the instrument type A_n can be specified even when the user does not know which instrument produced the performance sound.
(2) In each of the embodiments described above, the first process, the second process, and the third process are exemplified as processes corresponding to the instrument type A_n specified by the specifying unit 32, but the processes by which the adjustment unit 34 adjusts the acoustic signal are not limited to these examples. Any process that adjusts the acoustic signal in accordance with the instrument type A_n specified by the specifying unit 32 may be used. For example, a configuration in which the adjustment unit 34 performs both the second process and the third process after the first process, or a configuration in which the second process or the third process is performed on its own, may also be adopted.
(3) In each of the embodiments described above, the instrument name is used as the instrument type, but the instrument type is not limited to the instrument name. The instrument type may be any information indicating what kind of sound the performance sound represented by the acoustic signal is. For example, information indicating the presence or absence of harmonicity (whether the performance sound has a harmonic structure) or the sound generation band (one of "low range", "middle range", and "high range") may be used as the instrument type.
(4) In each of the embodiments described above, the acoustic signal x_n is adjusted according to the instrument type A_n specified by the specifying unit 32, but the reliability of the instrument type A_n may also be taken into account in adjusting the acoustic signal x_n. The reliability is the certainty of the instrument type A_n specified by the specifying unit 32 (the accuracy of the identification result). For example, in a configuration in which the similarity (for example, a distance or a correlation) between the feature amount of the acoustic signal x_n and each of a plurality of feature amounts prepared in advance for different instrument types is calculated, and the instrument type A_n with a high similarity (a small distance or a large correlation) is specified for the acoustic signal x_n, the specifying unit 32 sets the reliability of the identification result according to the similarity. For example, the similarity may be used directly as the reliability, or the reliability may be calculated by a predetermined operation applied to the similarity.
 The adjustment unit 34 controls the degree of adjustment of the acoustic signal x_n according to the reliability of the instrument type A_n specified by the specifying unit 32. For example, the adjustment unit 34 controls the degree of adjustment so that the higher the reliability of the type A_n specified for the acoustic signal x_n, the greater the degree of adjustment applied to that signal (and, conversely, the lower the reliability, the smaller the degree of adjustment). With this configuration, the adjustment of the acoustic signal x_n is restrained according to the reliability of the result of specifying the instrument type A_n. Therefore, for example, when the reliability of the instrument type A_n is high, priority is given to adding the acoustic characteristics suited to that instrument, whereas when the reliability is low (when the specification of the type A_n may be erroneous), the addition of those characteristics is restrained, because the acoustic characteristics prepared for that instrument are not necessarily appropriate for the acoustic signal x_n. Appropriate adjustment according to the result of specifying the instrument type A_n can thus be realized. Likewise, in the second and third embodiments, the reliability may be taken into account in adjusting the acoustic signal y_n in addition to the acoustic signal x_n.
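A minimal sketch of this reliability idea follows. The nearest-template classifier, the cosine similarity, and the linear blend between the unprocessed and adjusted signals are illustrative assumptions, not the claimed implementation.

```python
import numpy as np

def specify_with_reliability(feature, templates):
    """Pick the instrument type whose prepared feature vector is most similar
    to the observed feature, and use that similarity as the reliability."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    scores = {name: cosine(feature, tmpl) for name, tmpl in templates.items()}
    instrument = max(scores, key=scores.get)
    reliability = max(0.0, scores[instrument])      # usable as a weight in [0, 1]
    return instrument, reliability

def blend_by_reliability(x, x_adjusted, reliability):
    """A low-confidence identification leaves the output close to the original
    signal; a high-confidence one applies the full type-specific adjustment."""
    return (1.0 - reliability) * x + reliability * x_adjusted

# Example with made-up 3-dimensional feature vectors
templates = {"bass drum": np.array([0.9, 0.1, 0.0]),
             "bass":      np.array([0.7, 0.3, 0.1])}
inst, rel = specify_with_reliability(np.array([0.8, 0.2, 0.05]), templates)
```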
(5) In the second embodiment, the second process is performed on the acoustic signal y_n1, which is one of the acoustic signals y_n1 and y_n2 whose sound generation bands overlap each other, but the target of the second process is not limited to the acoustic signal y_n1. For example, the second process may be performed on the acoustic signal y_n2, or on both the acoustic signal y_n1 and the acoustic signal y_n2. It is also possible to change, for each of the bands B1 to B3, which acoustic signal y_n the second process is applied to. As understood from the above description, the target of the second process is arbitrary as long as the process suppresses the acoustic component corresponding to the instrument type of lower priority relative to the acoustic component corresponding to the instrument type of higher priority. Similarly, in the third embodiment, the target of the third process is arbitrary.
 In the second and third embodiments, the process of suppressing one acoustic component A of the acoustic signals y_n1 and y_n2 relative to the other acoustic component B encompasses not only attenuating the acoustic component A with respect to the acoustic component B but also emphasizing the acoustic component B with respect to the acoustic component A.
(6) In the second embodiment, the second process is performed when the frequency band of the acoustic signal y_n1 and the frequency band of the acoustic signal y_n2 overlap, but the second process can likewise be performed when the frequency bands of three or more acoustic signals y_n overlap. The suppression unit 80 executes the second process on, for example, one or more of three acoustic signals y_n (the acoustic signals y_n1, y_n2, and y_n3). Among the acoustic signals y_n1, y_n2, and y_n3, the suppression unit 80 executes the second process so as to suppress, for example, the acoustic components corresponding to the other two instrument types relative to the acoustic component corresponding to the instrument type of highest priority. Similarly, in the third embodiment, the third process can be executed when the frequency bands of three or more acoustic signals y_n overlap.
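One way such a many-channel case could be handled is sketched below: within a shared band, the channel of the highest-priority instrument type is left untouched and every other channel is attenuated. The priority values and the fixed attenuation factor are assumptions for illustration.

```python
import numpy as np

PRIORITY = {"bass": 3, "bass drum": 2, "guitar": 1}   # illustrative priority table

def suppress_low_priority(band_components, attenuation=0.5):
    """band_components maps each specified instrument type to its band-limited
    component; keep the highest-priority type and attenuate the rest."""
    winner = max(band_components, key=lambda inst: PRIORITY.get(inst, 0))
    return {inst: comp if inst == winner else comp * attenuation
            for inst, comp in band_components.items()}

# e.g. low-band components of three channels that collide in the same band
out = suppress_low_priority({"bass drum": np.zeros(1024),
                             "bass":      np.zeros(1024),
                             "guitar":    np.zeros(1024)})
```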
(7) In each of the embodiments described above, the acoustic processing apparatus 100 is a device separate from the instrument 10_n, but a configuration in which the acoustic processing apparatus 100 is mounted on the instrument 10_n (that is, a configuration in which the instrument 10_n and the acoustic processing apparatus 100 are integrated) may also be suitably adopted.
(8) The acoustic processing apparatus 100 exemplified in each of the embodiments described above is suitably realized, as described above, by cooperation between the arithmetic processing device 22 and a program. Specifically, a program according to a preferred aspect of the present invention causes a computer to execute a specifying procedure of specifying the instrument type A_n corresponding to the performance sound represented by an acoustic signal, and an adjustment procedure of adjusting the acoustic signal in accordance with the specified instrument type A_n. The program may be provided in a form stored in a computer-readable recording medium and installed in the computer. The recording medium is, for example, a non-transitory recording medium; an optical recording medium (optical disc) such as a CD-ROM is a typical example, but any known type of recording medium, such as a semiconductor recording medium or a magnetic recording medium, may be used. The non-transitory recording medium includes any recording medium other than a transitory, propagating signal, and does not exclude volatile recording media. The program may also be provided in the form of distribution via a communication network and installed in the computer.
 The present invention is also specified as an operation method (an acoustic processing method) of the acoustic processing apparatus 100 according to each of the embodiments described above. An acoustic processing method according to a preferred aspect of the present invention specifies the instrument type A_n corresponding to the performance sound represented by an acoustic signal, and adjusts the acoustic signal in accordance with the specified instrument type A_n.
(9) From the embodiments exemplified above, the following configurations, for example, can be derived.
<Aspect 1>
 A recording medium according to a preferred aspect (aspect 1) of the present invention records a program that causes a computer to execute a specifying procedure of specifying the type of musical instrument corresponding to the performance sound represented by an acoustic signal, and an adjustment procedure of adjusting the acoustic signal in accordance with the specified type of musical instrument. With this configuration, the acoustic signal is adjusted according to the specified type of musical instrument. Therefore, compared with a method in which the user specifies how the acoustic signal should be adjusted, the acoustic signal can be adjusted appropriately without requiring specialized knowledge of adjustment settings and acoustic characteristics.
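The specify-then-adjust flow of aspect 1 can be summarized in a few lines of code. This is only a sketch; the function names and the registry of per-type adjusters are assumptions, not part of the disclosure.

```python
def process(x, specify, adjusters):
    """Specify the instrument type of acoustic signal x, then apply the
    adjustment registered for that type (pass through if none is registered)."""
    instrument = specify(x)                         # e.g. "bass", "vocal", ...
    adjust = adjusters.get(instrument, lambda s: s)
    return adjust(x)

# e.g. adjusters = {"bass": bass_eq, "vocal": vocal_eq}; y = process(x, specify, adjusters)
```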
<Aspect 2>
 In a preferred example of aspect 1 (aspect 2), the adjustment procedure executes, on the acoustic signal, a first process of bringing the frequency characteristic of the acoustic signal closer to a target characteristic corresponding to the specified type of musical instrument. With this configuration, the characteristic of each band-pass filter is controlled (in accordance with the difference between the two) so that the frequency characteristic of the acoustic signal approaches the target characteristic. Therefore, compared with a method in which the characteristics of the filters that process the acoustic signal are fixed (a method that does not depend on the frequency characteristic of the acoustic signal), the degree to which the acoustic signal is adjusted is small when the acoustic signal is already close to the target characteristic. In other words, the acoustic signal can be brought closer to the target characteristic while making use of its original frequency characteristic.
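A rough sketch of such a first process is shown below: per-band gains are derived from the difference between the signal's band levels and the target curve, so a signal that already matches the target receives gains near 0 dB. The band split, the averaging, and the gain limit are assumptions, not the disclosed filter design.

```python
import numpy as np

def first_process_gains(signal_db, target_db, band_edges, max_gain_db=12.0):
    """Per-band gains (in dB) that move the signal's band levels toward the
    target characteristic; apply them with a band-pass filter bank or EQ."""
    gains = []
    for lo, hi in band_edges:                      # band_edges: FFT-bin ranges
        diff = target_db[lo:hi].mean() - signal_db[lo:hi].mean()
        gains.append(float(np.clip(diff, -max_gain_db, max_gain_db)))
    return gains

# e.g. band_edges = [(0, 40), (40, 200), (200, 513)] for low / mid / high bands
```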
<Aspect 3>
 In a preferred example of aspect 1 or aspect 2 (aspect 3), the specifying procedure specifies the type of musical instrument for each of a first acoustic signal and a second acoustic signal corresponding to different channels, and the adjustment procedure executes, when the frequency band corresponding to the specified type of musical instrument overlaps between the first acoustic signal and the second acoustic signal, a second process of suppressing the acoustic component of that frequency band in one of the first and second acoustic signals relative to the other. With this configuration, when the frequency band corresponding to the specified type of musical instrument overlaps between the first and second acoustic signals, the acoustic component of that band in one of the signals is suppressed relative to the other. This has the advantage that the performance sound corresponding to the other acoustic component becomes easier to hear against the performance sound corresponding to the relatively suppressed acoustic component.
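In the frequency domain, the second process can be pictured as attenuating only the bins of the shared band in the lower-priority channel. The sketch below assumes an STFT representation with rows as frequency bins; the bin range and the attenuation amount are illustrative.

```python
import numpy as np

def second_process(stft_duck, shared_bins, attenuation_db=-6.0):
    """Suppress only the frequency bins of the shared band in one channel's
    STFT; all other bins pass through unchanged."""
    out = stft_duck.copy()
    out[shared_bins, :] *= 10.0 ** (attenuation_db / 20.0)   # rows: bins, cols: frames
    return out

# e.g. shared_bins = np.arange(0, 40)   # bins covering the overlapping low band
```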
<Aspect 4>
 In a preferred example of aspect 1 or aspect 2 (aspect 4), the specifying procedure specifies the type of musical instrument for each of a first acoustic signal and a second acoustic signal corresponding to different channels, and the adjustment procedure executes, when the frequency band corresponding to the specified type of musical instrument overlaps between the first acoustic signal and the second acoustic signal, a third process of relatively suppressing, in one of the first and second acoustic signals, the acoustic components that overlap the other on the time axis. With this configuration, when the frequency band corresponding to the specified type of musical instrument overlaps between the first and second acoustic signals, the components of one signal that overlap the other on the time axis are relatively suppressed. This has the advantage that the performance sound corresponding to the other acoustic component becomes easier to hear against the performance sound corresponding to the relatively suppressed acoustic component.
<Aspect 5>
 In a preferred example of aspect 3 or aspect 4 (aspect 5), the adjustment procedure refers to priority information that designates a priority for each type of musical instrument, and suppresses, of the first and second acoustic signals, the acoustic component of the signal whose specified instrument type is given the lower priority by the priority information, relative to the other. With this configuration, of the first and second acoustic signals, the one whose specified instrument type has the lower priority designated by the priority information is suppressed relative to the other. The acoustic signals can therefore be adjusted more appropriately than in a configuration in which the acoustic component of one of the first and second acoustic signals is relatively suppressed without reference to the priority information.
<Aspect 6>
 In a preferred example of aspect 5 (aspect 6), the priority information designates a priority for each of a plurality of bands on the frequency axis. Because the priority is designated for each of the plurality of bands, the acoustic signals can be adjusted more precisely.
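One way such per-band priority information could be laid out is shown below. The band names, the instrument entries, and the numeric values are purely illustrative; they are not taken from the disclosure.

```python
# Higher number = higher priority within that band (illustrative values only)
PRIORITY_INFO = {
    "low":  {"bass": 3, "bass drum": 2, "guitar": 1},
    "mid":  {"vocal": 3, "guitar": 2, "keyboard": 1},
    "high": {"cymbal": 3, "vocal": 2, "guitar": 1},
}

def lower_priority(band, type_a, type_b):
    """Return whichever of the two specified instrument types should be
    suppressed in the given band according to the priority table."""
    table = PRIORITY_INFO[band]
    return type_a if table.get(type_a, 0) < table.get(type_b, 0) else type_b
```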
<Aspect 7>
 In a preferred example of any one of aspects 3 to 6 (aspect 7), the adjustment procedure adjusts both the first acoustic signal and the second acoustic signal.
<Aspect 8>
 In a preferred example of aspect 1 (aspect 8), the specifying procedure specifies the type of musical instrument by analyzing the acoustic signal. With this configuration, because the type of musical instrument is specified by analyzing the acoustic signal, the burden on the user is reduced compared with, for example, a configuration in which the type of musical instrument is specified in response to an instruction from the user. There is also the advantage that the type of musical instrument can be specified even when the user does not know which instrument produced the performance sound.
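As a toy illustration of specifying a type by analysis, the sketch below labels a signal by where most of its spectral energy lies, which matches the band-based notion of "type" mentioned in modification (3). The band edges are assumptions, and a practical identifier would use far richer features.

```python
import numpy as np

def classify_by_band(x, sample_rate, low_hz=250.0, high_hz=2000.0):
    """Label a signal 'low', 'mid', or 'high' according to where most of
    its spectral energy sits."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate)
    energy = {
        "low":  spectrum[freqs < low_hz].sum(),
        "mid":  spectrum[(freqs >= low_hz) & (freqs < high_hz)].sum(),
        "high": spectrum[freqs >= high_hz].sum(),
    }
    return max(energy, key=energy.get)
```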
<Aspect 9>
 In a preferred example of aspect 8 (aspect 9), the adjustment procedure controls the adjustment of the acoustic signal according to the reliability of the result of specifying the type of musical instrument. With this configuration, the adjustment of the acoustic signal is restrained according to the reliability of the result of specifying the type of musical instrument. Therefore, for example, when the reliability of the instrument type is high, priority is given to adding acoustic characteristics suited to that instrument, whereas when the reliability is low (when the specification of the type may be erroneous), the addition of those characteristics is restrained, because the acoustic characteristics prepared for that instrument are not necessarily appropriate for the acoustic signal. Appropriate adjustment according to the result of specifying the type of musical instrument can thus be realized.
<Aspect 10>
 In a preferred example of aspect 9 (aspect 10), the adjustment procedure controls the degree of adjustment so that the degree of adjustment increases as the reliability increases. In other words, the degree of adjustment is controlled so that it increases as the reliability increases and decreases as the reliability decreases.
<Aspect 11>
 An acoustic processing apparatus according to a preferred aspect (aspect 11) of the present invention includes a specifying unit that specifies the type of musical instrument corresponding to the performance sound represented by an acoustic signal, and an adjustment unit that adjusts the acoustic signal in accordance with the type of musical instrument specified by the specifying unit.
<Aspect 12>
 An acoustic processing method according to a preferred aspect (aspect 12) of the present invention specifies the type of musical instrument corresponding to the performance sound represented by an acoustic signal, and adjusts the acoustic signal in accordance with the specified type of musical instrument.
DESCRIPTION OF SYMBOLS: 10 ... musical instrument, 100 ... acoustic processing apparatus, 12 ... signal supply device, 14 ... sound emitting device, 22 ... arithmetic processing device, 24 ... storage device, 32 ... specifying unit, 34 ... adjustment unit, 40 ... frequency analysis unit, 50 ... characteristic imparting unit, 60 ... waveform generation unit, 70 ... frequency analysis unit, 80 ... suppression unit, 90 ... waveform generation unit, 341 ... first processing unit, 343 ... mixing unit, 345 ... second processing unit, 347 ... third processing unit.

Claims (12)

  1.  A computer-readable recording medium storing a program that causes a computer to execute:
      a specifying procedure of specifying a type of musical instrument corresponding to a performance sound represented by an acoustic signal; and
      an adjustment procedure of adjusting the acoustic signal in accordance with the specified type of musical instrument.
  2.  The recording medium according to claim 1, wherein the adjustment procedure executes, on the acoustic signal, a first process of bringing a frequency characteristic of the acoustic signal closer to a target characteristic corresponding to the specified type of musical instrument.
  3.  The recording medium according to claim 1 or claim 2, wherein the specifying procedure specifies a type of musical instrument for each of a first acoustic signal and a second acoustic signal corresponding to different channels, and
      the adjustment procedure executes, when a frequency band corresponding to the specified type of musical instrument overlaps between the first acoustic signal and the second acoustic signal, a second process of suppressing an acoustic component of the frequency band in one of the first acoustic signal and the second acoustic signal relative to the other.
  4.  The recording medium according to claim 1 or claim 2, wherein the specifying procedure specifies a type of musical instrument for each of a first acoustic signal and a second acoustic signal corresponding to different channels, and
      the adjustment procedure executes, when a frequency band corresponding to the specified type of musical instrument overlaps between the first acoustic signal and the second acoustic signal, a third process of relatively suppressing, in one of the first acoustic signal and the second acoustic signal, an acoustic component that overlaps the other on a time axis.
  5.  The recording medium according to claim 3 or claim 4, wherein the adjustment procedure refers to priority information designating a priority for each type of musical instrument, and suppresses, of the first acoustic signal and the second acoustic signal, the acoustic component of the signal whose specified type of musical instrument is given the lower priority by the priority information, relative to the other.
  6.  The recording medium according to claim 5, wherein the priority information designates the priority for each of a plurality of bands on a frequency axis.
  7.  The recording medium according to any one of claims 3 to 6, wherein the adjustment procedure adjusts both the first acoustic signal and the second acoustic signal.
  8.  The recording medium according to claim 1, wherein the specifying procedure specifies the type of musical instrument by analyzing the acoustic signal.
  9.  The recording medium according to claim 8, wherein the adjustment procedure controls the adjustment of the acoustic signal in accordance with a reliability of a result of specifying the type of musical instrument.
  10.  The recording medium according to claim 9, wherein the adjustment procedure controls a degree of the adjustment such that the degree of the adjustment increases as the reliability increases.
  11.  An acoustic processing apparatus comprising:
      a specifying unit that specifies a type of musical instrument corresponding to a performance sound represented by an acoustic signal; and
      an adjustment unit that adjusts the acoustic signal in accordance with the type of musical instrument specified by the specifying unit.
  12.  An acoustic processing method comprising:
      specifying a type of musical instrument corresponding to a performance sound represented by an acoustic signal; and
      adjusting the acoustic signal in accordance with the specified type of musical instrument.
PCT/JP2017/003715 2016-02-03 2017-02-02 Recording medium, acoustic processing device, and acoustic processing method WO2017135350A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016018506A JP2017139592A (en) 2016-02-03 2016-02-03 Acoustic processing method and acoustic processing apparatus
JP2016-018506 2016-02-03

Publications (1)

Publication Number Publication Date
WO2017135350A1 true WO2017135350A1 (en) 2017-08-10

Family

ID=59499609

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/003715 WO2017135350A1 (en) 2016-02-03 2017-02-02 Recording medium, acoustic processing device, and acoustic processing method

Country Status (2)

Country Link
JP (1) JP2017139592A (en)
WO (1) WO2017135350A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6900461B2 (en) * 2019-04-18 2021-07-07 和秀 上森 Multi-channel speaker device
JPWO2022070769A1 (en) * 2020-09-30 2022-04-07

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012010154A (en) * 2010-06-25 2012-01-12 Yamaha Corp Frequency characteristics control device
JP2013051589A (en) * 2011-08-31 2013-03-14 Univ Of Electro-Communications Mixing device, mixing signal processor, mixing program, and mixing method
JP2014197082A (en) * 2013-03-29 2014-10-16 株式会社エクシング Music instrument sound output device


Also Published As

Publication number Publication date
JP2017139592A (en) 2017-08-10


Legal Events

Code Description
121  Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17747501; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122  Ep: pct application non-entry in european phase (Ref document number: 17747501; Country of ref document: EP; Kind code of ref document: A1)