US20140165820A1 - Audio synthesizing systems and methods - Google Patents

Audio synthesizing systems and methods

Info

Publication number
US20140165820A1
US20140165820A1 (application US14/104,810)
Authority
US
United States
Prior art keywords
audio
frequency
audio source
output
filters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/104,810
Inventor
James Van Buskirk
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SONIVOX LP
Original Assignee
SONIVOX LP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SONIVOX LP
Priority to US14/104,810
Publication of US20140165820A1
Legal status: Abandoned

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H: ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 1/00: Details of electrophonic musical instruments
    • G10H 1/02: Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos
    • G10H 1/06: Circuits for establishing the harmonic content of tones, or other arrangements for changing the tone colour
    • G10H 1/12: Circuits for establishing the harmonic content of tones, or other arrangements for changing the tone colour by filtering complex waveforms
    • G10H 1/125: Circuits for establishing the harmonic content of tones, or other arrangements for changing the tone colour by filtering complex waveforms using a digital filter
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H: ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 1/00: Details of electrophonic musical instruments
    • G10H 1/02: Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos
    • G10H 1/04: Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos by additional modulation
    • G10H 1/053: Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos by additional modulation during execution only
    • G10H 1/057: Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos by additional modulation during execution only by envelope-forming circuits
    • G10H 1/0575: Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos by additional modulation during execution only by envelope-forming circuits using a data store from which the envelope is synthesized
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H: ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 2240/00: Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
    • G10H 2240/171: Transmission of musical instrument data, control or status information; Transmission, remote access or control of music data for electrophonic musical instruments
    • G10H 2240/281: Protocol or standard connector for transmission of analog or digital data to or from an electrophonic musical instrument
    • G10H 2240/311: MIDI transmission
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H: ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 2250/00: Aspects of algorithms or signal processing methods without intrinsic musical character, yet specifically adapted for or used in electrophonic musical processing
    • G10H 2250/055: Filters for musical processing or musical effects; Filter responses, filter architecture, filter coefficients or control parameters therefor
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H: ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 2250/00: Aspects of algorithms or signal processing methods without intrinsic musical character, yet specifically adapted for or used in electrophonic musical processing
    • G10H 2250/131: Mathematical functions for musical analysis, processing, synthesis or composition
    • G10H 2250/215: Transforms, i.e. mathematical transforms into domains appropriate for musical signal processing, coding or compression
    • G10H 2250/235: Fourier transform; Discrete Fourier Transform [DFT]; Fast Fourier Transform [FFT]

Definitions

  • Embodiments of the invention are generally related to music, audio, and other sound processing and synthesis, and are particularly related to a system and method for audio synthesis.
  • an audio processing system can be provided for the transformation of audio-band frequencies for musical and other purposes.
  • a single stream of mono, stereo, or multi-channel monophonic audio can be transformed into polyphonic music, based on a desired target musical note or set of multiple notes.
  • the system utilizes an input waveform(s) (which can be either file-based or streamed) which is then fed into an array of filters, which are themselves optionally modulated, to generate a new synthesized audio output.
  • embodiments of the present invention allow for greater computational efficiency and facilitate the synthesis of noise sound components as they combine and modulate in complex ways.
  • Additive synthesis can produce neither realistic noise components nor the complex noise interactions that are desirable for many types of musical sounds.
  • the input audio source can be completely unpitched and unmusical, even consisting of just pure white noise or a person's whisper, and after being synthesized by the FAA can be completely musical, with easily recognized pitch and timbre components. A real-time streamed audio input can also be used to generate the input source which is to be synthesized.
  • the frequency aperture synthesis approach allows for both file-based audio sources and real-time streamed input. The result is a completely new sound with unlimited scope because the input source itself has unlimited scope.
  • the system also allows multiple synthesis to be combined to create unique hybrid sounds, or accept input from a musical keyboard, as an additional input source to the FAA filters.
  • FIG. 1 illustrates a block-diagram view showing a 3-series-by-2-parallel array of frequency aperture cells (FACs), in accordance with an embodiment.
  • FIG. 2 illustrates a block-diagram view showing an n-series-by-m-parallel array of frequency aperture cells (FACs), in accordance with an embodiment.
  • FIG. 3 illustrates a block-diagram view showing an isolated frequency aperture cell (FAC) within a frequency aperture array, along with device connections, in accordance with an embodiment.
  • FIG. 4 a illustrates a block-diagram view showing an example of a frequency aperture filter in accordance with an embodiment.
  • FIG. 4 b illustrates a block-diagram view showing another example of a frequency aperture filter in accordance with another embodiment.
  • FIG. 5 illustrates a block-diagram view showing the selection and combination block of FIGS. 4 a and 4 b in accordance with an embodiment.
  • FIG. 6 illustrates a block-diagram view showing the interpolate and process block of FIGS. 4 a and 4 b in accordance with an embodiment.
  • FIG. 7 illustrates a block-diagram view showing one example of a multi-mode filter, which may be used in FIGS. 4 a and 4 b in accordance with an embodiment.
  • FIG. 8 illustrates a block-diagram view showing various modulators in accordance with an embodiment.
  • FIG. 9 illustrates a block-diagram view showing the stability compensation filter of FIG. 5 in accordance with an embodiment.
  • FIG. 10 illustrates a block-diagram view showing how an audio input source into the FAA synthesizer can be modulated before entering the FAA filters, and how the FAA filters themselves can be modulated in real-time, in accordance with an embodiment.
  • FIG. 11 a illustrates a FFT spectral waveform graph view showing a slit_height of 100% in accordance with an embodiment.
  • FIG. 11 b illustrates a FFT spectral waveform graph view showing a slit_height of 50% in accordance with an embodiment.
  • FIG. 11 c illustrates a FFT spectral waveform graph view showing a slit_height of 0% in accordance with an embodiment.
  • FIG. 11 d illustrates a FFT spectral waveform graph view showing a slit_height of −50% in accordance with an embodiment.
  • FIG. 11 e illustrates a FFT spectral waveform graph view showing a slit_height of −100% in accordance with an embodiment.
  • FIG. 12 illustrates a FFT spectral waveform graph view showing a comparison of brown noise and pink noise as audio input in accordance with an embodiment.
  • FIG. 13 illustrates a FFT spectral waveform graph view showing a series of waveforms in a 1-series-by-2-parallel array, including waveforms for audio input, waveforms for output from each FAC, and a final waveform for audio output in accordance with an embodiment.
  • FIG. 14 illustrates a FFT spectral waveform graph view showing a series of waveforms in a 2-series-by-1-parallel array, including a waveform for audio input, waveforms for output from each FAC, and a final waveform for audio output in accordance with an embodiment.
  • FIG. 15 illustrates a FFT spectral waveform graph view showing a series of waveforms in a 1-series-by-2-parallel array, including identical waveforms for audio input, waveforms for output from each FAC, each processed separately with a different FAF Type, and each showing different final waveforms for audio output in accordance with an embodiment.
  • FIGS. 16 , 17 , 18 , 19 , and 20 illustrate a series of computer screenshot views showing user controls to select parameters, such as slit_height, slit_width and other pre-sets, for use or initialization in the FACs in accordance with an embodiment.
  • Appendix A lists sets of parameters and other pre-sets to produce various example timbres in accordance with an embodiment.
  • an audio processing system can be provided for the transformation of audio-band frequencies for musical and other purposes.
  • a single stream of mono, stereo, or multi-channel monophonic audio can be transformed into polyphonic music, based on a desired target musical note or set of multiple notes.
  • the system utilizes an input waveform(s) (which can be either file-based or streamed) which is then fed into an array of filters, which are themselves optionally modulated, to generate a new synthesized audio output.
  • FIG. 1 illustrates a block-diagram view showing a 3-series-by-2-parallel array of frequency aperture cells (FACs) 110 , in accordance with an embodiment
  • FIG. 2 illustrates a block-diagram view showing an n-series-by-m-parallel array of frequency aperture cells (FACs) 110 , in accordance with an embodiment.
  • each array is organized into n rows by m columns, representing n successive series connections of audio processing, the output of which is then summed with m parallel rows of processing.
  • a channel of mono, stereo, or multi-channel source audio 130 feeds each row.
  • the source audio 130 may be live audio or pre-loaded from a file storage system, such as on the hard drive of a personal computer.
  • frequency aperture arrays 100 may be organized into n series by m parallel connections of frequency aperture cells, and optionally other digital filters such as multimode high pass (HP), band pass (BP), low pass (LP), or band restrict (BR) filters, or resonators of varying type, or combinations.
  • the multi-mode filter may be omitted.
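The n-series-by-m-parallel organization described above can be sketched in code. This is a hypothetical illustration (the function names and the trivial gain-stage "cells" are not from the patent): each parallel row receives the source audio, processes it through that row's series-connected cells in order, and the row outputs are summed into the final output.

```python
# Hypothetical sketch of an n-series-by-m-parallel frequency aperture
# array; names and structure are illustrative, not from the patent.

def process_array(rows, input_samples):
    """Feed the source audio to each of the m parallel rows, run it
    through that row's n series-connected cells, and sum row outputs."""
    output = [0.0] * len(input_samples)
    for row in rows:                        # m parallel rows
        signal = list(input_samples)        # each row gets the source audio
        for cell in row:                    # n series-connected cells
            signal = cell(signal)
        for i, s in enumerate(signal):      # sum the parallel row outputs
            output[i] += s
    return output

# Trivial gain stages standing in for frequency aperture cells:
half = lambda xs: [0.5 * x for x in xs]
rows = [[half, half], [half]]               # row 0: 2 cells; row 1: 1 cell
print(process_array(rows, [1.0, 1.0]))      # 0.25 + 0.5 = 0.75 per sample
```

Real cells would be stateful filters rather than gain stages, but the wiring (series within a row, rows summed in parallel) is the same.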
  • An advantage of various embodiments of the present invention over previous techniques is how the input audio source 130 can be completely unpitched or unmusical, for example, pure white noise or a person's whisper, and after being synthesized have the ability to be musical, with recognized pitch and timbre components.
  • the output audio 140 is unlimited in its scope, and can include realistic instrument sounds such as violins, piano, brass instruments, etc., electronic sounds, sound effects, and sounds never conceived or heard before.
  • musical synthesizers have relied upon stored files (usually pitched) which consist of audio waveforms, either recorded (sample based synthesis) or algorithmically generated (frequency or amplitude modulated synthesis) to provide the audio source which is then synthesized.
  • the systems and methods disclosed herein allow the audio input 130 to be file-based audio sources, real-time streamed input, or combinations.
  • the resulting audio output 140 can be a completely new sound with unlimited scope, in part, because the input source 130 has unlimited scope.
  • the system provides advantages over prior musical synthesis, by employing arrays 100 of frequency aperture cells 110 (FAC) which contain frequency aperture filters (FAF) (See FIGS. 4 a , 4 b and accompanying text).
  • FACs 110 have the ability to transform a spectrum of related or unrelated, harmonic or inharmonic input frequencies into an arbitrary, and potentially continuously changing set of new output frequencies.
  • There are no constraints on the type of filter designs employed in the FACs 110 , only that they have inherent slits of harmonic or inharmonic frequency bands that separate desired frequency components between their input and output.
  • Frequency slit spacing is a collection of harmonic and/or inharmonic frequency components; for example, spacing at harmonic partial frequencies would be substantially harmonic.
  • FAC 110 stages are connected in series and in parallel, and can each be modulated by specific modulation signals, such as LFOs, envelope generators, or the outputs of prior stages.
  • Frequency spacing at the output of the FAC 110 is often not evenly (i.e. harmonically) spaced, hence the term “slit width” is used instead of “pitch”. “Slit width” can affect pitch, timbre, or both, so the use of “pitch” is not appropriate in the context of an FAC 110 array.
  • each frequency aperture cell 110 in the array is comprised of its own set of modulators having separate parameters slit width, slit height and amplitude, as well as audio input, a cascade input, an audio output, transient impulse scaling, and a Frequency Aperture Filter (FAF) (See FIGS. 4 a , 4 b and accompanying text).
  • the system also includes a dispersion algorithm which can take a pitched input source and make it unpitched and noise-like (broad spectrum). This signal then feeds into the system which further synthesizes the audio signal.
  • the input source 130 is not limited to vocalizations of course. Any pitched input source (guitar, drumset, piano, etc.) can be dispersed into broad spectrum noise and re-synthesized to produce any musical instrument output, for example, using a guitar as input, dispersing the guitar into noise, and re-synthesizing into a piano. This demonstrates how the system can use non-pitched, broad-spectrum audio with no discernible pitch and timbre; and the audio output becomes pitched, musical sounds with discernible pitch and timbre.
  • the input audio signal 130 can consist of any audio source in any format and be read in via a file-based system or streamed audio.
  • a file-based input may include just the raw PCM data or the PCM data along with initial states of the FAA filter parameters and/or modulation data.
  • the system also allows multiple synthesis to be combined to create unique hybrid sounds.
  • embodiments of the invention include a method of using multiple impulse responses, mapped out across a musical keyboard, as an additional input source to the FAA filters, designed, but not limited to, synthesizing the first moments of a sound.
  • FIG. 3 illustrates a block-diagram view showing an isolated frequency aperture cell 210 (FAC) within a frequency aperture array, along with device connections, in accordance with an embodiment.
  • the system uses an array of audio frequency aperture cells 200 , which separate noise components into harmonic and inharmonic frequency multiples.
  • Control parameters 210 , such as modulation and other musical controls, and source or impulse transient audio files, come from a storage system 220 , such as a hard drive or other storage device.
  • a unique set of each of these files and parameters is loaded into runtime memory for each Frequency Aperture Cell 210 in the array.
  • the system may be built from software, hardware, or a combination of both. With the data packed and unpacked into interleaved channels (e.g. RAM Stereo Circular Buffer 230 ), four channels can be processed simultaneously.
  • Each frequency aperture cell 200 produces an instantaneous output frequency based on both the instantaneous spectrum of incoming audio and the specific frequency slits and resonance of the aperture filter.
  • Two controlling properties are the frequency slit spacing (slit width) 240 and the noise-to-frequency band ratio, or frequency (slit height) 250 .
  • constituent FAA cells 200 are not necessarily representative of the pitch of the perceived audio output.
  • FAA cells 200 may be inharmonic themselves, or in the case of two or more series cascaded harmonic cells of differing slit width 240 , they may have their aperture slits at non-harmonic relationships, producing inharmonic transformations through cascaded harmonic cells.
  • the perceived pitch is often a complex relationship of the slit widths and heights of all constituent cells and the character of their individual harmonic and inharmonic apertures.
  • the slit width 240 and height 250 are as important to the timbre of the audio as they are to the resultant pitch.
  • this system and method are provided by employing arrays of frequency aperture cells 200 .
  • FACs 200 have the ability to transform a spectrum of related or unrelated, harmonic or inharmonic input frequencies into an arbitrary, and potentially continuously changing set of new output frequencies.
  • Both FIR (Finite Impulse Response) and IIR (Infinite Impulse Response) type designs are employed within different embodiments of the FAA types.
  • Musically interesting effects are obtained as individual frequency slit width, analogous to frequency spacing, and height, analogous to amplitude, are varied between FAC 200 stages. This demonstrates how varying the parameters between the filters in the sequence is useful.
  • FAC 200 stages are connected in series and in parallel, and can each be modulated by specific modulation signals, such as LFO's, Envelope generators, or by the outputs of prior stages. This demonstrates how to modulate the output of a filters in the sequence using the output of another filter in the sequence, for example, from another row in the array.
  • FIG. 4 a illustrates a block-diagram view showing an example of a frequency aperture filter in accordance with an embodiment
  • FIG. 4 b illustrates a block-diagram view showing another example of a frequency aperture filter in accordance with another embodiment.
  • White noise is a sound that covers the entire range of audible frequencies, all of which possess similar intensity.
  • An approximation to white noise is the static that appears between FM radio stations.
  • Pink noise contains frequencies of the audible spectrum, but with a decreasing intensity of roughly three decibels per octave. This decrease approximates the audio spectrum composite of acoustic musical instruments or ensembles.
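White and pink noise of the kind described above can be generated as follows. The white-noise source is a plain uniform random signal; the pink-noise filter uses Paul Kellet's well-known three-pole approximation of a roughly −3 dB/octave rolloff. This is an illustrative sketch, not the patent's method, and the output scaling factor is an assumption.

```python
import random

def white_noise(n, seed=0):
    """White noise: roughly equal energy at all audible frequencies."""
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(n)]

def pink_noise(n, seed=0):
    """Approximate pink noise (about -3 dB per octave) by filtering
    white noise with Paul Kellet's published three-pole approximation."""
    rng = random.Random(seed)
    b0 = b1 = b2 = 0.0
    out = []
    for _ in range(n):
        w = rng.uniform(-1.0, 1.0)
        b0 = 0.99765 * b0 + w * 0.0990460
        b1 = 0.96300 * b1 + w * 0.2965164
        b2 = 0.57000 * b2 + w * 1.0526913
        out.append((b0 + b1 + b2 + w * 0.1848) * 0.2)  # rough scaling
    return out
```

Either signal could serve as the broad-spectrum input source 130 discussed in the text.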
  • At least one embodiment of the invention was inspired by the way that a prism can separate white light into its constituent spectrum of frequencies.
  • White noise can be thought of as analogous to white light, which contains roughly equal intensities of all frequencies of visible light.
  • a prism can separate white light into its constituent spectrum of frequencies, the resultant frequencies based on the material, internal feedback interference, and spectrum of incoming light.
  • Another aspect of an embodiment of the invention deals with the conversion of incoming pitched sounds into wide-band audio noise spectra, while at the same time preserving the intelligibility, sibilance, or transient aspect of the original sound, then routing the sound through the array of FACs.
  • frequency aperture filters 300 may be embodied as single or multiple digital filters of either the IIR (Infinite Impulse Response) or FIR (Finite Impulse Response) type, or any combination thereof.
  • One characteristic of the filters 300 is that both timbre and pitch are controlled by the filter parameters, and that input frequencies of adequate energies that line up with the multiple pass-bands of the filter 300 will be passed to the output of the collective filter 300 , albeit of potentially differing amplitude and phase.
  • an input impulse or other initialization energy is preloaded into a multi-channel circular buffer 310 .
  • a buffer address control block calculates successive write addresses to preload the entire circular buffer with impulse transient energy whenever, for example, a new note is depressed on the music keyboard.
  • the circular buffer arrangement allows for very efficient usage of the CPU and memory, which may reduce required amount of computer hardware resources needed to perform real-time processing of the audio synthesis.
  • the efficient usage of computer resources allows processing of the system and methods in a virtual computing environment, such as a Java virtual machine.
  • Left and Right Stereo or mono audio is demultiplexed into four channels, based on the combination type desired for the aperture spacing. This is the continuous live streaming audio that follows the impulse transient loading.
  • the read address is determined by the write address, by subtracting from it a base tuning reference value divided by the read pitch step size.
  • the base tuning reference value is calculated from the FAF 300 filter type, via lookup table or hard calculations, as different FAF 300 filter types change the overall delay through the feedback path and are therefore pitch compensated via this control.
  • the same control is deployed to the multi-mode filter in the interpolate and processing block (See FIG. 6 ), as this variable filter contributes to the overall feedback delay which contributes to the perceived pitch through the FAF 300 .
  • the read step size is calculated from the slit_width 330 input.
  • the pass bands of the filter may be determined in part by the spacing of the read and write pointers, which represent the Infinite Impulse, or feedback portion of an IIR filter design.
  • the read address in this case may have both an integer and fractional component, the latter of which is used by the interpolation and processing block 320 .
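The read/write pointer arrangement described above resembles a fractionally tuned recursive delay line, as in comb-filter or Karplus-Strong-style designs. The sketch below is an assumption-laden illustration rather than the patent's exact design: the read pointer trails the write pointer by delay = sample_rate / slit_width samples, feedback around the loop creates pass bands ("slits") at multiples of slit_width Hz, and the fractional part of the read address is handled by linear interpolation.

```python
# Illustrative recursive circular-buffer core (comb-filter style),
# not the patent's exact design; class and field names are invented.

class FrequencyApertureCellSketch:
    def __init__(self, sample_rate, slit_width_hz, slit_height=0.9, size=4096):
        self.buf = [0.0] * size
        self.write = 0
        self.delay = sample_rate / slit_width_hz    # may be fractional
        self.fb = slit_height                       # feedback amount

    def _read(self):
        # Read pointer trails the write pointer by the loop delay.
        pos = (self.write - self.delay) % len(self.buf)
        i = int(pos)
        frac = pos - i                              # fractional component
        j = (i + 1) % len(self.buf)
        return self.buf[i] * (1.0 - frac) + self.buf[j] * frac

    def process(self, x):
        y = x + self.fb * self._read()              # input + recirculation
        self.buf[self.write % len(self.buf)] = y
        self.write += 1
        return y
```

With sample_rate 44100 and slit_width 441 Hz the loop delay is exactly 100 samples, so an impulse recirculates (scaled by slit_height) every 100 samples, concentrating energy at 441 Hz and its multiples.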
  • FIG. 6 illustrates a block-diagram view showing the interpolate and process block of FIGS. 4 a and 4 b in accordance with an embodiment.
  • the Interpolate and Process block 320 is used to lookup and calculate a value “in between” two successive buffer values at the audio sample rate.
  • the interpolation may be of any type, such as well known linear, spline, or sine(x)/x windowed interpolation.
  • Using a quad interleave buffer and corresponding interleave coefficient and state variable data structures, four simultaneous calculations may be performed at once.
  • the block processing includes filtering for high-pass, low-pass, or other tone shaping.
  • the four interleave channels have differing filter types and coefficients, for musicality and enhancing stereo imaging.
  • FIG. 5 illustrates a block-diagram view showing the selection and combination block of FIGS. 4 a and 4 b in accordance with an embodiment.
  • the Selection and combination block 350 is comprised of adaptive stability compensation filtering based on the desired slit_width, slit height, and FAF Type.
  • the audio frequency components from the Interpolate and Process block 320 are combined by applying adaptive filtering as needed to attenuate the frequency bands of maximum amplitude, then mixing the harmonic-to-noise ratios together at different amplitudes.
  • FIG. 9 illustrates a block-diagram view showing the stability compensation filter of FIG. 5 in accordance with an embodiment. Shown is an example digital biquad filter, however, other types of stabilization techniques may be used. Stability compensation filtering allows for maintaining stability and harmonic purity of a recursive IIR design at relatively higher values of slit_width and slit height, which may be changing continuously in value.
  • the stability coefficients are adapted over time based on the changing values of key pitch, slit_height (harmonic/noise ratio), and slit width (frequency partial spacing). For example, higher note pitch and wider slit_width (higher partial spacing) may generally require greater attenuation of lower frequency bands in order to maintain filter stability.
  • the stability compensation filter may calculate a coefficient of the stability filter to prevent the system from exceeding unity gain.
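FIG. 9 names a digital biquad as the stability compensation filter. A generic Direct Form I biquad core looks like the following; the coefficient-adaptation rules described above (based on key pitch, slit_height, and slit_width) are not specified in the text, so only the filter structure itself is shown, with illustrative coefficients.

```python
class Biquad:
    """Generic digital biquad, Direct Form I: the filter structure
    named in FIG. 9. Coefficient adaptation is out of scope here."""
    def __init__(self, b0, b1, b2, a1, a2):
        self.b = (b0, b1, b2)
        self.a = (a1, a2)
        self.x1 = self.x2 = self.y1 = self.y2 = 0.0

    def process(self, x):
        b0, b1, b2 = self.b
        a1, a2 = self.a
        # y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
        y = b0 * x + b1 * self.x1 + b2 * self.x2 - a1 * self.y1 - a2 * self.y2
        self.x2, self.x1 = self.x1, x               # shift input history
        self.y2, self.y1 = self.y1, y               # shift output history
        return y

# Degenerate example: feedforward-only coefficients give a pure gain.
attenuator = Biquad(0.5, 0.0, 0.0, 0.0, 0.0)
```

In a stability-compensation role, the adapted coefficients would attenuate the frequency bands of maximum amplitude so that total loop gain through the recursive FAF path stays below unity.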
  • a key tracker is also known as a key scaler.
  • the stability compensation filter may use a key tracker in its calculations to determine the desired amount of noise-to-feedback ratio.
  • the stability compensation filter may use a key tracker to determine the desired amount of frequency slit spacing (e.g. variations on slit_width).
  • the audio is multiplexed in the output mux and combination block 360 .
  • the output multiplexing complements both the input de-multiplexing and the selection and combination blocks to accumulate the desired output audio signal and aperture spacing character.
  • FIG. 7 illustrates a block-diagram view showing one example of a multi-mode filter, which may be seen in FIGS. 1 and 2 in accordance with an embodiment.
  • Multi-mode filters may be optionally used in frequency aperture arrays. Examples of multi-mode filters include, high pass, low pass, band pass, band restrict, and combinations. This demonstrates how multimode filters the output of each filter in the sequence using a multi-mode-filter such as a lowpass filter, highpass filter, bandpass filter, and bandreject filter.
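One common multi-mode design that yields low-pass, high-pass, band-pass, and band-reject (notch) outputs simultaneously is the Chamberlin state-variable filter. It is shown here as a plausible stand-in for the patent's multi-mode filter block, not as its actual implementation; the tuning formula and parameter names are from the standard Chamberlin formulation.

```python
import math

class StateVariableFilter:
    """Chamberlin state-variable filter: one filter core producing
    lp/hp/bp/br outputs at once (a stand-in for the multi-mode block)."""
    def __init__(self, sample_rate, cutoff_hz, q=0.707):
        self.f = 2.0 * math.sin(math.pi * cutoff_hz / sample_rate)
        self.damp = 1.0 / q
        self.low = self.band = 0.0

    def process(self, x):
        high = x - self.low - self.damp * self.band
        self.band += self.f * high
        self.low += self.f * self.band
        notch = high + self.low          # band-reject output
        return {"lp": self.low, "hp": high, "bp": self.band, "br": notch}
```

Selecting one of the four outputs gives the high pass, low pass, band pass, or band restrict behavior the text lists for the optional multi-mode filter.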
  • FIG. 8 illustrates a block-diagram view showing various modulators in accordance with an embodiment.
  • the input audio signal itself can be subject to modulation by various methods including algorithmic means (random generators, low frequency oscillation (LFO) modulation, envelope modulation, etc.), MIDI control means (MIDI Continuous Controllers, MIDI Note messages, MIDI system messages, etc.); or physical controllers which output MIDI messages or analog voltage, as shown.
  • Other modulation methods may be possible as well.
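As a sketch of the algorithmic modulation sources listed above, an LFO and a simple linear attack/decay envelope can be combined to modulate a cell parameter such as slit_width. All function names, rates, and depths here are illustrative assumptions, not values from the patent.

```python
import math

def lfo(t, rate_hz, depth):
    """Sinusoidal low-frequency oscillator value at time t (seconds)."""
    return depth * math.sin(2.0 * math.pi * rate_hz * t)

def ad_envelope(t, attack, decay):
    """Linear attack/decay envelope in [0, 1] (times in seconds)."""
    if t < attack:
        return t / attack
    if t < attack + decay:
        return 1.0 - (t - attack) / decay
    return 0.0

def modulated_slit_width(t, base_hz=220.0):
    """Hypothetical slit_width modulated by both an LFO and an envelope."""
    vib = 1.0 + lfo(t, rate_hz=5.0, depth=0.02)          # slight vibrato
    env = 0.5 + 0.5 * ad_envelope(t, attack=0.01, decay=0.5)
    return base_hz * vib * env
```

MIDI continuous controllers or physical controllers could drive the same parameters by replacing these algorithmic sources with incoming control values.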
  • FIG. 10 illustrates a block-diagram view showing how an audio input source into the FAA synthesizer can be modulated before entering the FAA filters, and how the FAA filters themselves can be modulated in real-time, in accordance with an embodiment.
  • this shows how an audio input source into the FAA synthesizer may be modulated before entering the FAA filters. It also shows how the FAA filters themselves can be modulated in real-time.
  • the FAA synthesis can be combined with other synthesis methods, in accordance with various embodiments.
  • a console or keyboard-like application may be employed, which can be used with the system as described herein.
  • FIG. 11 a illustrates a FFT spectral waveform graph view showing a slit_height of 100% in accordance with an embodiment
  • FIG. 11 b illustrates a FFT spectral waveform graph view showing a slit_height of 50% in accordance with an embodiment
  • FIG. 11 c illustrates a FFT spectral waveform graph view showing a slit_height of 0% in accordance with an embodiment
  • FIG. 11 d illustrates a FFT spectral waveform graph view showing a slit_height of −50% in accordance with an embodiment
  • FIG. 11 e illustrates a FFT spectral waveform graph view showing a slit_height of −100% in accordance with an embodiment.
  • FIGS. 11 a , 11 b , 11 d , and 11 e show how the spectral waveforms change as a result of processing through a frequency aperture filter. Because slit_height is 0% in FIG. 11 c , it shows the unprocessed waveform (e.g. noise) that was used as input to the frequency aperture filter. Peaks can be seen approximately every 200 Hz. The first peak varies by about one octave from 100% slit_height to −100% slit_height.
  • FIG. 12 illustrates a FFT spectral waveform graph view showing a comparison of brown noise and pink noise as audio input in accordance with an embodiment.
  • the synthesized brown noise has less energy at higher frequencies (similar to the brown noise input).
  • the pink noise has consistent energy levels at higher frequencies (similar to the pink noise input).
  • FIG. 13 illustrates a FFT spectral waveform graph view showing a series of waveforms in a 1-series-by-2-parallel array, including waveforms for audio input, waveforms for output from each FAC, and a final waveform for audio output in accordance with an embodiment.
  • in this series of waveforms, brown noise and white noise are shown as input.
  • the resulting waveform is displayed.
  • the combination of the two results is shown as the parallel additive composite.
  • FIG. 14 illustrates a FFT spectral waveform graph view showing a series of waveforms in a 2-series-by-1-parallel array, including a waveform for audio input, waveforms for output from each FAC, and a final waveform for audio output in accordance with an embodiment.
  • the input source is brown noise.
  • the resultant waveform is shown.
  • the final waveform is shown. This exemplifies processing of audio signals through a series of frequency aperture filters.
  • FIG. 15 illustrates a FFT spectral waveform graph view showing a series of waveforms in a 1-series-by-2-parallel array, including identical waveforms for audio input, waveforms for output from each FAC, each processed separately with a different FAF Type, and each showing a different final waveform for audio output in accordance with an embodiment. These waveform graphs show the differences among filter types, given the same waveform input.
  • FIGS. 16, 17, 18, 19, and 20 illustrate a series of computer screenshot views showing user controls to select parameters, such as slit_height, slit_width, and other pre-sets, for use or initialization in the FACs in accordance with an embodiment. These screenshots show how the user of the computer software can set the slit_width (i.e. the desired frequency slit spacing), the slit_height (i.e. the desired noise-to-frequency band ratio), the number and type of frequency aperture cells, and other pre-sets to produce synthesized audio. The series of frequency-bands-with-noise will be generated to conform to the selection.
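As an illustration only (the field names below are invented for this sketch and are not taken from the patent's Appendix A), a preset for initializing an array of FACs might be organized as a simple data structure:

```python
# Hypothetical preset structure for initializing an array of FACs.
# Field names are illustrative assumptions, not the patent's Appendix A format.
preset = {
    "name": "example_timbre",
    "cells": [
        {"slit_width": 40.0,    # desired frequency slit spacing
         "slit_height": 0.75,   # noise-to-frequency-band ratio, in -1.0..1.0
         "faf_type": "Type1_normal"},
        {"slit_width": 27.0,
         "slit_height": -0.5,
         "faf_type": "Type4_turbo"},
    ],
}

# A user-interface layer (such as the screens of FIGS. 16-20) would read
# these values and load one entry per frequency aperture cell.
cell_count = len(preset["cells"])
```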
  • Appendix A lists sets of parameters and other pre-sets to produce various example timbres in accordance with an embodiment. These parameters and pre-sets may be available to the user of a computer or displayed on screens such as those shown in FIGS. 16 , 17 , 18 , 19 and 20 .
  • The present invention may be conveniently implemented using one or more conventional general-purpose or specialized digital computers or microprocessors programmed according to the teachings of the present disclosure. Appropriate software coding can readily be prepared by skilled programmers based on the teachings of the present disclosure, as will be apparent to those skilled in the software art.
  • The present invention also includes a computer program product which is a storage medium (media) having instructions stored thereon/in which can be used to program a computer to perform any of the processes of the present invention. The storage medium can include, but is not limited to, any type of disk including floppy disks, optical discs, DVDs, CD-ROMs, microdrives, and magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data.


Abstract

A system and method for synthesizing audio are disclosed. The method allows specification of a musical sound to be generated. It synthesizes an audio source, such as noise, using parameters that specify the desired frequency slit spacing and the desired noise-to-frequency band ratio, filtering the audio source through a sequence of filters to obtain that frequency slit spacing and noise-to-frequency band ratio. It allows modulation of the filters in the sequence. It outputs musical sound.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • The present application is related to utility application entitled “MUSIC SOFTWARE SYSTEMS”, filed Aug. 2, 2010, bearing attorney docket number 10#357, bearing Ser. No. 61/400,817, the contents of which are incorporated herein by this reference and are not admitted to be prior art with respect to the present invention by the mention in this cross-reference section.
  • FIELD OF INVENTION
  • Embodiments of the invention are generally related to music, audio, and other sound processing and synthesis, and are particularly related to a system and method for audio synthesis.
  • SUMMARY
  • Disclosed herein is a system and method for an audio synthesizer utilizing frequency aperture cells (FAC) and frequency aperture arrays (FAA). In accordance with an embodiment, an audio processing system can be provided for the transformation of audio-band frequencies for musical and other purposes. In accordance with an embodiment, a single stream of mono, stereo, or multi-channel monophonic audio can be transformed into polyphonic music, based on a desired target musical note or set of multiple notes. The system utilizes one or more input waveforms (which can be either file-based or streamed) which are then fed into an array of filters, themselves optionally modulated, to generate a new synthesized audio output.
  • Previous techniques for dealing with both pitched and non-pitched audio input are known as subtractive synthesis, whereby single or multi-pole High Pass, Low Pass, Band Pass, Resonant, and non-resonant filters are used to subtract certain unwanted portions from the incoming sound. In this technique, the subtractive filters usually modify the perceived timbre of the note; however, the filter process does not determine the perceived pitch, except in the unusual case of extreme filter resonance. These filters are usually of type IIR (Infinite Impulse Response), indicating a delay line and a feedback path. Noise routed through IIR filters has also been employed by Kevin Karplus and Alex Strong (1983), "Digital Synthesis of Plucked String and Drum Timbres," Computer Music Journal (MIT Press) 7 (2): 43-55, doi:10.2307/3680062, incorporated herein by reference. Although arguably also subtractive, in these previous techniques the resonance of the filter usually determines the pitch as well as affecting the timbre. There have been various improvements to these previous techniques, whereby certain filter designs are intended to emulate certain portions of their acoustic counterparts.
  • Compared to additive synthesis, the present invention allows for greater computational efficiency and facilitates the synthesis of noise sound components as they combine and modulate in complex ways. By synthesizing groups of harmonically and inharmonically related frequencies, rather than individually synthesizing each frequency partial, significant computational efficiencies can be gained, and more cost-effective systems can be built. Additive synthesis can neither produce realistic noise components nor support the complex noise interactions desirable for many types of musical sounds.
  • Advantages of various embodiments of the present invention over previous techniques include that the input audio source can be completely unpitched and unmusical, even consisting of just pure white noise or a person's whisper, and, after being synthesized by the FAA, can be completely musical, with easily recognized pitch and timbre components; and the use of a real-time streamed audio input to generate the input source which is to be synthesized. The frequency aperture synthesis approach allows for both file-based audio sources and real-time streamed input. The result is a completely new sound with unlimited scope because the input source itself has unlimited scope. In accordance with an embodiment, the system also allows multiple syntheses to be combined to create unique hybrid sounds, or can accept input from a musical keyboard as an additional input source to the FAA filters. Other features and advantages will be evident from the following description.
  • BRIEF DESCRIPTION OF THE DRAWINGS:
  • FIG. 1 illustrates a block-diagram view showing a 3-series-by-2-parallel array of frequency aperture cells (FACs), in accordance with an embodiment.
  • FIG. 2 illustrates a block-diagram view showing an n-series-by-m-parallel array of frequency aperture cells (FACs), in accordance with an embodiment.
  • FIG. 3 illustrates a block-diagram view showing an isolated frequency aperture cell (FAC) within a frequency aperture array, along with device connections, in accordance with an embodiment.
  • FIG. 4 a illustrates a block-diagram view showing an example of a frequency aperture filter in accordance with an embodiment.
  • FIG. 4 b illustrates a block-diagram view showing another example of a frequency aperture filter in accordance with another embodiment.
  • FIG. 5 illustrates a block-diagram view showing the selection and combination block of FIGS. 4 a and 4 b in accordance with an embodiment.
  • FIG. 6 illustrates a block-diagram view showing the interpolate and process block of FIGS. 4 a and 4 b in accordance with an embodiment.
  • FIG. 7 illustrates a block-diagram view showing one example of a multi-mode filter, which may be used in FIGS. 4 a and 4 b in accordance with an embodiment.
  • FIG. 8 illustrates a block-diagram view showing various modulators in accordance with an embodiment.
  • FIG. 9 illustrates a block-diagram view showing the stability compensation filter of FIG. 5 in accordance with an embodiment.
  • FIG. 10 illustrates a block-diagram view showing how an audio input source into the FAA synthesizer can be modulated before entering the FAA filters, and how the FAA filters themselves can be modulated in real-time, in accordance with an embodiment.
  • FIG. 11 a illustrates a FFT spectral waveform graph view showing a slit_height of 100% in accordance with an embodiment.
  • FIG. 11 b illustrates a FFT spectral waveform graph view showing a slit_height of 50% in accordance with an embodiment.
  • FIG. 11 c illustrates a FFT spectral waveform graph view showing a slit_height of 0% in accordance with an embodiment.
  • FIG. 11 d illustrates a FFT spectral waveform graph view showing a slit_height of −50% in accordance with an embodiment.
  • FIG. 11 e illustrates a FFT spectral waveform graph view showing a slit_height of −100% in accordance with an embodiment.
  • FIG. 12 illustrates a FFT spectral waveform graph view showing a comparison of brown noise and pink noise as audio input in accordance with an embodiment.
  • FIG. 13 illustrates a FFT spectral waveform graph view showing a series of waveforms in a 1-series-by-2-parallel array, including waveforms for audio input, waveforms for output from each FAC, and a final waveform for audio output in accordance with an embodiment.
  • FIG. 14 illustrates a FFT spectral waveform graph view showing a series of waveforms in a 2-series-by-1-parallel array, including a waveform for audio input, waveforms for output from each FAC, and a final waveform for audio output in accordance with an embodiment.
  • FIG. 15 illustrates a FFT spectral waveform graph view showing a series of waveforms in a 1-series-by-2-parallel array, including identical waveforms for audio input, waveforms for output from each FAC, each processed separately with a different FAF Type, and each showing different final waveforms for audio output in accordance with an embodiment.
  • FIGS. 16, 17, 18, 19, and 20 illustrate a series of computer screenshot views showing user controls to select parameters, such as slit_height, slit_width and other pre-sets, for use or initialization in the FACs in accordance with an embodiment.
  • Appendix A lists sets of parameters and other pre-sets to produce various example timbres in accordance with an embodiment.
  • DETAILED DESCRIPTION
  • Disclosed herein is a system and method for an audio synthesizer utilizing frequency aperture cells (FAC) and frequency aperture arrays (FAA). In accordance with an embodiment, an audio processing system can be provided for the transformation of audio-band frequencies for musical and other purposes. In accordance with an embodiment, a single stream of mono, stereo, or multi-channel monophonic audio can be transformed into polyphonic music, based on a desired target musical note or set of multiple notes. At its core, the system utilizes one or more input waveforms (which can be either file-based or streamed) which are then fed into an array of filters, themselves optionally modulated, to generate a new synthesized audio output.
  • FIG. 1 illustrates a block-diagram view showing a 3-series-by-2-parallel array of frequency aperture cells (FACs) 110, in accordance with an embodiment; while FIG. 2 illustrates a block-diagram view showing an n-series-by-m-parallel array of frequency aperture cells (FACs) 110, in accordance with an embodiment. These figures show how filtering the audio source through a sequence of filters creates a series of frequency-bands-with-noise, where the first filter receives the audio source and subsequent filters receive the output of the previous filter as input, with the last filter producing audio output for the system. As shown in FIGS. 1 and 2, each array is organized into n rows by m columns, representing n successive series connections of audio processing, the output of which is then summed with m parallel rows of processing. A channel of mono, stereo, or multi-channel source audio 130 feeds each row. The source audio 130 may be live audio or pre-loaded from a file storage system, such as on the hard drive of a personal computer.
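As a structural sketch only, the n-series-by-m-parallel routing described above might be expressed as follows. The one-pole lowpass is a placeholder standing in for the patent's frequency aperture cell, and the function and parameter names are assumptions for illustration:

```python
import random

def one_pole_lowpass(signal, a=0.5):
    """Placeholder for a frequency aperture cell (FAC): a one-pole
    lowpass stands in for the patent's aperture filter."""
    y, out = 0.0, []
    for x in signal:
        y = a * x + (1.0 - a) * y
        out.append(y)
    return out

def faa_process(source, n_series=3, m_parallel=2):
    """Route the source through m parallel rows, each a chain of n
    series-connected cells, then sum the rows (the FIG. 1 topology)."""
    rows = []
    for row in range(m_parallel):
        signal = source
        for stage in range(n_series):
            # Each cell could carry its own slit_width/slit_height;
            # here only the placeholder coefficient varies per row.
            signal = one_pole_lowpass(signal, a=0.3 + 0.1 * row)
        rows.append(signal)
    # Parallel additive composite: sum the rows sample by sample.
    return [sum(samples) for samples in zip(*rows)]

noise = [random.uniform(-1.0, 1.0) for _ in range(1024)]
output = faa_process(noise, n_series=3, m_parallel=2)
```

The first stage of each row receives the audio source, each subsequent stage receives the previous stage's output, and the row outputs are summed to form the system's audio output.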
  • In accordance with an embodiment, frequency aperture arrays 100 (FAAs) may be organized into n series by m parallel connections of frequency aperture cells, and optionally other digital filters such as multimode high pass (HP), band pass (BP), low pass (LP), or band restrict (BR) filters, or resonators of varying type, or combinations. In other embodiments, the multi-mode filter may be omitted.
  • An advantage of various embodiments of the present invention over previous techniques is that the input audio source 130 can be completely unpitched or unmusical, for example, pure white noise or a person's whisper, and after being synthesized can be musical, with recognized pitch and timbre components. The output audio 140 is unlimited in its scope, and can include realistic instrument sounds such as violins, piano, brass instruments, etc., electronic sounds, sound effects, and sounds never conceived or heard before.
  • Previously, musical synthesizers have relied upon stored files (usually pitched) which consist of audio waveforms, either recorded (sample based synthesis) or algorithmically generated (frequency or amplitude modulated synthesis) to provide the audio source which is then synthesized.
  • By comparison, the systems and methods disclosed herein allow the audio input 130 to be file-based audio sources, real-time streamed input, or combinations. The resulting audio output 140 can be a completely new sound with unlimited scope, in part, because the input source 130 has unlimited scope.
  • In accordance with an embodiment, the system provides advantages over prior musical synthesis by employing arrays 100 of frequency aperture cells 110 (FAC) which contain frequency aperture filters (FAF) (see FIGS. 4 a, 4 b and accompanying text). FACs 110 have the ability to transform a spectrum of related or unrelated, harmonic or inharmonic input frequencies into an arbitrary, and potentially continuously changing, set of new output frequencies. There are no constraints on the type of filter designs employed, only that they have inherent slits of harmonic or inharmonic frequency bands that separate desired frequency components between their input and output. Both FIR (Finite Impulse Response) and IIR (Infinite Impulse Response) type filter designs are employed within different embodiments of the FAC 110 types. In other embodiments, additive or subtractive filters may be employed. Musically interesting effects are obtained as individual frequency slit width, analogous to frequency spacing, and height, analogous to amplitude, are varied between FAC 110 stages. Frequency slit spacing is a collection of harmonic and/or inharmonic frequency components; for example, a harmonic partial frequency would be an example of a substantially harmonic component. FAC 110 stages are connected in series and in parallel, and can each be modulated by specific modulation signals, such as LFOs, envelope generators, or the outputs of prior stages (see FIGS. 4 a, 4 b, 8 and accompanying text). This demonstrates how to modulate the output of a frequency aperture filter in the sequence using a modulator such as a low frequency oscillator modulator, random generator modulator, envelope modulator, or MIDI control modulator.
  • Frequency spacing from the output of the FAC 110 is often not even (i.e. harmonic), hence the term "slit width" instead of "pitch" is used. "Slit width" can affect both the pitch and the timbre, or just one or the other, so the use of "pitch" is not appropriate in the context of an FAC 110 array.
  • In some embodiments, each frequency aperture cell 110 in the array is comprised of its own set of modulators having separate parameters slit width, slit height and amplitude, as well as audio input, a cascade input, an audio output, transient impulse scaling, and a Frequency Aperture Filter (FAF) (See FIGS. 4 a, 4 b and accompanying text).
  • Another advantage of embodiments of the present invention over previous techniques is the use of a real-time streamed audio input to generate the input source 130 which is to be synthesized. In order to facilitate pitched streamed audio input sources 130, in accordance with an embodiment, the system also includes a dispersion algorithm which can take a pitched input source and make it unpitched and noise-like (broad spectrum). This signal then feeds into the system, which further synthesizes the audio signal. This allows for a unique attribute in which a person can sing, whisper, talk, or vocalize into the dispersion filter, which, when fed into the system and triggered by a keyboard or other source guiding the pitch components of the system synthesizer, can yield an output that sounds like anything, including a real instrument such as a piano, guitar, drumset, etc. The input source 130 is not limited to vocalizations, of course. Any pitched input source (guitar, drumset, piano, etc.) can be dispersed into broad-spectrum noise and re-synthesized to produce any musical instrument output, for example, using a guitar as input, dispersing the guitar into noise, and re-synthesizing into a piano. This demonstrates how the system can use non-pitched, broad-spectrum audio input with no discernible pitch or timbre, while the audio output becomes pitched, musical sound with discernible pitch and timbre.
  • The input audio signal 130 can consist of any audio source in any format and be read in via a file-based system or streamed audio. A file-based input may include just the raw PCM data or the PCM data along with initial states of the FAA filter parameters and/or modulation data.
  • In accordance with an embodiment, the system also allows multiple syntheses to be combined to create unique hybrid sounds. Finally, embodiments of the invention include a method of using multiple impulse responses, mapped out across a musical keyboard, as an additional input source to the FAA filters, designed for, but not limited to, synthesizing the first moments of a sound.
  • FIG. 3 illustrates a block-diagram view showing an isolated frequency aperture cell 200 (FAC) within a frequency aperture array, along with device connections, in accordance with an embodiment. In accordance with an embodiment, the system uses an array of audio frequency aperture cells 200, which separate noise components into harmonic and inharmonic frequency multiples. Control parameters 210, such as modulation and other musical controls, and source or impulse transient audio files come from a storage system 220, such as a hard drive or other storage device. A unique set of each of these files and parameters is loaded into runtime memory for each frequency aperture cell 200 in the array. The system may be built of software, hardware, or a combination of both. With the data packed and unpacked into interleaved channels of data (e.g. RAM Stereo Circular Buffer 230), four channels can be processed simultaneously.
  • Each frequency aperture cell 200, with varying feedback properties, produces instantaneous output frequency based on both the instantaneous spectrum of incoming audio, as well as the specific frequency slits and resonance of the aperture filter. Two controlling properties are the frequency slit spacing (slit width) 240 and the noise-to-frequency band ratio, or frequency (slit height) 250.
  • An important distinction of constituent FAA cells 200 is that their slit widths 240 are not necessarily representative of the pitch of the perceived audio output. FAA cells 200 may be inharmonic themselves, or in the case of two or more series cascaded harmonic cells of differing slit width 240, they may have their aperture slits at non-harmonic relationships, producing inharmonic transformations through cascaded harmonic cells. The perceived pitch is often a complex relationship of the slit widths and heights of all constituent cells and the character of their individual harmonic and inharmonic apertures. The slit width 240 and height 250 are as important to the timbre of the audio as they are to the resultant pitch.
  • In accordance with an embodiment, this system and method are provided by employing arrays of frequency aperture cells 200. FACs 200 have the ability to transform a spectrum of related or unrelated, harmonic or inharmonic input frequencies into an arbitrary, and potentially continuously changing set of new output frequencies. There are no constraints on the type of filter designs employed, only that they have inherent slits of harmonic or in-harmonic frequency bands that separate desired frequency components between their input and output. Both FIR (Finite Impulse Response) and IIR (Infinite Impulse Response) type designs are employed within different embodiments of the FAA types. Musically interesting effects are obtained as individual frequency slit width, analogous to frequency spacing, and height, analogous to amplitude, are varied between FAC 200 stages. This demonstrates how varying the parameters between the filters in the sequence is useful.
  • In accordance with an embodiment, FAC 200 stages are connected in series and in parallel, and can each be modulated by specific modulation signals, such as LFOs, envelope generators, or the outputs of prior stages. This demonstrates how to modulate the output of a filter in the sequence using the output of another filter in the sequence, for example, from another row in the array.
  • This further demonstrates how to filter the audio source through the first filter into a series of frequency-bands-with-noise, suppress high-energy bands to increase feedback in the series of frequency-bands-with-noise, re-filter the series of frequency-bands-with-noise through a second filter, and output the series of frequency-bands-with-noise as audio output to produce musical sound.
  • FIG. 4 a illustrates a block-diagram view showing an example of a frequency aperture filter in accordance with an embodiment; while FIG. 4 b illustrates a block-diagram view showing another example of a frequency aperture filter in accordance with another embodiment. These figures show how selected parameters specifying the desired frequency slit spacing and the desired noise-to-frequency band ratio can be used to filter the audio source, conforming the series of frequency-bands-with-noise to the parameters to produce the desired frequency slit spacing and the desired noise-to-frequency band ratio.
  • Before discussing frequency aperture filters, some analogous inspiration may help understanding. White noise is a sound that covers the entire range of audible frequencies, all of which possess similar intensity. An approximation to white noise is the static that appears between FM radio stations. Pink noise contains frequencies of the audible spectrum, but with a decreasing intensity of roughly three decibels per octave. This decrease approximates the audio spectrum composite of acoustic musical instruments or ensembles.
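To make the noise-color distinction concrete, the following sketch generates white noise and a crude pink-like approximation, then measures adjacent-sample correlation. The one-pole smoother is only a qualitative stand-in (true pink noise falls roughly 3 dB per octave and requires a more careful filter); all names here are illustrative:

```python
import random

random.seed(7)

N = 4096
# White noise: roughly equal energy across all audible frequencies.
white = [random.uniform(-1.0, 1.0) for _ in range(N)]

# Crude pink-like noise: one-pole smoothing rolls off high frequencies.
# (Only a qualitative stand-in for a true 3 dB/octave spectrum.)
pinkish, y = [], 0.0
for x in white:
    y = 0.9 * y + 0.1 * x
    pinkish.append(y)

def lag1_autocorr(sig):
    """Adjacent-sample correlation: near 0 for white noise (no structure),
    near 1 for heavily lowpassed, slowly varying noise."""
    mean = sum(sig) / len(sig)
    num = sum((sig[i] - mean) * (sig[i + 1] - mean) for i in range(len(sig) - 1))
    den = sum((s - mean) ** 2 for s in sig)
    return num / den
```

The correlation measurement is a cheap proxy for the spectral tilt visible in an FFT plot: the more low-frequency-weighted the noise, the closer adjacent samples track each other.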
  • At least one embodiment of the invention was inspired by the way that a prism can separate white light into its constituent spectrum of frequencies. White noise can be thought of as analogous to white light, which contains roughly equal intensities of all frequencies of visible light. A prism separates white light into its constituent spectrum of frequencies, the resultant frequencies based on the material, internal feedback interference, and spectrum of incoming light.
  • Among other factors, frequency aperture cells (FACs) (see FIG. 3 and accompanying text) act analogously on audio, based on their type, feedback properties, and the spectrum of incoming audio. Another aspect of an embodiment of the invention deals with the conversion of incoming pitched sounds into wide-band audio noise spectra, while at the same time preserving the intelligibility, sibilance, or transient aspect of the original sound, then routing the sound through the array of FACs.
  • In accordance with an embodiment, frequency aperture filters 300 (FAF) may be embodied as single or multiple digital filters of either the IIR (Infinite Impulse Response) or FIR (Finite Impulse Response) type, or any combination thereof. One characteristic of the filters 300 is that both timbre and pitch are controlled by the filter parameters, and that input frequencies of adequate energy that line up with the multiple pass-bands of the filter 300 will be passed to the output of the collective filter 300, albeit with potentially differing amplitude and phase.
  • In one example embodiment, an input impulse or other initialization energy is preloaded into a multi-channel circular buffer 310. A buffer address control block calculates successive write addresses to preload the entire circular buffer with impulse transient energy whenever, for example, a new note is depressed on the music keyboard.
  • The circular buffer arrangement allows for very efficient usage of the CPU and memory, which may reduce the amount of computer hardware resources needed to perform real-time processing of the audio synthesis. In other embodiments, the efficient usage of computer resources allows processing of the system and methods in a virtual computing environment, such as a Java virtual machine.
  • In accordance with an embodiment, Left and Right Stereo or mono audio is demultiplexed into four channels, based on the combination type desired for the aperture spacing. This is the continuous live streaming audio that follows the impulse transient loading.
  • After that, continuous, successive write addresses are generated by the buffer address control for incoming combined input samples, as well as for successive read addresses for outgoing samples into the Interpolation and Processing block 320 (See also FIG. 6).
  • In one example buffer address calculation, the read address is determined from the write address by subtracting from it a base tuning reference value divided by the read pitch step size. The base tuning reference value is calculated from the FAF 300 filter type, via lookup table or hard calculations, as different FAF 300 filter types change the overall delay through the feedback path and are therefore pitch-compensated via this control. The same control is deployed to the multi-mode filter in the interpolate and processing block (see FIG. 6), as this variable filter contributes to the overall feedback delay, which contributes to the perceived pitch through the FAF 300. The read step size is calculated from the slit_width 330 input. The pass bands of the filter may be determined in part by the spacing of the read and write pointers, which represent the Infinite Impulse, or feedback, portion of an IIR filter design. The read address in this case may have both an integer and a fractional component, the latter of which is used by the interpolation and processing block 320.
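The read/write-pointer feedback described above can be illustrated with a minimal comb-style resonator; this is a simplified sketch under assumed parameters, not the patent's FAF design. With a delay of D samples between the pointers, pass bands (frequency "slits") appear at integer multiples of sample_rate/D:

```python
import math

def comb_resonator(signal, delay, feedback=0.95):
    """Feedback delay line: the read pointer trails the write pointer by
    `delay` samples, creating pass bands at multiples of fs/delay,
    loosely analogous to the slit_width spacing in the text."""
    buf = [0.0] * delay
    out = []
    for n, x in enumerate(signal):
        idx = n % delay              # buf[idx] currently holds y[n - delay]
        y = x + feedback * buf[idx]  # IIR feedback path
        buf[idx] = y                 # write the new sample into the ring
        out.append(y)
    return out

def rms(sig):
    return math.sqrt(sum(s * s for s in sig) / len(sig))

FS, DELAY = 8000, 40  # pass bands at multiples of FS/DELAY = 200 Hz
on_res = [math.sin(2 * math.pi * 200 * n / FS) for n in range(4000)]
off_res = [math.sin(2 * math.pi * 300 * n / FS) for n in range(4000)]
```

A sine aligned with a pass band recirculates in phase and builds up through the feedback path, while a sine midway between pass bands recombines out of phase and is attenuated.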
  • Looking ahead, FIG. 6 illustrates a block-diagram view showing the interpolate and process block of FIGS. 4 a and 4 b in accordance with an embodiment. In accordance with an embodiment, the Interpolate and Process block 320 is used to look up and calculate a value "in between" two successive buffer values at the audio sample rate. The interpolation may be of any type, such as the well-known linear, spline, or sin(x)/x windowed interpolation. By virtue of the quad interleave buffer, and corresponding interleaved coefficient and state-variable data structures, four simultaneous calculations may be performed at once. In addition to interpolation, the block processing includes filtering for high-pass, low-pass, or other tone shaping. The four interleave channels have differing filter types and coefficients, for musicality and enhanced stereo imaging. In addition, there may be multiple types of interpolation needed at once: one to resolve the audio sample rate range via up-sampling and down-sampling, and one to resolve the desired slit_width.
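A minimal sketch of the linear case only: the integer part of the fractional read address selects the sample pair and the fractional part weights them, with modulo wraparound standing in for the circular buffer. The function name is an illustrative assumption:

```python
def interp_read(buf, read_addr):
    """Linear interpolation between two successive circular-buffer samples.
    read_addr has an integer part (index) and a fractional part (weight)."""
    i = int(read_addr)
    frac = read_addr - i
    a = buf[i % len(buf)]          # sample at the integer address
    b = buf[(i + 1) % len(buf)]    # next sample, wrapping at the buffer end
    return (1.0 - frac) * a + frac * b
```

Spline or windowed sin(x)/x interpolation would use more neighboring samples for a smoother reconstruction, at higher cost per read.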
  • Turning back, FIG. 5 illustrates a block-diagram view showing the selection and combination block of FIGS. 4 a and 4 b in accordance with an embodiment. The Selection and Combination block 350 comprises adaptive stability compensation filtering based on the desired slit_width, slit_height, and FAF Type. The audio frequency components from the Interpolate and Process block 320 are combined by applying adaptive filtering as needed to attenuate the frequency bands of maximum amplitude, then mixing the harmonic-to-noise ratios together at different amplitudes.
  • Turning ahead, FIG. 9 illustrates a block-diagram view showing the stability compensation filter of FIG. 5 in accordance with an embodiment. Shown is an example digital biquad filter; however, other types of stabilization techniques may be used. Stability compensation filtering allows for maintaining the stability and harmonic purity of a recursive IIR design at relatively higher values of slit_width and slit_height, which may be changing continuously in value. The stability coefficients are adapted over time based on the changing values of key pitch, slit_height (harmonic/noise ratio), and slit_width (frequency partial spacing). For example, a higher note pitch and wider slit_width (higher partial spacing) may generally require greater attenuation of lower frequency bands in order to maintain filter stability.
  • The stability compensation filter may calculate a coefficient of the stability filter to prevent the system from passing unity gain. A key tracker (also known as a key scaler) scales the incoming musical note key according to linear or nonlinear functions which may be of simple tabular form. The stability compensation filter may use a key tracker in its calculations to determine the desired amount of noise-to-feedback ratio. The stability compensation filter may use a key tracker to determine the desired amount of frequency slit spacing (e.g. variations on slit_width).
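A hedged sketch of the biquad topology named above. The coefficients used in the example (a simple first difference) are illustrative only, chosen to show attenuation of low-frequency energy, the bands most likely to drive a high-feedback IIR toward instability; they are not the patent's adaptive stability coefficients:

```python
def biquad(signal, b0, b1, b2, a1, a2):
    """Direct Form I biquad:
    y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]"""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in signal:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, x  # shift the input history
        y2, y1 = y1, y  # shift the output history
        out.append(y)
    return out

# Illustrative setting: b = [0.5, -0.5, 0], a = [0, 0] is a first
# difference expressed as a biquad; its DC gain is zero, so sustained
# low-frequency energy is removed.
dc_response = biquad([1.0] * 256, 0.5, -0.5, 0.0, 0.0, 0.0)
```

An adaptive implementation would recompute b and a per block from key pitch, slit_height, and slit_width, as the surrounding text describes.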
  • Returning to FIGS. 4 a and 4 b, after interpolation and processing 320, the audio is multiplexed in the output mux and combination block 360. The output multiplexing complements both the input de-multiplexing and the selection and combination blocks to accumulate the desired output audio signal and aperture spacing character.
  • FIG. 7 illustrates a block-diagram view showing one example of a multi-mode filter, which may be seen in FIGS. 1 and 2 in accordance with an embodiment. Multi-mode filters may be optionally used in frequency aperture arrays. Examples of multi-mode filters include high pass, low pass, band pass, band restrict, and combinations. This demonstrates how to filter the output of each filter in the sequence using a multi-mode filter such as a lowpass filter, highpass filter, bandpass filter, or band-reject filter.
  • FIG. 8 illustrates a block-diagram view showing various modulators in accordance with an embodiment. The input audio signal itself can be subject to modulation by various methods including algorithmic means (random generators, low frequency oscillation (LFO) modulation, envelope modulation, etc.), MIDI control means (MIDI Continuous Controllers, MIDI Note messages, MIDI system messages, etc.); or physical controllers which output MIDI messages or analog voltage, as shown. Other modulation methods may be possible as well.
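As a sketch of the algorithmic case only, a sinusoidal LFO applied to a hypothetical slit_width parameter might be computed per sample as follows; the function and parameter names are assumptions for illustration:

```python
import math

def lfo_modulate(base, depth, rate_hz, sample_rate, n_samples):
    """Sinusoidal LFO applied to a filter parameter (e.g. slit_width):
    returns per-sample values base + depth * sin(2*pi*rate_hz*t)."""
    return [base + depth * math.sin(2.0 * math.pi * rate_hz * n / sample_rate)
            for n in range(n_samples)]

# e.g. sweep a slit_width of 40 samples by +/-5 at 0.5 Hz for 2 seconds:
widths = lfo_modulate(base=40.0, depth=5.0, rate_hz=0.5,
                      sample_rate=8000, n_samples=16000)
```

Envelope, random-generator, or MIDI-driven modulation would replace the sine term with an envelope segment, a smoothed random value, or a scaled controller value, respectively.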
  • FIG. 10 illustrates a block-diagram view showing how an audio input source into the FAA synthesizer can be modulated before entering the FAA filters, and how the FAA filters themselves can be modulated in real-time, in accordance with an embodiment. In some embodiments, the FAA synthesis can be combined with other synthesis methods. In some embodiments, a console or keyboard-like application may be employed, which can be used with the system as described herein.
  • FIG. 11 a illustrates a FFT spectral waveform graph view showing a slit_height of 100% in accordance with an embodiment; FIG. 11 b illustrates a FFT spectral waveform graph view showing a slit_height of 50% in accordance with an embodiment; FIG. 11 c illustrates a FFT spectral waveform graph view showing a slit_height of 0% in accordance with an embodiment; FIG. 11 d illustrates a FFT spectral waveform graph view showing a slit_height of −50% in accordance with an embodiment; and FIG. 11 e illustrates a FFT spectral waveform graph view showing a slit_height of −100% in accordance with an embodiment. Taken together, FIGS. 11 a, 11 b, 11 d, and 11 e show how the spectral waveforms change as a result of processing through a frequency aperture filter. Because slit_height is 0% in FIG. 11 c, it shows the unprocessed waveform (e.g. noise) that was used as input to the frequency aperture filter. Peaks can be seen approximately every 200 Hz. The first peak varies by about one octave from 100% slit_height to −100% slit_height.
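One plausible reading of these graphs, offered purely as an illustrative assumption and not as the filter disclosed in this application, is that slit_height acts as a signed wet/dry mix around a comb-like resonator whose peaks fall at multiples of the slit spacing; flipping the feedback sign moves the first peak to half the spacing, i.e. down roughly an octave:

```python
def aperture_comb(samples, slit_period, slit_height):
    """Toy frequency-slit filter (hypothetical, for illustration only).

    A recirculating comb whose spectral peaks sit every
    sample_rate / slit_period Hz, crossfaded with the dry input by
    slit_height in [-1.0, 1.0]. 0.0 returns the input unchanged; a
    negative value flips the feedback polarity, which places peaks on odd
    half-multiples of the slit spacing (first peak an octave lower).
    """
    wet_gain = abs(slit_height)
    feedback = 0.9 * (1.0 if slit_height >= 0 else -1.0)  # pole sign & radius
    delay = [0.0] * slit_period                           # circular delay line
    idx = 0
    out = []
    for x in samples:
        recirculated = x + feedback * delay[idx]
        delay[idx] = recirculated
        idx = (idx + 1) % slit_period
        out.append((1.0 - wet_gain) * x + wet_gain * recirculated)
    return out
```

With slit_height = 0 the dry path dominates completely, matching FIG. 11 c's unprocessed spectrum.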
  • FIG. 12 illustrates a FFT spectral waveform graph view showing a comparison of brown noise and pink noise as audio input in accordance with an embodiment. In this graph, it can be seen that the synthesized brown noise has less energy at higher frequencies (similar to the brown noise input). By comparison, the pink noise has consistent energy levels at higher frequencies (similar to the pink noise input).
  • FIG. 13 illustrates a FFT spectral waveform graph view showing a series of waveforms in a 1-series-by-2-parallel array, including waveforms for audio input, waveforms for output from each FAC, and a final waveform for audio output in accordance with an embodiment. In this series of waveforms, brown noise and white noise are shown as input. After processing through a frequency aperture cell, the resulting waveform is displayed. Finally, the combination of the two results is shown as the parallel additive composite.
  • FIG. 14 illustrates a FFT spectral waveform graph view showing a series of waveforms in a 2-series-by-1-parallel array, including a waveform for audio input, waveforms for output from each FAC, and a final waveform for audio output in accordance with an embodiment. In this series of waveforms, the input source is brown noise. After processing through the first FAF (of Type4_turbo), the resultant waveform is shown. After processing through a second FAF (of Type1_normal), the final waveform is shown. This exemplifies processing of audio signals through a series of frequency aperture filters.
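The two array topologies of FIGS. 13 and 14 can be summarized abstractly: cells cascaded in series (each feeding the next), or run in parallel over the same input and summed into an additive composite. The sketch below shows the routing only; the names are hypothetical and any callable stands in for a frequency aperture cell:

```python
def run_array(samples, cells, topology="series"):
    """Route audio through frequency aperture cells in series or parallel.

    series:   output of one cell feeds the next (as in FIG. 14).
    parallel: every cell processes the same input and the results are
              summed into an additive composite (as in FIG. 13).
    """
    if topology == "series":
        for cell in cells:
            samples = cell(samples)
        return samples
    # parallel: additive composite of every cell's output
    outs = [cell(samples) for cell in cells]
    return [sum(vals) for vals in zip(*outs)]
```

An M-series-by-N-parallel array then nests the two calls: N parallel branches, each a series chain of M cells.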
  • FIG. 15 illustrates a FFT spectral waveform graph view showing a series of waveforms in a 1-series-by-2-parallel array, including identical waveforms for audio input, waveforms for output from each FAC, each processed separately with a different FAF Type, and each showing different final waveforms for audio output in accordance with an embodiment. These waveform graphs show the differences in filter types, given the same waveform input.
  • FIGS. 16, 17, 18, 19, and 20 illustrate a series of computer screenshot views showing user controls to select parameters, such as slit_height, slit_width and other pre-sets, for use or initialization in the FACs in accordance with an embodiment. These screenshots show how the user of computer software can set the slit_width, slit_height, number and type of frequency aperture cells, and other pre-sets to produce synthesized audio. The slit_width (i.e. the desired frequency slit spacing) and the slit_height (i.e. the desired noise-to-frequency-band ratio) may be selected to produce a specific timbre or other musical quality. Then, during filtering, the series of frequency-bands-with-noise will be generated to conform to the selection.
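A preset of the kind shown in FIGS. 16-20 could be represented, purely as an illustrative sketch with hypothetical field names rather than the actual software interface, as a small validated record of slit_width, slit_height, and cell types:

```python
from dataclasses import dataclass, field

@dataclass
class AperturePreset:
    """One user-selectable preset (hypothetical field names)."""
    name: str
    slit_width_hz: float      # desired frequency slit spacing
    slit_height: float        # noise-to-frequency-band ratio, -1.0 .. 1.0
    cell_types: list = field(default_factory=list)  # e.g. ["Type1_normal"]

    def validate(self):
        """Reject parameter values outside the ranges the UI would allow."""
        if not -1.0 <= self.slit_height <= 1.0:
            raise ValueError("slit_height must be in [-1.0, 1.0]")
        if self.slit_width_hz <= 0:
            raise ValueError("slit_width_hz must be positive")
        return self
```

Saving and recalling such records would give the pre-set behavior the screenshots describe, with validation standing in for the bounds the on-screen controls enforce.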
  • Appendix A lists sets of parameters and other pre-sets to produce various example timbres in accordance with an embodiment. These parameters and pre-sets may be available to the user of a computer or displayed on screens such as those shown in FIGS. 16, 17, 18, 19 and 20.
  • The above-described systems and methods can be used in accordance with various embodiments to provide a number of different applications, including but not limited to:
      • A system and method that can synthesize pitched, musical sounds from non-pitched, broad-spectrum audio.
      • A system and method of combining and arranging frequency aperture cells for extreme efficiency of processing and memory.
      • A system and method of transforming audio with discernible pitch and timbre into broad-spectrum noise with no discernible pitch and timbre.
      • A system and method for combining the above synthesis with other synthesis methods to create hybrid synthesizers.
      • A system and method for modulating individual components of the system using MIDI, algorithmic or physical controllers.
      • A system and method for using real-time, streamed audio as an input audio source for the above synthesizer.
      • A system and method for vocalizing into the above synthesizer while playing MIDI and having the vocalization re-pitched and harmonized.
      • A system and method for inputting any musical audio source, whether file-based or streamed, and re-pitching and re-harmonizing it.
      • A system and method for vocalizing into the above synthesizer while playing MIDI and having the synthesizer play a recognizable musical instrument.
  • The present invention may be conveniently implemented using one or more conventional general purpose or specialized digital computers or microprocessors programmed according to the teachings of the present disclosure. Appropriate software coding can readily be prepared by skilled programmers based on the teachings of the present disclosure, as will be apparent to those skilled in the software art.
  • In some embodiments, the present invention includes a computer program product which is a storage medium (media) having instructions stored thereon/in which can be used to program a computer to perform any of the processes of the present invention. The storage medium can include, but is not limited to, any type of disk including floppy disks, optical discs, DVD, CD-ROMs, microdrive, and magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data.
  • There are a total of 17 source code files incorporated by reference to an earlier application. Further, many other advantages of applicant's invention will be apparent to those skilled in the art from the computer software source code and included screen shots.
  • A portion of the disclosure of this patent document contains material which is subject to copyright protection; i.e. Copyright 2010 James Van Buskirk (17 U.S.C. 401). The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
  • The foregoing description of the present invention has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, thereby enabling others skilled in the art to understand the invention for various embodiments and with various modifications that are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the following claims and their equivalents.

Claims (21)

1-25. (canceled)
26. A method for synthesizing audio to produce a musical sound, comprising the steps of:
receiving an audio source;
setting parameters of at least two filters to specify a desired frequency slit spacing and the desired noise-to-frequency-band ratio, wherein the frequency slit spacing for each filter corresponds to one of harmonic frequency bands, inharmonic frequency bands, or a combination of harmonic and inharmonic frequency bands;
filtering a signal based on the audio source through a sequence of the at least two filters to filter the audio source into a series of harmonic, inharmonic, or a combination of harmonic and inharmonic frequency-bands-with-noise; and
outputting an audio output to produce musical sound.
27. The method of claim 26, wherein the audio source comprises pitched sounds, the method further comprising:
converting the pitched sounds of the audio source into wide-band audio noise spectra, wherein intelligibility, sibilance, or transient aspects of the incoming audio is preserved during the conversion of the pitched sounds; and
performing the filtering on the converted audio source to produce the musical sound.
28. The method of claim 26, wherein the audio source comprises sounds of a first instrument, the method further comprising:
converting the pitched sounds of the audio source into wide-band audio noise spectra; and
performing the filtering on the converted audio source so that the audio output sounds like a second instrument.
29. The method of claim 26, wherein:
the filtering of the signal based on the audio source changes a first pitch in the audio source to a different second pitch in the output audio.
30. The method of claim 26, wherein the audio source comprises non-pitched sounds, the method further comprising:
modulating an output of one of the at least two filters, wherein modulation is triggered by an external source that provides pitch information; and
outputting the audio output to produce the musical sound based on the modulation.
31. The method of claim 30, wherein:
the modulation is triggered by a keyboard.
32. The method of claim 30, wherein:
the audio source comprises talking, whispering, or vocalization; and
the output audio resembles a musical instrument, based on the modulation.
33. The method of claim 26, wherein the audio source comprises pitched sounds, the method further comprising:
converting the pitched sounds of the audio source into wide-band audio noise spectra;
performing the filtering on the converted audio source;
modulating an output of one of the at least two filters, wherein modulation is triggered by an external source that provides pitch information; and
outputting the audio output to produce the musical sound based on the modulation.
34. The method of claim 26, wherein:
the frequency slit spacing for one of the at least two filters corresponds to harmonic frequency bands.
35. The method of claim 26, wherein:
the frequency slit spacing for one of the at least two filters corresponds to inharmonic frequency bands.
36. A system for synthesizing audio to produce a musical sound, comprising:
an input interface for receiving an audio source;
two or more frequency aperture cells with configurable parameters corresponding to a desired frequency slit spacing and a desired noise-to-frequency-band ratio and configured to filter a signal based on the audio source into a series of harmonic, inharmonic, or a combination of harmonic and inharmonic frequency-bands-with-noise; and
an output interface for outputting an audio output to produce musical sound.
37. The system of claim 36, wherein the audio source comprises pitched sounds, the system further comprising:
a dispersion module for converting the pitched sounds of the audio source into wide-band audio noise spectra, wherein intelligibility, sibilance, or transient aspects of the incoming audio is preserved during the conversion of the pitched sounds; and
wherein the frequency aperture cells filter the converted audio source to produce the musical sound.
38. The system of claim 36, wherein the audio source comprises sounds of a first instrument, the system further comprising:
a dispersion module for converting the pitched sounds of the audio source into wide-band audio noise spectra; and
wherein the frequency aperture cells filter the converted audio source so that the audio output sounds like a second instrument.
39. The system of claim 36, wherein:
the filtering of the signal based on the audio source changes a first pitch in the audio source to a different second pitch in the output audio.
40. The system of claim 36, wherein the audio source comprises non-pitched sounds, the system further comprising:
an external source that provides pitch information and is configured to provide a trigger for modulating an output of one of the at least two filters; and
wherein outputting the audio output to produce the musical sound is based on the modulation.
41. The system of claim 40, wherein:
the external source that triggers modulation is a keyboard.
42. The system of claim 40, wherein:
the audio source comprises talking, whispering, or vocalization; and
the output audio resembles a musical instrument, based on the modulation of the output of one of the at least two filters.
43. The system of claim 36, wherein the audio source comprises pitched sounds, the system further comprising:
a dispersion module for converting the pitched sounds of the audio source into wide-band audio noise spectra, and wherein the frequency aperture cells filter the converted audio source;
an external source that provides pitch information and is configured to provide a trigger for modulating an output of one of the at least two filters; and
wherein outputting the audio output to produce the musical sound is based on the modulation.
44. The system of claim 36, wherein:
the frequency slit spacing for one of the at least two filters corresponds to harmonic frequency bands.
45. The system of claim 36, wherein:
the frequency slit spacing for one of the at least two filters corresponds to inharmonic frequency bands.
US14/104,810 2011-08-02 2013-12-12 Audio synthesizing systems and methods Abandoned US20140165820A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/104,810 US20140165820A1 (en) 2011-08-02 2013-12-12 Audio synthesizing systems and methods

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/196,690 US8653354B1 (en) 2011-08-02 2011-08-02 Audio synthesizing systems and methods
US14/104,810 US20140165820A1 (en) 2011-08-02 2013-12-12 Audio synthesizing systems and methods

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/196,690 Continuation US8653354B1 (en) 2011-08-02 2011-08-02 Audio synthesizing systems and methods

Publications (1)

Publication Number Publication Date
US20140165820A1 true US20140165820A1 (en) 2014-06-19

Family

ID=50072130

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/196,690 Active 2031-08-14 US8653354B1 (en) 2011-08-02 2011-08-02 Audio synthesizing systems and methods
US14/104,810 Abandoned US20140165820A1 (en) 2011-08-02 2013-12-12 Audio synthesizing systems and methods


Country Status (1)

Country Link
US (2) US8653354B1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180005617A1 (en) * 2015-03-20 2018-01-04 Yamaha Corporation Sound control device, sound control method, and sound control program

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8653354B1 (en) * 2011-08-02 2014-02-18 Sonivoz, L.P. Audio synthesizing systems and methods
US9736227B2 (en) 2013-10-29 2017-08-15 Lantronix, Inc. Data capture on a serial device
KR102517867B1 (en) 2015-08-25 2023-04-05 돌비 레버러토리즈 라이쎈싱 코오포레이션 Audio decoders and decoding methods
JP6724828B2 (en) * 2017-03-15 2020-07-15 カシオ計算機株式会社 Filter calculation processing device, filter calculation method, and effect imparting device
JP6805060B2 (en) * 2017-04-17 2020-12-23 株式会社河合楽器製作所 Resonance sound control device and localization control method for resonance sound

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5917919A (en) * 1995-12-04 1999-06-29 Rosenthal; Felix Method and apparatus for multi-channel active control of noise or vibration or of multi-channel separation of a signal from a noisy environment
US6104822A (en) * 1995-10-10 2000-08-15 Audiologic, Inc. Digital signal processing hearing aid
US20030063759A1 (en) * 2001-08-08 2003-04-03 Brennan Robert L. Directional audio signal processing using an oversampled filterbank
US20030108214A1 (en) * 2001-08-07 2003-06-12 Brennan Robert L. Sub-band adaptive signal processing in an oversampled filterbank
US20080137874A1 (en) * 2005-03-21 2008-06-12 Markus Christoph Audio enhancement system and method
US7518053B1 (en) * 2005-09-01 2009-04-14 Texas Instruments Incorporated Beat matching for portable audio
US20090312820A1 (en) * 2008-06-02 2009-12-17 University Of Washington Enhanced signal processing for cochlear implants
US20100131086A1 (en) * 2007-04-13 2010-05-27 Kyoto University Sound source separation system, sound source separation method, and computer program for sound source separation
US20120166187A1 (en) * 2010-08-31 2012-06-28 Sonic Network, Inc. System and method for audio synthesizer utilizing frequency aperture arrays
US20130336509A1 (en) * 2012-06-15 2013-12-19 Starkey Laboratories, Inc. Frequency translation in hearing assistance devices using additive spectral synthesis
US8653354B1 (en) * 2011-08-02 2014-02-18 Sonivoz, L.P. Audio synthesizing systems and methods
US20140278385A1 (en) * 2013-03-13 2014-09-18 Kopin Corporation Noise Cancelling Microphone Apparatus
US20150133812A1 (en) * 2012-01-09 2015-05-14 Richard Christopher DeCharms Methods and systems for quantitative measurement of mental states

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4185531A (en) 1977-06-24 1980-01-29 Oberheim Electronics, Inc. Music synthesizer programmer
US4649783A (en) 1983-02-02 1987-03-17 The Board Of Trustees Of The Leland Stanford Junior University Wavetable-modification instrument and method for generating musical sound
US4988960A (en) 1988-12-21 1991-01-29 Yamaha Corporation FM demodulation device and FM modulation device employing a CMOS signal delay device
US5524057A (en) * 1992-06-19 1996-06-04 Alpine Electronics Inc. Noise-canceling apparatus
US5841387A (en) 1993-09-01 1998-11-24 Texas Instruments Incorporated Method and system for encoding a digital signal
US8085959B2 (en) * 1994-07-08 2011-12-27 Brigham Young University Hearing compensation system incorporating signal processing techniques
US5684260A (en) 1994-09-09 1997-11-04 Texas Instruments Incorporated Apparatus and method for generation and synthesis of audio
US5811706A (en) 1997-05-27 1998-09-22 Rockwell Semiconductor Systems, Inc. Synthesizer system utilizing mass storage devices for real time, low latency access of musical instrument digital samples
US5890125A (en) * 1997-07-16 1999-03-30 Dolby Laboratories Licensing Corporation Method and apparatus for encoding and decoding multiple audio channels at low bit rates using adaptive selection of encoding method
SE0001926D0 (en) * 2000-05-23 2000-05-23 Lars Liljeryd Improved spectral translation / folding in the subband domain
US8098844B2 (en) * 2002-02-05 2012-01-17 Mh Acoustics, Llc Dual-microphone spatial noise suppression
US20030187663A1 (en) * 2002-03-28 2003-10-02 Truman Michael Mead Broadband frequency translation for high frequency regeneration
US6982377B2 (en) * 2003-12-18 2006-01-03 Texas Instruments Incorporated Time-scale modification of music signals based on polyphase filterbanks and constrained time-domain processing
US8041046B2 (en) * 2004-06-30 2011-10-18 Pioneer Corporation Reverberation adjusting apparatus, reverberation adjusting method, reverberation adjusting program, recording medium on which the reverberation adjusting program is recorded, and sound field correcting system
US7433463B2 (en) * 2004-08-10 2008-10-07 Clarity Technologies, Inc. Echo cancellation and noise reduction method
JP4631939B2 (en) * 2008-06-27 2011-02-16 ソニー株式会社 Noise reducing voice reproducing apparatus and noise reducing voice reproducing method
JP5439118B2 (en) * 2008-11-14 2014-03-12 パナソニック株式会社 Noise control device
US9100734B2 (en) * 2010-10-22 2015-08-04 Qualcomm Incorporated Systems, methods, apparatus, and computer-readable media for far-field multi-source tracking and separation
US9031268B2 (en) * 2011-05-09 2015-05-12 Dts, Inc. Room characterization and correction for multi-channel audio


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180005617A1 (en) * 2015-03-20 2018-01-04 Yamaha Corporation Sound control device, sound control method, and sound control program
US10354629B2 (en) * 2015-03-20 2019-07-16 Yamaha Corporation Sound control device, sound control method, and sound control program

Also Published As

Publication number Publication date
US8653354B1 (en) 2014-02-18

Similar Documents

Publication Publication Date Title
US8759661B2 (en) System and method for audio synthesizer utilizing frequency aperture arrays
US20140165820A1 (en) Audio synthesizing systems and methods
US8229135B2 (en) Audio enhancement method and system
US3007361A (en) Multiple vibrato system
JP4286510B2 (en) Acoustic signal processing apparatus and method
US7649135B2 (en) Sound synthesis
US9515630B2 (en) Musical dynamics alteration of sounds
CN1230273A (en) Reduced-memory reverberation simulator in sound synthesizer
Jehan et al. An audio-driven perceptually meaningful timbre synthesizer
US5401897A (en) Sound synthesis process
KR101489035B1 (en) Method and apparatus for processing audio signals
Creasey Audio Processes: Musical Analysis, Modification, Synthesis, and Control
US3147333A (en) Audio modulation system
JP4303026B2 (en) Acoustic signal processing apparatus and method
US20060217984A1 (en) Critical band additive synthesis of tonal audio signals
US20080212784A1 (en) Parametric Multi-Channel Decoding
JP4096973B2 (en) Waveform signal generation method, waveform signal generation apparatus, and recording medium
JP2022506838A (en) Overtone generation in audio systems
WO2023170756A1 (en) Acoustic processing method, acoustic processing system, and program
EP4216205A1 (en) Electronic musical instrument, method of generating musical sound, and program
JP2836116B2 (en) Sound processing device
JPH01257898A (en) Electronic musical instrument
JP3525482B2 (en) Sound source device
Brooker Final Written Review-High and Low Shelf Tone Shaper
JP3278884B2 (en) Electronic musical instrument tone control device

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION