EP0882286B1 - PC audio system with frequency compensated wavetable data - Google Patents

PC audio system with frequency compensated wavetable data

Info

Publication number
EP0882286B1
EP0882286B1 (application EP97907795A)
Authority
EP
European Patent Office
Prior art keywords
data
patch
frequency
memory
wavetable
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
EP97907795A
Other languages
German (de)
French (fr)
Other versions
EP0882286A1 (en)
Inventor
Larry Hewitt
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced Micro Devices Inc
Original Assignee
Advanced Micro Devices Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Advanced Micro Devices Inc filed Critical Advanced Micro Devices Inc
Publication of EP0882286A1 publication Critical patent/EP0882286A1/en
Application granted granted Critical
Publication of EP0882286B1 publication Critical patent/EP0882286B1/en
Anticipated expiration (legal status: Critical)
Current legal status: Expired - Lifetime (Critical)

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H7/00 Instruments in which the tones are synthesised from a data store, e.g. computer organs
    • G10H7/002 Instruments in which the tones are synthesised from a data store, e.g. computer organs, using a common processing for different operations or calculations, and a set of microinstructions (programme) to control the sequence thereof
    • G10H7/004 Instruments in which the tones are synthesised from a data store, e.g. computer organs, using a common processing for different operations or calculations, and a set of microinstructions (programme) to control the sequence thereof, with one or more auxiliary processor in addition to the main processing unit
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H7/00 Instruments in which the tones are synthesised from a data store, e.g. computer organs
    • G10H7/02 Instruments in which the tones are synthesised from a data store, e.g. computer organs, in which amplitudes at successive sample points of a tone waveform are stored in one or more memories
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2230/00 General physical, ergonomic or hardware implementation of electrophonic musical tools or instruments, e.g. shape or architecture
    • G10H2230/025 Computing or signal processing architecture features
    • G10H2230/031 Use of cache memory for electrophonic musical instrument processes, e.g. for improving processing capabilities or solving interfacing problems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2240/00 Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
    • G10H2240/171 Transmission of musical instrument data, control or status information; Transmission, remote access or control of music data for electrophonic musical instruments
    • G10H2240/201 Physical layer or hardware aspects of transmission to or from an electrophonic musical instrument, e.g. voltage levels, bit streams, code words or symbols over a physical link connecting network nodes or instruments
    • G10H2240/275 Musical interface to a personal computer PCI bus, "peripheral component interconnect bus"

Definitions

  • This invention relates to a PC audio system including a wavetable audio synthesizer and a memory which supplies frequency compensated wavetable data. More particularly, this invention relates to a PC audio system, including a wavetable audio synthesizer and wavetable cache, which interfaces with a PC system memory supplying frequency compensated wavetable data.
  • WO-A-92 15087 is concerned with musical data storage techniques. A plurality of data segments, such as sound recordings, are stored on a mass storage device such as a disc, and a first portion of each sound segment is stored in an IC memory so as to be instantly available.
  • Addressing circuits read the first portion of a data segment stored in the IC memory, and then the portion of the mass storage device, so as to give substantially simultaneous playback of data.
  • In order to simulate fast playback of data, the data is recorded on the disc with every nth sample in the data stream also being recorded in a fast block, so that when playback speed is increased, only data from the fast block is re-played.
  • EP-A-0 474 177 discloses a tone signal generating device, which includes waveform generating means for generating digital waveform sample data at a frequency corresponding to a designated pitch. Sequentially generated digital waveform sample data are operated with generated coefficients which corresponds to a desired interpolation characteristic and the operated data is synthesized to form one sample data. In this way, the interpolation characteristic can be controlled for the desired filter characteristic.
  • US-A-4 508 001 discloses an electronic musical instrument having an optical disc memory, which can only be accessed at low speed, and a semiconductor memory which can be accessed at high speed.
  • The semiconductor memory is used to store an initial portion of a waveshape, and the disc memory stores the remaining portion.
  • A readout circuit starts to read out the initial portion and the remaining portion simultaneously, to compensate for the low-speed accessibility of the optical disc memory.
  • Wavetable synthesizers generate sounds through digital processing of entire digitized sound waveforms or portions of digitized sound waveforms stored in wavetable memory. See U.S. patent application serial No. 08/334,461, entitled “Digital Signal Processor Architecture for Wavetable Audio Synthesizer,” by Norris, et al .
  • Wavetable synthesizers generate sounds by "playing back" from wavetable memory, to a digital-to-analog converter (DAC), a particular digitized waveform.
  • the addressing rate of the wavetable data controls the frequency or pitch of the analog output.
  • the bit width of the wavetable data affects the resolution of the sound being generated. For example, better resolution can be achieved with 16-bit wide data versus 8-bit wide data. 16-bit digital audio is becoming the standard in the industry.
  • Wavetable synthesizers have application in personal computers.
  • personal computers are manufactured with only limited audio capabilities. These limited capabilities provide monophonic tone generation to provide audible signals to the user concerning various simple functions, such as alarms or other user alert signals.
  • the typical personal computer system has no capability of providing stereo, high-quality audio which is a desired enhancement for multimedia and video game applications, nor do they have built-in capability to generate or synthesize music or other complex sounds.
  • Music synthesis capability is necessary when the user desires to use a musical composition application to produce or record sounds through the computer to be played on an external instrument, or through analog speakers and in multimedia (CD-ROM) applications as well.
  • a number of add-on products have been developed.
  • One such line of products is referred to in the industry as a sound card.
  • These sound cards are circuit boards carrying a number of integrated circuits, many times including a wavetable synthesizer, wavetable memory and other associated circuitry which the user installs in expansion slots provided by the computer manufacturer.
  • the expansion slots provide an interface to the system bus thereby enabling the host processor to access sound generation and control functions on the board under the control of application software.
  • Typical sound cards also provide MIDI interfaces and game ports to accept inputs from MIDI instruments such as keyboard and joysticks for games.
  • One prior art sound card is that offered by Advanced Gravis and Forte under the name Ultrasound.
  • This sound card is an expansion slot embodiment which incorporates into one chip (the "GF-1") a wavetable synthesizer, MIDI and game interfaces, DMA control and Adlib Sound Blaster compatibility logic.
  • In addition to this ASIC, the Ultrasound card includes on-board DRAM (1 megabyte) for wavetable data; an address decoding chip; separate analog circuitry for interfacing with analog inputs and outputs; a separate programmable ISA bus interface chip; an interrupt PAL chip; and a separate digital-to-analog/analog-to-digital converter chip. See U.S. patent application serial No. 072,838, entitled “Wave Table Synthesizer,” by Travers, et al., which is incorporated herein by reference.
  • On-board sound card memory typically has a size of between one-half to four megabytes and stores all the wavetable data used to synthesize music. At a cost of about $25.00 per megabyte, sound card memory cost is a significant factor in the overall cost of the sound card. Therefore, if PC system memory could be used to supply the wavetable data, thereby eliminating or reducing the need for sound card memory, sound cards would be less expensive.
  • Utilizing PC system memory to store wavetable data, however, raises some concerns.
  • One concern is that available PC system memory is limited and cannot be spared for wavetable data. However, this should be less of a concern in future state-of-the-art PCs which are expected to contain larger system memories and should have space available for wavetable data.
  • Another concern with using system memory is the numerous accesses to memory that are required by prior art synthesizers. For example, prior art wavetable synthesizers which can synthesize thirty-two independent voices (i.e., instrument sounds) must access memory thirty-two times every 22.7 microseconds to retrieve the required data samples. If this number of accesses was made to system memory, an unacceptably high percentage of the system bus bandwidth would be used for synthesizer operations, and thus less of the bus bandwidth could be used for other PC operations.
  • A further concern is that the synthesizer might process wavetable data faster than it receives it from system memory (i.e., faster than the system's maximum bus latency). Such a situation would be unacceptable since the processed data would have gaps, and undesirable pops would occur in the synthesized music as it is played.
  • Accordingly, the present invention provides a method of providing a frequency compensated version of a first patch of wavetable data having a first sample frequency and stored in a first location of a memory, wherein said frequency compensated patch is stored in said memory and either said first patch or said frequency compensated patch is accessed from said memory by a digital wavetable audio synthesizer and used to generate digital audio signals having a second sample frequency which is higher than said first sample frequency, said method comprising the steps of:
  • (a) accessing said first patch of wavetable data from said first location of said memory;
  • (b) deriving, from said first patch of wavetable data, a patch of wavetable data which has a third sample frequency greater than said first sample frequency, wherein said derived patch of wavetable data comprises said frequency compensated patch; and
  • (c) storing said frequency compensated patch in a second location of said memory for use by said digital audio synthesizer in generating digital audio signals having said second sample frequency.
  • the present invention will be described with reference to a PC audio circuit which is designed to interface with and provide audio enhancement to a host personal computer of the type including a central processor, system memory and system bus.
  • the PC audio circuit includes a cache memory that is of a significantly reduced size and cost and can only store portions of the total wavetable data at a time. Instead, all the wavetable data is stored in system memory of the host PC and transferred in portions to the cache memory, as needed by the PC audio circuit.
  • the PC audio circuit processes the data and generates digital audio signals, such as music or sound effects. Because the cache memory is of reduced size and cost, the PC audio circuit has a lower overall cost than prior art systems.
  • the PC audio circuit processes several frames of data samples for a voice before processing the next designated voice.
  • several wavetable data samples for a given voice can be retrieved from system memory at one time and made available in the cache memory, thereby reducing the total number of accesses to memory required and the percentage use of system bus bandwidth.
  • Processing the data samples in this manner also allows for certain parallel processing operations. For example, while a plurality of data samples are being processed for active voices, other groups of data samples can be retrieved from system memory and made available for processing in the cache memory. This ensures a continuous supply of data and reduces concerns about the maximum allowable system bus access latency.
  • the PC audio circuit retrieves several wavetable data samples at once, it is preferable that a voice's data samples be organized together in a block in system memory. Thus, if a consecutive series of data samples are requested, they can be accessed using the system memory's page mode which will increment through the data samples in the block.
  • the bus between system memory and the PC audio circuit is a PCI bus, thereby enabling data accessed through the page mode to be transmitted to the PC audio circuit in burst mode.
  • the PC audio circuit includes a PCI bus interface block, an internal address data bus, digital signal processor, output control state machine, internal bus arbiter, and cache memory.
  • the PC audio circuit can be formed on a monolithic integrated circuit, which includes the cache memory or with the cache memory external to the integrated circuit. Data in the system memory is transmitted over the PCI bus, through the PCI interface block, over the internal bus, and into the cache memory.
  • the digital signal processor performs computations and other processing to translate the data samples in the cache memory into digital audio signals suitable for conversion into desired analog audio signals.
  • the DSP can generate up to 32 independent digital audio signals or voices at a 44.1 KHz frame rate.
  • the digital audio signals generated for each voice by the DSP are accumulated in the cache memory, or can be accumulated in a separate cache memory, until they are ready to be output to an external digital-to-analog converter (DAC).
  • the output control state machine (OCSM) controls the transmission of the accumulated data from the cache out to the external DAC at a sample rate of 44.1 KHz.
  • the internal bus arbiter (IBA) is responsible for directing traffic between the various blocks that will access the internal bus, including the OCSM, the cache, the PCI interface block, and the DSP.
  • the internal bus operates at 33 MHz, along with most of the logic, from a clock that is provided as part of the PCI standard.
  • the cache preferably is a low-cost SRAM having a capacity of about 8 to 32 kilobytes.
  • the available memory in the cache can be assigned to data sample storage, accumulator storage, and general storage for the DSP.
  • Data samples can be stored in data queues A and B, while the digital audio signals generated by the DSP can be stored in accumulator queues A and B.
  • data queues A and B each store up to 64 16-bit data samples for each of 32 voices, while accumulator queues A and B each accumulate the generated data samples for up to 32 voices.
  • the generated data samples are accumulated together in accumulator queue A or B as one set of 64 16-bit data samples.
  • the PCI interface block detects when there is a need to update the cache with data samples and initiates bus master requests.
  • the addresses in system memory from which the data samples are to be retrieved are sent from the PCI interface block to the PCI address bus.
  • data samples retrieved from system memory are transmitted on the internal data bus to the cache.
  • 128 data samples are loaded into the cache (64 data samples in each of data queues A and B) for each active voice.
  • the DSP processes the data samples in one of the data queues, for the first active voice. The other data queue is presently inactive. Then, the DSP processes the data samples for the next designated active voice. As the DSP processes these data samples, the data samples just generated by the DSP are accumulated in one of the accumulator queues. This process continues until all active voices have been processed, and then the accumulator queues toggle and the other accumulator queue will accumulate generated data samples while the accumulated data samples in the first accumulator queue can be output to an external DAC.
  • the PCI interface block sends requests on the PCI bus for additional data samples from system memory.
  • the data samples retrieved from system memory are stored in the first data queue, thereby writing over the data samples just processed. While these data samples are being retrieved, the DSP processes the data samples in the other queue. Then, the data queues toggle, and the process continues, allowing up to 64 data samples to be processed at a time.
  • If the DSP processes the data samples at the same frequency as the sampling frequency used during analog-to-digital conversion (recording) of the original audio signal, then when the audio signals generated by the DSP are converted to analog and played, the resulting audio signal will sound the same (i.e., have the same frequency) as the original audio signal used to create the data samples.
  • the latency problem for F c > 2 can be avoided by having the PC audio circuit retrieve only the data samples which will be processed and not the data samples which will be skipped by the DSP. Thus, all the data samples retrieved and stored in a data queue will be processed.
  • This feature is implemented by providing means in the PCI interface block for accessing the F c values for the active voices, and then calculating the next system memory address for retrieving data samples for a given voice based on the current system memory address and the F c value (a sketch of this calculation follows below). Retrieving only selected samples for each active voice when F c > 1 reduces the available PCI bandwidth, since the burst mode cannot be used for transmitting the data samples. Even without the burst mode, the PC audio circuit's percentage usage of the bandwidth may still be acceptable, though less desirable.
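  • As an illustration of the address calculation just described, the following minimal C sketch advances a voice's current system memory address by its F c value so that only the samples the DSP will actually process are fetched. The 16.16 fixed-point representation and the names (voice_addr_state, next_sample_addr) are assumptions made for this sketch, not details taken from the patent.

      #include <stdint.h>

      /* Hypothetical per-voice address state; Fc is kept in 16.16 fixed point. */
      typedef struct {
          uint32_t cur_addr;   /* current system memory byte address        */
          uint32_t fc_fix;     /* frequency ratio Fc, 16.16 fixed point     */
          uint32_t frac;       /* accumulated fractional sample position    */
      } voice_addr_state;

      /* Advance to the address of the next sample the DSP will process,
         skipping the samples that would otherwise be discarded when Fc > 1.
         Samples are assumed to be 16 bits (2 bytes) wide. */
      static uint32_t next_sample_addr(voice_addr_state *v)
      {
          v->frac += v->fc_fix;                 /* step forward by Fc samples */
          v->cur_addr += (v->frac >> 16) * 2;   /* whole samples -> bytes     */
          v->frac &= 0xFFFF;                    /* keep fractional remainder  */
          return v->cur_addr;
      }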
  • the PC audio system includes driver software which facilitates the creation of frequency compensated files or patches of wavetable data which are stored in system memory and can be transmitted to cache memory in burst mode, thereby reducing the PCI bus bandwidth requirements.
  • the driver software facilitates the creation of a frequency compensated version of the original patch, containing only every fourth sample. This frequency compensated file or patch is stored in system memory and can be transmitted in burst mode to the PC audio circuit for processing by the DSP.
  • a suitable PC audio system includes a PC audio circuit, of the type described above, driver software, and a MIDI or a comparable file.
  • the MIDI file contains parameters that define the song or other audio signals to be generated by the PC audio circuit.
  • the driver software performs the function of interpreting the parameters contained in the file and programming the PC audio circuit to generate the desired audio signals from wavetable data in system memory. As discussed below, the driver software also contains instructions which control the function of deriving frequency compensated patches for high F c voices.
  • The system CPU determines, for a given voice, the ratio of the desired frequency for the voice to the recording frequency of the data in system memory associated with the voice. For F c > 2.0, the CPU derives a frequency compensated patch as described below.
  • a frequency compensated patch can be derived a number of ways.
  • One way, which requires the least CPU processing, is to copy or transpose a fraction of the wavetable data samples from the original patch for the voice into a new file or patch stored in system memory.
  • the fraction of data samples transposed to the new patch is based on the F c value calculated by the system CPU.
  • the frequency compensated patch has a frequency that is higher than the frequency of the original patch. For example, if every fourth data sample is copied from the original patch to create the frequency compensated patch, the frequency compensated patch has a frequency which is four times the frequency of the original patch; the frequency compensated patch has an effective frequency (F eff ) equal to four.
  • the goal when deriving frequency compensated patches is to provide an effective patch frequency which is high enough that the PC audio circuit does not have to more than double the patch's frequency to generate the desired audio signals.
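  • Choosing how many samples to drop therefore reduces to picking the smallest integer decimation factor that brings the remaining frequency ratio down to 2 or less. A minimal C sketch under that assumption (the function name choose_decimation is invented here):

      /* Return the smallest integer N such that fc / N <= 2.0, i.e. the
         effective frequency Feff of the compensated patch.  For fc <= 2.0
         no compensation is needed and 1 is returned. */
      static int choose_decimation(double fc)
      {
          int n = 1;
          while (fc / n > 2.0)
              n++;
          return n;   /* copy every Nth sample; Feff == n */
      }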
  • Another way of deriving frequency compensated patches is to digitally filter the original patch of wavetable data samples.
  • Digital filtering requires more CPU processing but is preferred over the above technique, since it removes the high frequency component of the wavetable data such that the generated digital audio signals have less noise.
  • One example of a digital filtering technique is to take the average of every nth sample.
  • Another example, which requires even more CPU processing power, is to calculate the average of the moving average.
  • the system CPU can be directed by the driver software to either: (1) derive all of the required frequency compensated patches just prior to the PC audio circuit's processing of the MIDI file; or (2) derive each patch as the PC audio circuit processes through the file.
  • When the PC audio circuit processes a frequency compensated patch, adjustments must be made to account for the higher frequency of the patch.
  • the driver software programs the PC audio circuit to make these adjustments.
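  • One plausible adjustment, sketched below purely as an assumption consistent with the F eff example above, is that the frequency ratio programmed into the PC audio circuit for the voice is divided by the patch's effective frequency:

      /* Illustrative only: the ratio actually programmed for a voice that
         uses a frequency compensated patch with effective frequency Feff. */
      static double adjusted_ratio(double fc, double feff)
      {
          return fc / feff;   /* e.g. Fc = 4.0, Feff = 4.0 -> ratio 1.0 */
      }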
  • the following description sets forth the preferred and alternative embodiments of a PC audio circuit which can be formed on a monolithic integrated circuit.
  • the PC audio circuit is designed to interface with and provide audio enhancement to a host personal computer of the type including a central processor, system memory and system bus.
  • the fundamental difference between the PC audio circuit of the present invention and prior art PC audio circuits is that the local memory is of a significantly reduced size (e.g., 8-32 kilobytes) and can only store portions of the total wavetable data at a time. Instead, all the wavetable data (e.g., 1-4 megabytes) is stored in system memory of the host PC and transferred in portions to the PC audio circuit's local memory, also known as a cache memory, as needed by the PC audio circuit.
  • the PC audio circuit uses the data to generate digital audio signals such as music or sound effects.
  • Because system memory is utilized to store wavetable data, thereby reducing the size of the local memory, the overall cost of the PC audio circuit will be reduced.
  • Using system memory, however, raises concerns that: (i) an unacceptable percentage of system bus bandwidth will be used; and (ii) the PC audio circuit will process wavetable data faster than it can be supplied within the host computer's maximum bus latency.
  • the PC audio circuit of the present invention is designed to alleviate these concerns.
  • The typical frame rate for audio is 44.1 KHz. At this frame rate, each frame is approximately 22.7 microseconds. Thus, if a prior art PC audio circuit generates 32 voices during a frame, 32 data accesses must be made to memory during this short time period. This is not a problem if the data accesses are to local memory. If this number of accesses were made to system memory, however, bus bandwidth usage and bus latency would become a concern.
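  • The figures above follow directly from the frame rate; the short, self-contained C program below reproduces the arithmetic (printed values are approximate):

      #include <stdio.h>

      int main(void)
      {
          const double frame_rate = 44100.0;            /* frames per second  */
          const double frame_us   = 1e6 / frame_rate;   /* ~22.7 microseconds */
          const double accesses   = 32 * frame_rate;    /* one access per
                                                           voice per frame    */
          printf("frame period : %.1f us\n", frame_us);
          printf("accesses/sec : %.0f\n", accesses);    /* ~1.41 million      */
          return 0;
      }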
  • the PC audio circuit of the present invention processes several frames of data samples for a voice before processing the next designated voice.
  • several wavetable data samples for a given voice can be retrieved from system memory at one time and made available in the cache memory, thereby reducing the total number of accesses to memory required and the percentage use of bus bandwidth.
  • Processing the data samples in this manner also allows for certain parallel processing operations. For example, while a plurality of data samples are being processed for active voices, other groups of data samples can be retrieved from system memory and made available for processing in the PC audio circuit's cache memory. This ensures a continuous supply of data and reduces concerns about the bus access latency.
  • Since the PC audio circuit of the present invention retrieves several wavetable data samples at once, it is preferable that a voice's data samples be organized together in a block in the system memory. Thus, if a consecutive series of data samples are requested, they can be accessed using the system memory's page mode which will increment through the data samples in the block. If the bus between system memory and the PC audio circuit is a PCI bus (i.e., a higher performance bus), data accessed through the page mode can be transmitted to the PC audio circuit in burst mode (i.e., at a faster rate). Use of the burst mode decreases the maximum bus latency and the percentage of bandwidth usage.
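  • A minimal C sketch of how a voice's contiguous block of samples might be walked so that consecutive samples can be fetched as one page-mode/burst transfer rather than as individual accesses. The descriptor layout, the loop that models the burst one word at a time, and all names are illustrative assumptions:

      #include <stdint.h>

      /* Hypothetical descriptor for one voice's contiguous block of samples
         in system memory (compare the loop point registers described below). */
      typedef struct {
          const int16_t *block_start;  /* first sample of the block           */
          const int16_t *block_end;    /* one past the last sample            */
          const int16_t *cur;          /* next sample to fetch                */
          const int16_t *loop_point;   /* address to wrap back to             */
      } voice_block;

      /* Fetch up to 'count' consecutive samples into a cache queue in one
         burst; wrap to the loop point if the end of the block is reached. */
      static void fetch_burst(voice_block *v, int16_t *queue, int count)
      {
          for (int i = 0; i < count; i++) {
              if (v->cur == v->block_end)
                  v->cur = v->loop_point;
              queue[i] = *v->cur++;     /* models one burst beat */
          }
      }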
  • FIG. 1 illustrates the preferred architecture of the PC audio circuit.
  • the PC audio circuit 10 includes a PCI bus interface block 12, internal address data bus 14, digital signal processor 16, output control state machine 18, internal bus arbiter 20, and cache memory 22. Data is passed from the PCI bus 24, through PCI interface block 12, over the internal bus 14, and into the cache 22.
  • the PC audio circuit 10, including cache 22, can be formed on a monolithic integrated circuit.
  • the dashed box in Figure 1 represents the perimeter of a preferred embodiment of such an integrated circuit. Alternatively, the cache 22 may be external to the circuit.
  • the digital signal processor (DSP) 16 operates on the data similarly to the wavetable synthesizer DSP disclosed in U.S. patent application Serial No. 08/334,461, by Norris, et al., which is incorporated herein by reference.
  • the DSP 16 of the present invention performs computations and other processing to translate raw wavetable data into digital audio signals suitable for conversion into the desired analog audio signals.
  • the DSP 16 operates from instructions stored in ROM code 26 and preferably can generate up to 32 independent digital audio signals or voices at a 44.1 KHz frame rate.
  • the DSP 16 of the present invention processes several frames of wavetable data samples from voice to voice, rather than one data sample per voice per frame.
  • the implementation details for DSP 16 are within the level of skill possessed by those of ordinary skill in the art.
  • the digital audio signals generated for each voice by DSP 16 are accumulated in cache 22, or can be accumulated in a separate cache memory, until they are ready to be output on port 28 to an external audio digital-to-analog converter (DAC).
  • the output control state machine (OCSM) 18 is responsible for transmitting the accumulated data from the cache 22 out to the external DAC at the sample rate of 44.1 KHz. OCSM 18 utilizes its own 16.9344 MHz clock 30 to ensure synchronization with the sample rate.
  • the internal bus arbiter (IBA) 20 is responsible for directing traffic between the various blocks that will access the internal bus 14, including the OCSM 18, the cache 22, the PCI interface block (PCI I/F block) 12 and the DSP 16.
  • the internal bus 14 operates at 33 MHz, along with most of the logic, from a clock 32 that is provided as part of the PCI standard.
  • the internal bus 14 has a 32-bit data bus and a 16-bit address bus.
  • The address map for the internal bus is as follows:
    • 0000 through 1FFF: 8K x 32 SRAM cache. This space provides the port into the cache memory.
    • 2000 through 201F: 32 PCI I/F-block voice cache status registers. There are 32 of these registers, one to correspond to each of the 32 possible voices. Bit[0] of these registers is set (by the DSP) when that voice needs cache queue A updated with data from the PCI bus. (See discussion below regarding the cache.) Bit[1] of these registers is set (by the DSP) when that voice needs cache queue B updated with data from the PCI bus.
    • PCI I/F-block system address loop point registers: These are 32-bit pointers to each of the 32 voices' system memory loop point addresses for the sample. As data for a voice is brought into the PC audio circuit from the PCI bus, if the address crosses over the end address, then it jumps back to the address specified by these registers.
    • 2080 through 209F: 32 PCI I/F-block current system address registers. These registers store the current address in system memory from which the sample data for each of the 32 voices is accessed. They increment whenever a new 32-bit word is brought in from system memory to the cache. They jump from the system address end register location to the system address loop point location when the current address passes the end point.
    • 3000: OCSM sample count register. The DSP can observe bit 7 of this counter to determine when it is time to start accumulating the next group of 64 samples.
    • 3001: OCSM control register. When bit[0] of this register is cleared (by the DSP), no data is passed out to the DAC. When it is high, data is drawn from the accumulator cache and passed to the external DAC.
  • PCI Bus: The PCI bus is assigned a block of 256 I/O (byte wide) addresses through standard PCI plug and play circuitry. These addresses are used by the system's central processor as follows:
    • 80 through 81: Internal bus address register. The system CPU is allowed access to the internal bus by setting up the 16-bit address in these two ports and writing or reading through the data ports below.
    • 84 through 87: Internal data bus port. Access (read and write) to the internal bus is allowed via this port, with the internal address specified by 80-81 above.
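  • For reference, the two maps above can be captured as constants; a C sketch in which only the address values come from the tables, while the identifier names are invented:

      /* Internal bus address map (hexadecimal), per the table above. */
      enum {
          CACHE_SRAM_BASE        = 0x0000,  /* 0000-1FFF: 8K x 32 SRAM cache   */
          VOICE_CACHE_STATUS     = 0x2000,  /* 2000-201F: 32 voice status regs */
          CURRENT_SYS_ADDR_BASE  = 0x2080,  /* 2080-209F: current system addr  */
          OCSM_SAMPLE_COUNT      = 0x3000,  /* OCSM sample count register      */
          OCSM_CONTROL           = 0x3001   /* OCSM control register           */
      };

      /* PCI I/O map (offsets into the 256-byte block assigned to the device). */
      enum {
          IO_INTERNAL_ADDR_LO = 0x80,       /* 80-81: internal bus address     */
          IO_INTERNAL_DATA    = 0x84        /* 84-87: internal data bus port   */
      };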
  • Wavetable Cache RAM .
  • Cache 22 preferably is a low-cost SRAM having a capacity of about 8 to 32 kilobytes.
  • the available memory in cache 22 can be assigned to data sample storage, accumulator storage, and general storage for DSP 16.
  • Figure 2 depicts how the available memory can be suitably assigned between data sample storage and accumulator storage.
  • Data samples can be stored in data queues "A" and "B", while the digital audio signals generated by DSP 16 (generated data samples) can be stored in accumulator queues "A" and "B.” See Figure 2.
  • Data queues A and B can each store up to 64 16-bit data samples for each of 32 voices.
  • Accumulator queues A and B each can accumulate the generated data samples for up to 32 voices.
  • the generated data samples are accumulated together in queue A or B as one set of 16-bit data samples. There can be up to 64 data samples in a set.
  • Data queues A and B together can store up to 8 kilobytes, while accumulator queues A and B together can store up to 256 bytes. Additional memory can be provided in cache 22 for general DSP storage.
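  • The queue sizes given above imply a cache layout along the following lines. This C sketch is an assumption for illustration (the sizes follow from 64 samples x 16 bits x 32 voices per data queue and 64 samples x 16 bits per accumulator queue):

      #include <stdint.h>

      #define NUM_VOICES        32
      #define SAMPLES_PER_QUEUE 64

      /* Illustrative layout: 2 x 4 KB data queues plus 2 x 128 B accumulators. */
      typedef struct {
          int16_t data_a[NUM_VOICES][SAMPLES_PER_QUEUE];  /* 4096 bytes */
          int16_t data_b[NUM_VOICES][SAMPLES_PER_QUEUE];  /* 4096 bytes */
          int16_t accum_a[SAMPLES_PER_QUEUE];             /*  128 bytes */
          int16_t accum_b[SAMPLES_PER_QUEUE];             /*  128 bytes */
          /* remaining SRAM: general storage for the DSP */
      } wavetable_cache;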
  • One of the two data queues for a voice is used to store data samples as they are retrieved from system memory, while the other data queue supplies data samples to the DSP 16.
  • For example, data queue A may supply data samples to DSP 16 while data queue B stores data samples retrieved from system memory.
  • Data queue B is filled with the next set of data samples to be processed by DSP 16, and must be filled before the DSP completes the processing of the data samples in data queue A. Otherwise, there will be undesirable gaps in the generated digital audio signals.
  • data queues A and B are toggled, and DSP 16 processes the data samples stored in data queue B, and data samples retrieved from system memory are stored in data queue A. This process continues as long as DSP 16 processes data samples.
  • one of the accumulator queues is used to supply accumulated data samples to an external DAC, while the other accumulator accumulates data samples generated by DSP 16.
  • For example, accumulator queue A may supply accumulated data samples to the external DAC while accumulator queue B accumulates data samples generated by DSP 16.
  • the generated data samples for all of the active voices must be accumulated in accumulator queue B before all the data samples in accumulator queue A have been transmitted to the external DAC. Otherwise, there will be gaps in the analog signal.
  • When accumulator queues A and B toggle, the data samples accumulated in accumulator queue B are transmitted to the external DAC, and newly generated data samples are accumulated in accumulator queue A.
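  • A compact C sketch of the double buffering ("toggling") described above for both the data queues and the accumulator queues. The control flow is a simplification and all names are invented for this sketch:

      #include <stdint.h>
      #include <stdbool.h>

      #define NUM_VOICES 32
      #define QUEUE_LEN  64

      /* One frame group: the DSP drains the active data queue of every active
         voice and mixes the result into the active accumulator queue, then
         both pairs of queues toggle.  (Accumulators are widened to 32 bits
         here purely to keep the sketch free of overflow concerns.) */
      static void process_frame_group(int16_t data[2][NUM_VOICES][QUEUE_LEN],
                                      int32_t accum[2][QUEUE_LEN],
                                      bool *use_a, int active_voices)
      {
          int idx = *use_a ? 0 : 1;           /* active data/accumulator set  */

          for (int i = 0; i < QUEUE_LEN; i++)
              accum[idx][i] = 0;
          for (int v = 0; v < active_voices; v++)
              for (int i = 0; i < QUEUE_LEN; i++)
                  accum[idx][i] += data[idx][v][i];   /* mix voice into set   */

          /* While this runs, the PCI interface refills data[1 - idx] and the
             OCSM drains accum[1 - idx] to the external DAC. */
          *use_a = !*use_a;                   /* toggle both pairs of queues  */
      }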
  • An address map for a wavetable cache suitable for the present invention is as follows:
    • 0000 through 001F: Cache queue "A" for voice 0 of 31
    • 0020 through 003F: Cache queue "B" for voice 0 of 31
    • 0040 through 005F: Cache queue "A" for voice 1 of 31
    • 0060 through 007F: Cache queue "B" for voice 1 of 31
    • 0080 through 07BF: Cache queues for voices 2 through 30
    • 07C0 through 07DF: Cache queue "A" for voice 31 of 31
    • 07E0 through 07FF: Cache queue "B" for voice 31 of 31
    • 0800 through 083F: Accumulator cache "A"
    • 0840 through 087F: Accumulator cache "B"
    • 0880 through 1FFF: General storage for the DSP
  • PCI interface block 12 includes PCI interface controller 34, buffers 36 and 38, internal bus address register 40, and PCI I/F block registers 42.
  • PCI controller 34 is connected to the PCI address bus, buffers 36 and 38, internal bus address register 40, and PCI I/F block registers 42.
  • Buffer 36 connects to the PCI data bus, PCI controller 34, internal bus address register 40, and the internal data bus, while buffer 38 connects to the PCI address bus, PCI controller 34, and PCI I/F block registers 42.
  • Internal bus address register 40 connects to PCI controller 34, the internal address bus, the internal data bus, and buffer 36.
  • PCI I/F block registers 42 are connected to buffer 38, PCI controller 34, and the internal data and address buses.
  • PCI I/F block registers 42 contain status and address information which indicates which voice requires additional data samples to be stored in cache and the address in system memory to obtain the data samples. A detailed description of these registers is set forth in the above address map for the internal bus.
  • Internal bus address register 40 is used by the system central processor to access the PC audio circuit registers on the internal bus. For example, the central processor may need access to the PCI I/F block registers in order to write system memory addresses which indicate wavetable data storage locations. Internal bus address register 40 also stores the addresses of cache 22 at which wavetable data samples from system memory are stored. As set forth in the I/O address table above, the central processor accesses a register on the internal bus by writing its address, via buffer 36, in the internal bus address register 40. Read or write accesses to a particular register are provided through the port specified in the above table.
  • Based on the status information stored in PCI I/F block registers 42, PCI interface controller 34 detects when there is a need to update cache 22 with data samples and initiates bus master requests. Under the control of PCI interface controller 34, the addresses in system memory from which data samples are to be retrieved are sent from the PCI interface block registers 42, through buffer 38, to the PCI address bus. Retrieved data samples from system memory are sent on the PCI data bus to buffer 36. Under the control of PCI interface controller 34, data samples in buffer 36 are transmitted on the internal data bus to cache 22. The addresses in cache 22 for storing the data samples are contained in internal bus address register 40 and transmitted on the internal address bus. Preferably, PCI interface block 12 can request data samples for more than one active voice at a time.
  • PCI controller 34 calculates the cache addresses for storing the data samples by determining which voice is being updated, whether queue A or B is being updated, and which 32-bit word of the queue is being updated.
  • PCI controller 34 contains thirty-two 5-bit counters (one for each voice) to determine which sample in the queue is the next to be updated by the PCI interface block 12.
  • PCI interface block registers 42 include thirty-two 1-bit toggle registers (one for each voice) to indicate which queue each voice is currently using. These registers toggle each time a queue is filled by the PCI interface block 12.
  • the PCI controller 34 stores the calculated cache addresses in the internal bus address register 40 and controls when they are output onto the internal address bus.
  • the implementation details of PCI interface block 12 are within the level of skill possessed by those of ordinary skill in the art.
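  • Following the cache address map given earlier (0x40 words per voice, with queue B starting 0x20 words in) and the 8K x 32 organisation of the SRAM, the address calculation performed by the PCI controller can be sketched in C as follows; the function name is invented:

      #include <stdint.h>

      /* Compute the cache word address (the SRAM is organised as 8K x 32 bits)
         at which the PCI interface stores the next 32-bit word fetched for a
         voice: each voice owns 0x40 words, queue B starts 0x20 words in, and
         the 5-bit per-voice counter selects the word within the queue. */
      static uint16_t cache_write_addr(unsigned voice,       /* 0..31         */
                                       unsigned queue_b,     /* toggle bit    */
                                       unsigned word_count)  /* 5-bit counter */
      {
          return (uint16_t)(voice * 0x40u
                          + (queue_b ? 0x20u : 0u)
                          + (word_count & 0x1Fu));
      }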
  • FIG. 4 sets forth a block diagram of OCSM 18.
  • OCSM 18 includes a control block 46 (with buffer), an address decode and control block 48, a 7-bit counter 50, a two-location FIFO 52, and a 22.66 microsecond timer 54.
  • control block 46 is connected to the internal address and data buses, the address decode and control block 48, the 7-bit counter 50, and FIFO 52.
  • Address decode and control block 48 is connected to the internal address bus, control block 46, 7-bit counter 50, and timer 54.
  • Seven-bit counter 50 is connected to address decode and control block 48, control block 46, and FIFO 52.
  • the seven-bit counter is described in the above address map for the internal bus, and is referred to as the OCSM sample count register.
  • FIFO 52 can store two data samples, one in a top location and the other in a bottom location, and is connected to control block 46, 7-bit counter 50, timer 54, and an external DAC.
  • Timer 54 connects to address decode and control block 48, FIFO 52, and clock generator 30.
  • the DSP 16 enables OCSM 18 by writing to its control register.
  • two data samples are transmitted, under the control of control block 46, on the internal data bus from an accumulator queue in cache 22, through the buffer in the control block 46, into FIFO 52. Every 22.66 microseconds, as indicated by timer 54, the FIFO 52 shifts the data sample in the bottom location to the top location, thereby enabling it to be output to the external DAC.
  • the data sample previously in the top location is discarded.
  • another data sample is retrieved from cache 22 and stored in the bottom location of FIFO 52, and, under the control of address decode and control block 48, the 7-bit counter 50 is incremented.
  • Address decode and control block 48 calculates the addresses of data samples to be retrieved from cache 22 from the 7-bit counter 50 and cache address information supplied on the internal address bus. These calculated addresses are sent to control block 46 where they are used to request specific data samples from cache 22.
  • the implementation details of OCSM 18 are within the level of skill possessed by those of ordinary skill in the art. DSP 16 can observe bit 7 of counter 50 to determine when it is time to start accumulating the next group of data samples.
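  • A behavioural C sketch of one OCSM output period as described above (two-location FIFO, 22.66 microsecond timer, 7-bit counter); the data types and function name are assumptions, since the real block is hardware:

      #include <stdint.h>

      typedef struct {
          int16_t fifo_top;     /* sample next in line for the DAC            */
          int16_t fifo_bottom;  /* sample behind it                           */
          uint8_t count;        /* 7-bit sample counter; the DSP watches its
                                   high bit to know when to start the next
                                   group of 64 samples                        */
      } ocsm_state;

      /* Behavioural model of one 22.66 us period: output the top FIFO entry
         to the DAC, shift the bottom entry up, and refill the bottom entry
         from the active accumulator queue in the cache. */
      static int16_t ocsm_tick(ocsm_state *s, const int16_t *accum_queue)
      {
          int16_t to_dac = s->fifo_top;
          s->fifo_top    = s->fifo_bottom;
          s->fifo_bottom = accum_queue[s->count & 0x3F];  /* 64-sample set    */
          s->count       = (uint8_t)((s->count + 1) & 0x7F);
          return to_dac;
      }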
  • Internal bus arbiter 20 is a simple arbiter that has a fixed priority for bus requests from: (i) the DSP 16 (lowest priority); (ii) the PCI bus interface block 12 (middle priority); and (iii) the OCSM 18 (highest priority).
  • Arbiter 20 grants bus access to the requesting device having highest priority, at which point that device is free to drive the address bus and either the READ or WRITE signal. If the access is a read, then the priority device will capture or use the data from the data bus; if the access is a write, then the priority device will drive the data bus.
  • the implementation details of arbiter 20 are within the level of skill possessed by those of ordinary skill in the art.
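  • The fixed-priority scheme reduces to a simple grant function; a C sketch with invented names:

      typedef enum { REQ_NONE = 0, REQ_DSP, REQ_PCI, REQ_OCSM } bus_requester;

      /* Grant the internal bus to the highest-priority requester:
         OCSM > PCI interface block > DSP. */
      static bus_requester arbitrate(int ocsm_req, int pci_req, int dsp_req)
      {
          if (ocsm_req) return REQ_OCSM;
          if (pci_req)  return REQ_PCI;
          if (dsp_req)  return REQ_DSP;
          return REQ_NONE;
      }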
  • the DSP 16 processes the data samples in one of the data queues, for the first active voice (e.g., voice 0). The other data queue is presently inactive. Then, DSP 16 processes the data samples for the next designated active voice (e.g., voice 1). As DSP 16 processes these data samples, the data samples just generated by DSP 16 are accumulated in one of the accumulator queues. This process continues until all active voices have been processed, and then the accumulator queues toggle and the other accumulator queue will accumulate generated data samples while the accumulated data samples in the first accumulator queue can be output to the external DAC.
  • the accumulator queues toggle and the other accumulator queue will accumulate generated data samples while the accumulated data samples in the first accumulator queue can be output to the external DAC.
  • the PCI interface block 12 sends requests on the PCI bus for additional data samples from system memory.
  • the data samples retrieved from system memory are stored in the first data queue, thereby writing over the data samples just processed. While these additional data samples are being retrieved, the DSP 16 processes the data samples in the other queue. Then, the data queues toggle, and the process continues, allowing up to 64 data samples to be processed at a time.
  • If DSP 16 processes the data samples at the same frequency as the sampling frequency used during analog-to-digital conversion (recording) of the original audio signal, then when the audio signals generated by the DSP are converted to analog and played, the resulting audio signal will sound the same (i.e., have the same frequency) as the original audio signal used to create the data samples.
  • The ratio of the desired frequency for a voice to the recording frequency of its wavetable data is referred to as its frequency ratio, F c . In the case just described, F c = 1 for each of the active voices.
  • the latency problem for F c > 2 can be avoided by having the PC audio circuit 10 retrieve only the data samples which will be processed and not the data samples which will be skipped by DSP 16. Thus, all the data samples retrieved and stored in a data queue of cache 22 will be processed.
  • the implementation details for this feature are within the level of skill possessed by those of ordinary skill in the art.
  • the PC audio system includes driver software which facilitates the creation of frequency compensated files or patches of wavetable data which are stored in system memory and can be transmitted to cache memory 22 in burst mode, thereby reducing the PCI bus bandwidth requirements.
  • the driver software facilitates the creation of a frequency compensated version of the original patch, containing only every fourth sample.
  • This frequency compensated file or patch is stored in system memory and can be transmitted in burst mode to PC audio circuit 10 for processing by DSP 16.
  • FIG. 5 sets forth a block diagram of a PC audio system which can provide frequency compensated patches as described above.
  • the PC audio system includes a PC audio circuit 10, of the type described above, driver software 62, and a MIDI or comparable file 64.
  • PC audio circuit 10 is connected to system memory 60 through PCI bus 24.
  • File 64 contains parameters that define the song or other audio signals to be generated by PC audio circuit 10.
  • the driver software 62 performs the function of interpreting the parameters contained in file 64 and programming PC audio circuit 10 to generate the desired audio signals from wavetable data in system memory. As discussed below, the driver software 62 also contains instructions which control the function of providing frequency compensated patches for high F c voices.
  • Upon execution of the instructions in driver software 62, the system CPU determines, for a given voice, the ratio (F c ) of the desired frequency for the voice to the recording frequency of the associated wavetable data in system memory and, for F c > 2.0, derives a frequency compensated patch as described below.
  • a frequency compensated patch can be derived a number of ways.
  • One way, which requires the least CPU processing, is to copy or transpose a fraction of the wavetable data samples from the original patch for the voice into a new file or patch stored in system memory.
  • the fraction of data samples transposed to the new patch is based on the F c value calculated by the system CPU.
  • the frequency compensated patch has a frequency that is higher than the frequency of the original patch. For example, if every fourth data sample is copied from the original patch to create the frequency compensated patch, the frequency compensated patch has a frequency which is four times the frequency of the original patch; the frequency compensated patch has an effective frequency (F eff ) equal to four.
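  • A minimal C sketch of the transposition technique described above: copying every nth sample of the original patch into a new, frequency compensated patch (n = 4 gives F eff = 4). The function name and the in-memory representation are assumptions:

      #include <stddef.h>
      #include <stdint.h>

      /* Copy every nth sample of 'src' (len samples, n >= 1) into 'dst' and
         return the number of samples written; the resulting patch has an
         effective frequency Feff equal to n. */
      static size_t derive_decimated_patch(const int16_t *src, size_t len,
                                           int16_t *dst, unsigned n)
      {
          size_t out = 0;
          for (size_t i = 0; i < len; i += n)
              dst[out++] = src[i];
          return out;
      }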
  • Another way of deriving frequency compensated patches is to digitally filter the original patch of wavetable data samples.
  • Digital filtering requires more CPU processing but is preferred over the above technique of transposing a fraction of the wavetable data samples in a patch.
  • the high frequency component of a patch of wavetable data translates into noise in a frequency compensated patch.
  • Digital filtering removes the high frequency component and thus results in a frequency compensated patch providing cleaner sound.
  • Another example of a digital filtering technique, which requires even more CPU processing power, is to calculate the average of the moving average.
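  • The two filtering variants mentioned above, taking the average of every nth group of samples and taking the average of a moving average, might be sketched in C as follows; both shorten the patch by a factor of n while attenuating the high frequency content that would otherwise become noise. The function names and buffer handling are assumptions:

      #include <stddef.h>
      #include <stdint.h>

      /* Variant 1: replace each group of n samples (n >= 1) by its average. */
      static size_t avg_decimate(const int16_t *src, size_t len,
                                 int16_t *dst, unsigned n)
      {
          size_t out = 0;
          for (size_t i = 0; i + n <= len; i += n) {
              int32_t sum = 0;
              for (unsigned k = 0; k < n; k++)
                  sum += src[i + k];
              dst[out++] = (int16_t)(sum / (int32_t)n);
          }
          return out;
      }

      /* Variant 2: first smooth with an n-point moving average (into 'tmp'),
         then average each group of n smoothed samples (more CPU work, but a
         cleaner compensated patch). */
      static size_t moving_avg_decimate(const int16_t *src, size_t len,
                                        int16_t *tmp, int16_t *dst, unsigned n)
      {
          for (size_t i = 0; i + n <= len; i++) {
              int32_t sum = 0;
              for (unsigned k = 0; k < n; k++)
                  sum += src[i + k];
              tmp[i] = (int16_t)(sum / (int32_t)n);
          }
          return avg_decimate(tmp, len - n + 1, dst, n);
      }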
  • the system CPU can be directed by the driver software 62 to either: (1) derive all of the required frequency compensated patches just prior to the PC audio circuit's 10 processing of MIDI file 64; or (2) derive each patch as the PC audio circuit 10 processes through file 64.
  • the former technique requires more memory since all the frequency compensated patches are derived and stored in system memory prior to processing by the PC audio circuit 10.
  • the former technique also causes a delay before the MIDI file 64 can be played.
  • The latter technique is preferred if the CPU has enough power and excess time to perform the necessary calculations as file 64 is processed.
  • frequency compensated patches have application in PC audio systems which obtain wavetable data from local rather than system memory.
  • digitally filtering patches of wavetable data filters out high frequencies which translate into noise when the patch is played at a frequency higher than the recording frequency (e.g., F c > 2).
  • Wavetable data, stored in a local memory and to be used by a wavetable synthesizer to generate digital audio signals with a high frequency ratio, can be preprocessed, through the above digital filtering techniques, such that the synthesizer generates clean digital audio signals.

Abstract

The PC audio circuit (10) described interfaces with and provides audio enhancement to a host personal computer of the type including a central processor, system memory and a system bus. The PC audio circuit (10) includes a digital signal processor (DSP) (16) for processing wavetable data and generating digital audio signals for a plurality of voices. The wavetable data is stored in the host computer's system memory and transferred in portions, as needed by the DSP (16) to a smaller, low-cost cache memory (22) included with the PC audio circuit (10). The DSP (16) processes several frames of data samples for an active voice before processing another voice. Processing in this manner alleviates concerns about the percentage use of system bus bandwidth and the maximum allowable system bus latency. These concerns are further alleviated by deriving frequency compensated wavetable data and storing it in system memory to be retrieved by the DSP (16) for generating digital audio signals having high frequency ratios. Digital audio signals generated for each active voice are accumulated in cache memory (22). When the digital audio signals for all active voices have been accumulated, the accumulated data is transmitted from the cache memory (22) to an external digital-to-analog converter. Since wavetable data is stored in system memory, the cache memory (22) is smaller and less expensive than the local memory in prior art PC audio circuits. Thus, the described PC audio circuit (10) has a lower overall cost.

Description

  • This invention relates to a PC audio system including a wavetable audio synthesizer and a memory which supplies frequency compensated wavetable data. More particularly, this invention relates to a PC audio system, including a wavetable audio synthesizer and wavetable cache, which interfaces with a PC system memory supplying frequency compensated wavetable data.
  • WO-A-92 15087 is concerned with musical data storage techniques. A plurality of data segments, such as sound recordings, are stored on a mass storage device such as a disc, and a first portion of each sound segment is stored in an IC memory so as to be instantly available. Addressing circuits read the first portion of a data segment stored in the IC memory, and then the portion of the mass storage device so as to give substantially simultaneous playback of data. In order to simulate fast playback of data, the data is recorded on the disc with every nth sample in the data stream also being recorded in a fast block, so that when playback speed is increased, only data from the fast block is re-played.
  • EP-A-0 474 177 discloses a tone signal generating device, which includes waveform generating means for generating digital waveform sample data at a frequency corresponding to a designated pitch. Sequentially generated digital waveform sample data are operated with generated coefficients which corresponds to a desired interpolation characteristic and the operated data is synthesized to form one sample data. In this way, the interpolation characteristic can be controlled for the desired filter characteristic.
  • US-A-4 508 001 discloses an electronic musical instrument having an optical disc memory, which can only be accessed at low speed, and a semiconductor memory which can be accessed at high speed. The semiconductor memory is used to store an initial portion of a waveshape, and the disc memory stores the remaining portion. A readout circuit starts to readout the initial portion and the remaining portion simultaneously, to compensate for the low-speed accessibility of the optical disc memory.
  • Several types of digital "synthesizers," i.e. devices that generate sound through audio digital-signal-processing, are now available. One modern type of digital synthesizer is a wavetable synthesizer. Wavetable synthesizers generate sounds through digital processing of entire digitized sound waveforms or portions of digitized sound waveforms stored in wavetable memory. See U.S. patent application serial No. 08/334,461, entitled "Digital Signal Processor Architecture for Wavetable Audio Synthesizer," by Norris, et al.
  • Wavetable synthesizers generate sounds by "playing back" from wavetable memory, to a digital-to-analog converter (DAC), a particular digitized waveform. The addressing rate of the wavetable data controls the frequency or pitch of the analog output. The bit width of the wavetable data affects the resolution of the sound being generated. For example, better resolution can be achieved with 16-bit wide data versus 8-bit wide data. 16-bit digital audio is becoming the standard in the industry.
  • Wavetable synthesizers have application in personal computers. Typically, personal computers are manufactured with only limited audio capabilities. These limited capabilities provide monophonic tone generation to provide audible signals to the user concerning various simple functions, such as alarms or other user alert signals. The typical personal computer system has no capability of providing stereo, high-quality audio which is a desired enhancement for multimedia and video game applications, nor do they have built-in capability to generate or synthesize music or other complex sounds. Musical synthesis capability is necessary when the user desires to use a musical composition application to produce or record sounds through the computer to be played on an external instrument, or through analog speakers and in multimedia (CD-ROM) applications as well.
  • Additionally, users at times desire the capability of using external analog sound sources, such as stereo equipment, microphones, and non-MIDI electrical instruments, to be recorded digitally and/or mixed with digital sources before recording or playback through their computer. To satisfy these demands, a number of add-on products have been developed. One such line of products is referred to in the industry as a sound card. These sound cards are circuit boards carrying a number of integrated circuits, many times including a wavetable synthesizer, wavetable memory and other associated circuitry which the user installs in expansion slots provided by the computer manufacturer. The expansion slots provide an interface to the system bus thereby enabling the host processor to access sound generation and control functions on the board under the control of application software. Typical sound cards also provide MIDI interfaces and game ports to accept inputs from MIDI instruments such as keyboard and joysticks for games.
  • One prior art sound card is that offered by Advanced Gravis and Forte under the name Ultrasound. This sound card is an expansion slot embodiment which incorporates into one chip (the "GF-1") a wavetable synthesizer, MIDI and game interfaces, DMA control and Adlib Sound Blaster compatibility logic. In addition to this ASIC, the Ultrasound card includes on-board DRAM (1 megabyte) for wavetable data; an address decoding chip; separate analog circuitry for interfacing with analog inputs and outputs; a separate programmable ISA bus interface chip; an interrupt PAL chip; and a separate digital-to-analog/analog-to-digital converter chip. See U.S. patent application serial No. 072,838, entitled "Wave Table Synthesizer," by Travers, et al., which is incorporated herein by reference.
  • On-board sound card memory typically has a size of between one-half to four megabytes and stores all the wavetable data used to synthesize music. At a cost of about $25.00 per megabyte, sound card memory cost is a significant factor in the overall cost of the sound card. Therefore, if PC system memory could be used to supply the wavetable data, thereby eliminating or reducing the need for sound card memory, sound cards would be less expensive.
  • Utilizing PC system memory to store wavetable data, however, raises some concerns. One concern is that available PC system memory is limited and cannot be spared for wavetable data. However, this should be less of a concern in future state-of-the-art PCs which are expected to contain larger system memories and should have space available for wavetable data. Another concern with using system memory is the numerous accesses to memory that are required by prior art synthesizers. For example, prior art wavetable synthesizers which can synthesize thirty-two independent voices (i.e., instrument sounds) must access memory thirty-two times every 22.7 microseconds to retrieve the required data samples. If this number of accesses was made to system memory, an unacceptably high percentage of the system bus bandwidth would be used for synthesizer operations, and thus less of the bus bandwidth could be used for other PC operations.
  • A further concern is that the synthesizer might process wavetable data faster than it receives it from system memory (i.e., faster than the system's maximum bus latency). Such a situation would be unacceptable since the processed data would have gaps, and undesirable pops would occur in the synthesized music as it is played.
  • Therefore, there is a need for a PC audio system which synthesizes music from wavetable data supplied by system memory, but does not utilize an unacceptable percentage of bus bandwidth. Furthermore, there is a need for a PC audio system which obtains data from system memory at least as fast as it processes data (i.e., the maximum bus latency is less than or equal to the time the PC audio system takes to process the data it has already buffered).
  • Accordingly, the present invention provides a method of providing a frequency compensated version of a first patch of wavetable data having a first sample frequency and stored in a first location of a memory, wherein said frequency compensated patch is stored in said memory and either said first patch or said frequency compensated patch is accessed from said memory by a digital wavetable audio synthesizer and used to generate digital audio signals having a second sample frequency which is higher than said first sample frequency, said method comprising the steps of:
  • (a) accessing said first patch of wavetable data from said first location of said memory;
  • (b) deriving, from said first patch of wavetable data, a patch of wavetable data which has a third sample frequency greater than said first sample frequency, wherein said derived patch of wavetable data comprises said frequency compensated patch; and
  • (c) storing said frequency compensated patch in a second location of said memory for use by said digital audio synthesizer in generating digital audio signals having said second sample frequency.
  • The present invention will be described with reference to a PC audio circuit which is designed to interface with and provide audio enhancement to a host personal computer of the type including a central processor, system memory and system bus. The PC audio circuit includes a cache memory that is of a significantly reduced size and cost and can only store portions of the total wavetable data at a time. Instead, all the wavetable data is stored in system memory of the host PC and transferred in portions to the cache memory, as needed by the PC audio circuit. The PC audio circuit processes the data and generates digital audio signals, such as music or sound effects. Because the cache memory is of reduced size and cost, the PC audio circuit has a lower overall cost than prior art systems.
  • Unlike prior art PC audio systems, the PC audio circuit processes several frames of data samples for a voice before processing the next designated voice. Thus, several wavetable data samples for a given voice can be retrieved from system memory at one time and made available in the cache memory, thereby reducing the total number of accesses to memory required and the percentage use of system bus bandwidth. Processing the data samples in this manner also allows for certain parallel processing operations. For example, while a plurality of data samples are being processed for active voices, other groups of data samples can be retrieved from system memory and made available for processing in the cache memory. This ensures a continuous supply of data and reduces concerns about the maximum allowable system bus access latency.
  • Since the PC audio circuit retrieves several wavetable data samples at once, it is preferable that a voice's data samples be organized together in a block in system memory. Thus, if a consecutive series of data samples are requested, they can be accessed using the system memory's page mode which will increment through the data samples in the block. Preferably, the bus between system memory and the PC audio circuit is a PCI bus, thereby enabling data accessed through the page mode to be transmitted to the PC audio circuit in burst mode.
  • In the preferred embodiment of the present invention, the PC audio circuit includes a PCI bus interface block, an internal address data bus, digital signal processor, output control state machine, internal bus arbiter, and cache memory. The PC audio circuit can be formed on a monolithic integrated circuit, either with the cache memory included on the integrated circuit or with the cache memory external to it. Data in the system memory is transmitted over the PCI bus, through the PCI interface block, over the internal bus, and into the cache memory.
  • The digital signal processor (DSP) performs computations and other processing to translate the data samples in the cache memory into digital audio signals suitable for conversion into desired analog audio signals. Preferably, the DSP can generate up to 32 independent digital audio signals or voices at a 44.1 KHz frame rate.
  • The digital audio signals generated for each voice by the DSP are accumulated in the cache memory, or can be accumulated in a separate cache memory, until they are ready to be output to an external digital-to-analog converter (DAC). The output control state machine (OCSM) controls the transmission of the accumulated data from the cache out to the external DAC at a sample rate of 44.1 KHz. The internal bus arbiter (IBA) is responsible for directing traffic between the various blocks that will access the internal bus, including the OCSM, the cache, the PCI interface block, and the DSP. The internal bus operates at 33 MHz, along with most of the logic, from a clock that is provided as part of the PCI standard.
  • The cache preferably is a low-cost SRAM having a capacity of about 8 to 32 kilobytes. The available memory in the cache can be assigned to data sample storage, accumulator storage, and general storage for the DSP. Data samples can be stored in data queues A and B, while the digital audio signals generated by the DSP can be stored in accumulator queues A and B. In a suitable embodiment, data queues A and B each store up to 64 16-bit data samples for each of 32 voices, while accumulator queues A and B each accumulate the generated data samples for up to 32 voices. The generated data samples are accumulated together in accumulator queue A or B as one set of 64 16-bit data samples.
  • The PCI interface block detects when there is a need to update the cache with data samples and initiates bus master requests. The addresses in system memory from which the data samples are to be retrieved are sent from the PCI interface block to the PCI address bus. Under the control of the PCI interface block, data samples retrieved from system memory are transmitted on the internal data bus to the cache.
  • At start-up of the preferred PC audio circuit, 128 data samples are loaded into the cache (64 data samples in each of data queues A and B) for each active voice. Once data queues A and B are loaded with data, the DSP processes the data samples in one of the data queues, for the first active voice. The other data queue is presently inactive. Then, the DSP processes the data samples for the next designated active voice. As the DSP processes these data samples, the data samples just generated by the DSP are accumulated in one of the accumulator queues. This process continues until all active voices have been processed, and then the accumulator queues toggle and the other accumulator queue will accumulate generated data samples while the accumulated data samples in the first accumulator queue can be output to an external DAC.
  • Once the data samples for each active voice in the data queue are processed, the PCI interface block sends requests on the PCI bus for additional data samples from system memory. The data samples retrieved from system memory are stored in the first data queue, thereby writing over the data samples just processed. While these data samples are being retrieved, the DSP processes the data samples in the other queue. Then, the data queues toggle, and the process continues, allowing up to 64 data samples to be processed at a time.
  • If the DSP processes the data samples at the same frequency as the sampling frequency used during analog-to-digital conversion (recording) of the original audio signal, then when the audio signals generated by the DSP are converted to analog and played, the resulting audio signal will sound the same (i.e., have the same frequency) as the original audio signal used to create the data samples. When the frequency of the audio signal being played is the same as the recording frequency, its frequency ratio (Fc) equals 1. If Fc > 1, then the generated audio signals will have a higher pitch than the signal recorded. If Fc = 1 for each of the active voices, then the maximum allowable PCI bus latency equals the time it takes to process 64 frames of data samples at the 44.1 KHz frame rate. However, if Fc > 1 for one or more active voices, the maximum allowable PCI bus latency is reduced because the DSP processes more than one data sample per frame per voice. For Fc greater than about 2.0, the reduction in the maximum allowable bus latency may become a problem.
  • The latency problem for Fc > 2 can be avoided by having the PC audio circuit retrieve only the data samples which will be processed and not the data samples which will be skipped by the DSP. Thus, all the data samples retrieved and stored in a data queue will be processed. This feature is implemented by providing means in the PCI interface block for accessing the Fc values for the active voices, and then calculating the next system memory address for retrieving data samples for a given voice based on the current system memory address and the Fc value. Retrieving only select samples for each active voice when Fc > 1 reduces the available PCI bandwidth since the burst mode cannot be used for transmitting the data samples. Even without the burst mode, the PC audio circuit's percentage usage of the bandwidth may still be acceptable, but it is higher and therefore less desirable.
  • In the preferred embodiment of the present invention, the PC audio system includes driver software which facilitates the creation of frequency compensated files or patches of wavetable data which are stored in system memory and can be transmitted to cache memory in burst mode, thereby reducing the PCI bus bandwidth requirements. The frequency compensated files or patches contain only the data samples which will be actually processed by the DSP for a voice having Fc > 2. For example, for an active voice having Fc = 4, the DSP only needs to process every fourth data sample in the patch (the "original patch") of wavetable data associated with this active voice. The driver software facilitates the creation of a frequency compensated version of the original patch, containing only every fourth sample. This frequency compensated file or patch is stored in system memory and can be transmitted in burst mode to the PC audio circuit for processing by the DSP.
  • A suitable PC audio system includes a PC audio circuit, of the type described above, driver software, and a MIDI or a comparable file. The MIDI file contains parameters that define the song or other audio signals to be generated by the PC audio circuit. The driver software performs the function of interpreting the parameters contained in the file and programming the PC audio circuit to generate the desired audio signals from wavetable data in system memory. As discussed below, the driver software also contains instructions which control the function of deriving frequency compensated patches for high Fc voices.
  • The system CPU determines for a given voice the ratio of the desired frequency for the voice to the recording frequency of the data in system memory associated with the voice. For Fc > 2.0, the CPU derives a frequency compensated patch as described below.
  • A frequency compensated patch can be derived a number of ways. One way, which requires the least CPU processing, is to copy or transpose a fraction of the wavetable data samples from the original patch for the voice into a new file or patch stored in system memory. The fraction of data samples transposed to the new patch is based on the Fc value calculated by the system CPU. The frequency compensated patch has a frequency that is higher than the frequency of the original patch. For example, if every fourth data sample is copied from the original patch to create the frequency compensated patch, the frequency compensated patch has a frequency which is four times the frequency of the original patch; the frequency compensated patch has an effective frequency (Feff) equal to four.
  • Since bus latency problems occur when the PC audio circuit generates audio signals at more than twice the recording frequency of a patch of data samples, the goal when deriving frequency compensated patches is to provide an effective patch frequency which is high enough that the PC audio circuit does not have to more than double the patch's frequency to generate the desired audio signals.
  • Another way of deriving frequency compensated patches is to digitally filter the original patch of wavetable data samples. Digital filtering requires more CPU processing but is preferred over the above technique since it removes the high frequency component of the wavetable data, such that the generated digital audio signals have less noise. One example of a digital filtering technique is to take the average of every nth sample. Another example, which requires even more CPU processing power, is to calculate the average of the moving average.
  • The system CPU can be directed by the driver software to either: (1) derive all of the required frequency compensated patches just prior to the PC audio circuit's processing of the MIDI file; or (2) derive each patch as the PC audio circuit processes through the file.
  • When the PC audio circuit processes a frequency compensated patch, adjustments must be made to account for the higher frequency of the patch. The driver software programs the PC audio circuit to make these adjustments. Thus, if the PC audio circuit is originally programmed to generate digital audio signals at Fc = 8, but the PC audio circuit processes a frequency compensated patch with Feff = 4, the PC audio circuit is then programmed to divide the frequency ratio for processing the data by four such that the data is processed with Fc = 2. Since it is easier to divide by a power of two in digital circuitry, the frequency compensated files preferably should have an effective frequency which is a power of two.
  • Brief Description of the Drawings
  • A better understanding of the present invention can be obtained when the following detailed description of the preferred and alternative embodiments is considered in conjunction with the following drawings, in which:
  • Fig. 1 is a block diagram of the PC audio circuit of the present invention as interfaced with the system bus of a host computer;
  • Fig. 2 depicts how memory can be assigned in the cache memory of the present invention;
  • Fig. 3 is a block diagram of the PCI bus interface block of the present invention as interfaced with system and internal buses;
  • Fig. 4 is a block diagram of an output control state machine of the present invention as interfaced with internal buses; and
  • Fig. 5 is a block diagram of a PC audio system which provides frequency compensated wavetable data in accordance with the present invention.
  • Detailed Description I. PC AUDIO CIRCUIT OVERVIEW
  • The following description sets forth the preferred and alternative embodiments of a PC audio circuit which can be formed on a monolithic integrated circuit. The PC audio circuit is designed to interface with and provide audio enhancement to a host personal computer of the type including a central processor, system memory and system bus. The fundamental difference between the PC audio circuit of the present invention and prior art PC audio circuits is that the local memory is of a significantly reduced size (e.g., 8-32 kilobytes) and can only store portions of the total wavetable data at a time. Instead, all the wavetable data (e.g., 1-4 megabytes) is stored in system memory of the host PC and transferred in portions to the PC audio circuit's local memory, also known as a cache memory, as needed by the PC audio circuit. The PC audio circuit uses the data to generate digital audio signals such as music or sound effects.
  • As discussed in the Background of the Invention, if system memory is utilized to store wavetable data, thereby reducing the size of the local memory, the overall cost of the PC audio circuit will be reduced. However, the use of system memory raises concerns that: (i) an unacceptable percentage of system bus bandwidth will be used; and (ii) the PC audio circuit will process wavetable data faster than the host computer's maximum bus latency. The PC audio circuit of the present invention is designed to alleviate these concerns.
  • The typical frame rate for audio is 44.1 KHz. At this frame rate, each frame is approximately 22.7 microseconds. Thus, if a prior art PC audio circuit generates 32 voices during a frame, 32 data accesses must be made to memory during this short time period. This is not a problem if the data accesses are to local memory. If this number of accesses were made to system memory, however, bus bandwidth usage and bus latency would become concerns.
  • Unlike prior art systems, the PC audio circuit of the present invention processes several frames of data samples for a voice before processing the next designated voice. Thus, several wavetable data samples for a given voice can be retrieved from system memory at one time and made available in the cache memory, thereby reducing the total number of accesses to memory required and the percentage use of bus bandwidth. Processing the data samples in this manner also allows for certain parallel processing operations. For example, while a plurality of data samples are being processed for active voices, other groups of data samples can be retrieved from system memory and made available for processing in the PC audio circuit's cache memory. This ensures a continuous supply of data and reduces concerns about the bus access latency.
  • Since the PC audio circuit of the present invention retrieves several wavetable data samples at once, it is preferable that a voice's data samples be organized together in a block in the system memory. Thus, if a consecutive series of data samples are requested, they can be accessed using the system memory's page mode which will increment through the data samples in the block. If the bus between system memory and the PC audio circuit is a PCI bus (i.e., a higher performance bus), data accessed through the page mode can be transmitted to the PC audio circuit in burst mode (i.e., at a faster rate). Use of the burst mode decreases the maximum bus latency and the percentage of bandwidth usage.
  • II. PC AUDIO CIRCUIT ARCHITECTURE
  • Figure 1 illustrates the preferred architecture of the PC audio circuit. As illustrated, the PC audio circuit 10 includes a PCI bus interface block 12, internal address data bus 14, digital signal processor 16, output control state machine 18, internal bus arbiter 20, and cache memory 22. Data is passed from the PCI bus 24, through PCI interface block 12, over the internal bus 14, and into the cache 22. The PC audio circuit 10, including cache 22, can be formed on a monolithic integrated circuit. The dashed box in Figure 1 represents the perimeter of a preferred embodiment of such an integrated circuit. Alternatively, the cache 22 may be external to the circuit.
  • The digital signal processor (DSP) 16 operates on the data similarly to the wavetable synthesizer DSP disclosed in U.S. patent application Serial No. 08/334,461, by Norris, et al., which is incorporated herein by reference. In other words, the DSP 16 of the present invention performs computations and other processing to translate raw wavetable data into digital audio signals suitable for conversion into the desired analog audio signals. The DSP 16 operates from instructions stored in ROM code 26 and preferably can generate up to 32 independent digital audio signals or voices at a 44.1 KHz frame rate. Unlike the wavetable synthesizer disclosed in the above-referenced patent application, however, the DSP 16 of the present invention processes several frames of wavetable data samples from voice to voice, rather than one data sample per voice per frame. The implementation details for DSP 16 are within the level of skill possessed by those of ordinary skill in the art.
  • The digital audio signals generated for each voice by DSP 16 are accumulated in cache 22, or can be accumulated in a separate cache memory, until they are ready to be output on port 28 to an external audio digital-to-analog converter (DAC). The output control state machine (OCSM) 18 is responsible for transmitting the accumulated data from the cache 22 out to the external DAC at the sample rate of 44.1 KHz. OCSM 18 utilizes its own 16.9344 MHz clock 30 to ensure synchronization with the sample rate. The internal bus arbiter (IBA) 20 is responsible for directing traffic between the various blocks that will access the internal bus 14, including the OCSM 18, the cache 22, the PCI interface block (PCI I/F block) 12 and the DSP 16. The internal bus 14 operates at 33 MHz, along with most of the logic, from a clock 32 that is provided as part of the PCI standard.
  • Internal Bus. The internal bus 14 has a 32-bit data bus and a 16-bit address bus. The address map for the internal bus is as follows:
    Address Range (hexadecimal) Data
    0000 through 1FFF 8Kx32 SRAM cache. This space provides the port into the cache memory.
    2000 through 201F 32 PCI I/F-block voice cache status registers. There are 32 of these registers, one to correspond to each of the 32 possible voices. Bit[0] of these registers is set (by the DSP) when that voice needs cache queue A updated with data from the PCI bus. (See discussion below regarding cache.) Bit[1] of these registers is set (by the DSP) when that voice needs cache queue B updated with data from the PCI bus. After the PCI interface block has successfully updated the data in the cache queue for a voice, then it clears the bit. Bit[2] is high to indicate that the voice is active and low to indicate that the voice is not active. When bit[2] goes low, the current system address register is reset to become the same as the system address start register (see discussion below).
    2020 through 203F 32 PCI I/F-block system address start registers. These are 32-bit pointers to each of the 32 voices' system memory start addresses for the sample. When processing of a voice starts, data is initially brought in starting from this location in system memory.
    2040 through 205F 32 PCI I/F-block system address end registers. These are 32-bit pointers to each of the 32 voices' system memory end addresses for the sample.
    2060 through 207F 32 PCI I/F-block system address loop point registers. These are 32-bit pointers to each of the 32 voices' system memory loop point addresses for the sample. As data for a voice is brought into the PC audio circuit from the PCI bus, if the address crosses over the end address, then it jumps back to the address specified by these registers.
    2080 through 209F 32 PCI I/F-block current system address registers. These registers store the current address in system memory from which the sample data for each of the 32 voices is accessed. They increment whenever a new 32-bit word is brought in from system memory to the cache. They jump from the system address end register location to the system address loop point location when the current address passes the end point.
    3000 OCSM sample count register. This is a 7-bit counter that increments from its starting point, zero, whenever accumulated data is output from the cache and sent to the external DAC. The DSP can observe bit 7 of this counter to determine when it is time to start accumulating the next group of 64 samples.
    3001 OCSM control register. When bit[0] of this register is cleared (by the DSP) then no data is passed out to the DAC. When it is high, then data is drawn from the accumulator cache and passed to the external DAC.
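  • The internal-bus register map above can be captured directly in driver or firmware code. The following C sketch is illustrative only; the macro and function names are not part of this specification, and it simply encodes the base addresses and the voice cache status bits listed in the table, assuming one 32-bit register per voice at consecutive internal-bus addresses.

      /* Illustrative internal-bus register map (addresses from the table above). */
      #include <stdint.h>

      #define IBUS_CACHE_BASE        0x0000u  /* 8Kx32 SRAM cache window              */
      #define IBUS_VOICE_STATUS_BASE 0x2000u  /* 32 voice cache status registers      */
      #define IBUS_SYS_ADDR_START    0x2020u  /* 32 system address start registers    */
      #define IBUS_SYS_ADDR_END      0x2040u  /* 32 system address end registers      */
      #define IBUS_SYS_ADDR_LOOP     0x2060u  /* 32 system address loop point regs    */
      #define IBUS_SYS_ADDR_CURRENT  0x2080u  /* 32 current system address registers  */
      #define IBUS_OCSM_SAMPLE_COUNT 0x3000u  /* OCSM sample count register           */
      #define IBUS_OCSM_CONTROL      0x3001u  /* OCSM control register                */

      /* Voice cache status register bits (bit assignments from the table above). */
      #define VSTAT_NEED_QUEUE_A (1u << 0)    /* set by DSP: queue A needs PCI update */
      #define VSTAT_NEED_QUEUE_B (1u << 1)    /* set by DSP: queue B needs PCI update */
      #define VSTAT_VOICE_ACTIVE (1u << 2)    /* high while the voice is active       */

      /* Internal-bus address of a per-voice register, one register per voice 0..31. */
      static inline uint16_t voice_reg_addr(uint16_t base, unsigned voice)
      {
          return (uint16_t)(base + voice);
      }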
  • PCI Bus. The PCI bus is assigned a block of 256 I/O (byte wide) addresses through standard PCI plug and play circuitry. These addresses are used by the system's central processor as follows:
    I/O Address Range (hexadecimal) Data
    80 through 81 Internal bus address register. The system CPU is allowed access to the internal bus by setting up the 16-bit address in these two ports and writing or reading through the data ports below.
    84 through 87 Internal data bus port. Access (read and write) to the internal bus is allowed via this port with the internal address specified by 80-81 above.
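  • As a concrete illustration of the indexed access just described, the following C sketch shows how host software might reach an internal-bus register through the address ports at offsets 80h-81h and the data port at offsets 84h-87h. The I/O base address and the pio_* helpers are hypothetical stand-ins for whatever port I/O mechanism the host platform provides; this is a sketch of the access pattern, not the actual driver.

      /* Illustrative host-CPU access to an internal-bus register through the
       * index/data port pair described above. */
      #include <stdint.h>

      #define SND_IO_BASE   0x0220u                 /* hypothetical PnP-assigned base */
      #define SND_IDX_PORT  (SND_IO_BASE + 0x80u)   /* internal bus address, 16-bit   */
      #define SND_DATA_PORT (SND_IO_BASE + 0x84u)   /* internal bus data, 32-bit      */

      extern void     pio_write16(uint16_t port, uint16_t value);  /* assumed helper */
      extern void     pio_write32(uint16_t port, uint32_t value);  /* assumed helper */
      extern uint32_t pio_read32(uint16_t port);                   /* assumed helper */

      static void ibus_write(uint16_t ibus_addr, uint32_t value)
      {
          pio_write16(SND_IDX_PORT, ibus_addr);     /* select internal-bus register */
          pio_write32(SND_DATA_PORT, value);        /* write through the data port  */
      }

      static uint32_t ibus_read(uint16_t ibus_addr)
      {
          pio_write16(SND_IDX_PORT, ibus_addr);
          return pio_read32(SND_DATA_PORT);
      }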
  • Wavetable Cache RAM. Cache 22 preferably is a low-cost SRAM having a capacity of about 8 to 32 kilobytes. The available memory in cache 22 can be assigned to data sample storage, accumulator storage, and general storage for DSP 16. Figure 2 depicts how the available memory can be suitably assigned between data sample storage and accumulator storage. Data samples can be stored in data queues "A" and "B", while the digital audio signals generated by DSP 16 (generated data samples) can be stored in accumulator queues "A" and "B." See Figure 2. Data queues A and B can each store up to 64 16-bit data samples for each of 32 voices. Accumulator queues A and B each can accumulate the generated data samples for up to 32 voices. The generated data samples are accumulated together in queue A or B as one set of 16-bit data samples. There can be up to 64 data samples in a set.
  • Data queues A and B together can store up to 8 kilobytes, while accumulator queues A and B together can store up to 256 bytes. Additional memory can be provided in cache 22 for general DSP storage.
  • Preferably, one of the two data queues for a voice is used to store data samples as they are retrieved from system memory while the other data queue supplies data samples to the DSP 16. Thus, if data queue A supplies data samples to DSP 16, then data queue B stores data samples retrieved from system memory. Data queue B is filled with the next set of data samples to be processed by DSP 16, and must be filled before the DSP completes the processing of the data samples in data queue A. Otherwise, there will be undesirable gaps in the generated digital audio signals. When all the data samples in data queue A have been processed, data queues A and B are toggled, and DSP 16 processes the data samples stored in data queue B, and data samples retrieved from system memory are stored in data queue A. This process continues as long as DSP 16 processes data samples.
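  • The double-buffering discipline described above can be summarized with a short sketch. The structure and helper below are illustrative assumptions rather than the actual implementation; they show only the bookkeeping by which one data queue is consumed while the other is refilled, and the two are toggled when a queue has been exhausted.

      /* Minimal sketch of the per-voice double-buffering discipline described
       * above: the DSP consumes one data queue while the PCI interface refills
       * the other, and the two are toggled when a queue has been consumed. */
      #include <stdint.h>
      #include <stdbool.h>

      #define SAMPLES_PER_QUEUE 64

      struct voice_queues {
          int16_t queue[2][SAMPLES_PER_QUEUE]; /* data queues A (0) and B (1)   */
          int     active;                      /* queue the DSP is reading from */
          bool    refill_pending[2];           /* queue awaiting PCI refill     */
      };

      /* Called when the DSP has processed the last needed sample in the active
       * queue: mark it for refill from system memory and switch to the other. */
      static void toggle_queues(struct voice_queues *v)
      {
          v->refill_pending[v->active] = true;   /* PCI I/F block refills this one */
          v->active ^= 1;                        /* DSP now consumes the other     */
      }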
  • Similarly, one of the accumulator queues is used to supply accumulated data samples to an external DAC, while the other accumulator queue accumulates data samples generated by DSP 16. Thus, if accumulator queue A supplies accumulated data samples, then accumulator queue B accumulates data samples. The generated data samples for all of the active voices must be accumulated in accumulator queue B before all the data samples in accumulator queue A have been transmitted to the external DAC. Otherwise, there will be gaps in the analog signal. When all the data samples in accumulator queue A have been transmitted to the external DAC, accumulator queues A and B toggle, and the data samples accumulated in accumulator queue B are transmitted to the external DAC, while generated data samples are accumulated in accumulator queue A. The overall operation of the present invention is further discussed below.
  • An address map for a wavetable cache suitable for the present invention is as follows:
    Address Range (hexadecimal) Data
    0000 through 001F Cache queue "A" for voice 0 of 31
    0020 through 003F Cache queue "B" for voice 0 of 31
    0040 through 005F Cache queue "A" for voice 1 of 31
    0060 through 007F Cache queue "B" for voice 1 of 31
    0080 through 07BF Caches for voices 2 through 30
    07C0 through 07DF Cache queue "A" for voice 31 of 31
    07E0 through 07FF Cache queue "B" for voice 31 of 31
    0800 through 083F Accumulator cache "A"
    0840 through 087F Accumulator cache "B"
    0880 through 1FFF General Storage for the DSP
  • The PCI Interface. Figure 3 sets forth a block diagram of PCI interface block 12. PCI interface block 12 includes PCI interface controller 34, buffers 36 and 38, internal bus address register 40, and PCI I/F block registers 42. As illustrated, PCI controller 34 is connected to the PCI address bus, buffers 36 and 38, internal bus address register 40, and PCI I/F block registers 42. Buffer 36 connects to the PCI data bus, PCI controller 34, internal bus address register 40, and the internal data bus, while buffer 38 connects to the PCI address bus, PCI controller 34, and PCI I/F block registers 42. Internal bus address register 40 connects to PCI controller 34, the internal address bus, the internal data bus, and buffer 36. Finally, PCI I/F block registers 42 are connected to buffer 38, PCI controller 34, and the internal data and address buses.
  • PCI I/F block registers 42 contain status and address information which indicates which voice requires additional data samples to be stored in cache and the address in system memory from which to obtain the data samples. A detailed description of these registers is set forth in the above address map for the internal bus. Internal bus address register 40 is used by the system central processor to access the PC audio circuit registers on the internal bus. For example, the central processor may need access to the PCI I/F block registers in order to write system memory addresses which indicate wavetable data storage locations. Internal bus address register 40 also stores the addresses of cache 22 at which wavetable data samples from system memory are stored. As set forth in the I/O address table above, the central processor accesses a register on the internal bus by writing its address, via buffer 36, into the internal bus address register 40. Read or write accesses to a particular register are provided through the port specified in the above table.
  • Based on the status information stored in PCI I/F block registers 42, PCI interface controller 34 detects when there is a need to update cache 22 with data samples and initiates bus master requests. Under the control of PCI interface controller 34, the addresses in system memory from which data samples are to be retrieved are sent from the PCI interface block registers 42, through buffer 38, to the PCI address bus. Retrieved data samples from system memory are sent on the PCI data bus to buffer 36. Under the control of PCI interface controller 34, data samples in buffer 36 are transmitted on the internal data bus to cache 22. The addresses in cache 22 for storing the data samples are contained in internal bus address register 40 and transmitted on the internal address bus. Preferably, PCI interface block 12 can request data samples for more than one active voice at a time.
  • PCI controller 34 calculates the cache addresses for storing the data samples by determining which voice is being updated, whether queue A or B is being updated, and which 32-bit word of the queue is being updated. PCI controller 34 contains thirty-two 5-bit counters, one for each voice, to determine which sample in the queue is the next to be updated by the PCI interface block 12. PCI interface block registers 42 include thirty-two 1-bit toggle registers, one for each voice, to indicate which queue each voice is currently using. These registers toggle each time a queue is filled by the PCI interface block 12. The PCI controller 34 stores the calculated cache addresses in the internal bus address register 40 and controls when they are output onto the internal address bus. The implementation details of PCI interface block 12 are within the level of skill possessed by those of ordinary skill in the art.
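  • The address calculation described above can be illustrated with a small C function. The layout constants follow the wavetable cache address map (0x40 internal-bus addresses per voice, queue A followed by queue B, 32 32-bit words per queue); the function name and argument encoding are assumptions made for the sketch.

      /* Illustrative computation of the cache write address for a retrieved
       * 32-bit word: the voice selects a 0x40-address region, the 1-bit toggle
       * selects queue A or B within it, and the 5-bit counter selects the word. */
      #include <stdint.h>

      static uint16_t cache_fill_addr(unsigned voice,     /* 0..31               */
                                      unsigned queue_b,   /* 1-bit queue toggle  */
                                      unsigned word_cnt)  /* 5-bit word counter  */
      {
          return (uint16_t)((voice & 0x1Fu) * 0x40u +
                            ((queue_b & 1u) ? 0x20u : 0x00u) +
                            (word_cnt & 0x1Fu));
      }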
  • The Output Control State Machine. Figure 4 sets forth a block diagram of OCSM 18. As illustrated, OCSM 18 includes a control block 46 with buffer, an address decode and control block 48, a 7-bit counter 50, a FIFO 52, and a 22.66 microsecond timer 54. As illustrated, control block 46 is connected to the internal address and data buses, the address decode and control block 48, the 7-bit counter 50, and FIFO 52. Address decode and control block 48 is connected to the internal address bus, control block 46, 7-bit counter 50, and timer 54. Seven-bit counter 50 is connected to address decode and control block 48, control block 46, and FIFO 52. The seven-bit counter is described in the above address map for the internal bus, and is referred to as the OCSM sample count register. FIFO 52 can store two data samples, one in a top location and the other in a bottom location, and is connected to control block 46, 7-bit counter 50, timer 54, and an external DAC. Timer 54 connects to address decode and control block 48, FIFO 52, and clock generator 30.
  • The DSP 16 enables OCSM 18 by writing to its control register. Once OCSM 18 is enabled, two data samples are transmitted, under the control of control block 46, on the internal data bus from an accumulator queue in cache 22, through the buffer in the control block 46, into FIFO 52. Every 22.66 microseconds, as indicated by timer 54, the FIFO 52 shifts the data sample in the bottom location to the top location, thereby enabling it to be output to the external DAC. The data sample previously in the top location is discarded. At the same time, another data sample is retrieved from cache 22 and stored in the bottom location of FIFO 52, and, under the control of address decode and control block 48, the 7-bit counter 50 is incremented. Address decode and control block 48 calculates the addresses of data samples to be retrieved from cache 22 from the 7-bit counter 50 and cache address information supplied on the internal address bus. These calculated addresses are sent to control block 46 where they are used to request specific data samples from cache 22. The implementation details of OCSM 18 are within the level of skill possessed by those of ordinary skill in the art. DSP 16 can observe bit 7 of counter 50 to determine when it is time to start accumulating the next group of data samples.
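  • The following C sketch models one timer tick of the OCSM as described above. It is a behavioral illustration, not register-transfer logic; the structure fields and the cache_read_sample and dac_output helpers are assumptions introduced for the sketch.

      /* Behavioral sketch of one OCSM timer tick: the two-entry FIFO shifts,
       * the sample now at the top is presented to the external DAC, the next
       * accumulated sample is fetched from the cache, and the 7-bit sample
       * counter increments. */
      #include <stdint.h>

      struct ocsm_state {
          int16_t  fifo_top, fifo_bottom; /* two-sample FIFO                     */
          uint8_t  sample_count;          /* 7-bit counter, wraps modulo 128     */
          uint16_t accum_base;            /* cache address of active acc. queue  */
      };

      extern int16_t cache_read_sample(uint16_t addr);  /* assumed cache access  */
      extern void    dac_output(int16_t sample);        /* assumed DAC interface */

      static void ocsm_tick(struct ocsm_state *s)       /* every ~22.7 us        */
      {
          s->fifo_top = s->fifo_bottom;                 /* shift the FIFO        */
          dac_output(s->fifo_top);                      /* send sample to DAC    */
          s->fifo_bottom =
              cache_read_sample((uint16_t)(s->accum_base + s->sample_count));
          s->sample_count = (uint8_t)((s->sample_count + 1u) & 0x7Fu);
      }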
  • The Internal Bus Arbiter. Internal bus arbiter 20 is a simple arbiter that has a fixed priority for bus requests from: (i) the DSP 16 (lowest priority); (ii) the PCI bus interface block 12 (middle priority); and (iii) the OCSM 18 (highest priority). Arbiter 20 grants bus access to the requesting device having highest priority, at which point that device is free to drive the address bus and either the READ or WRITE signal. If the access is a read, then the priority device will capture or use the data from the data bus; if the access is a write, then the priority device will drive the data bus. The implementation details of arbiter 20 are within the level of skill possessed by those of ordinary skill in the art.
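  • A fixed-priority arbiter of the kind described above reduces to a few lines of logic. The sketch below is a behavioral C illustration of the grant order only; the enumeration and function names are not from the specification.

      /* Minimal sketch of the fixed-priority grant: the OCSM outranks the PCI
       * interface block, which outranks the DSP. */
      enum ibus_master { GRANT_NONE, GRANT_DSP, GRANT_PCI_IF, GRANT_OCSM };

      static enum ibus_master arbitrate(int req_ocsm, int req_pci_if, int req_dsp)
      {
          if (req_ocsm)   return GRANT_OCSM;    /* highest priority */
          if (req_pci_if) return GRANT_PCI_IF;  /* middle priority  */
          if (req_dsp)    return GRANT_DSP;     /* lowest priority  */
          return GRANT_NONE;
      }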
  • III. PC AUDIO SYSTEM OPERATION
  • At start-up of the preferred embodiment of PC audio circuit 10, 128 data samples are loaded into the cache 22 (64 data samples in each of data queues A and B) for each active voice. Assuming 32 active voices, this makes the worst case for required memory:
       (32 voices) (128 samples/voice) (2 bytes/sample) = 8 kilobytes
    Additionally, cache 22 requires accumulator queues A and B having capacity of 64 data samples each. Thus, the additional required memory is:
       (2 queues) (64 samples/queue) (2 bytes/sample) = 256 bytes
    As discussed above, additional memory may be provided for general DSP operations.
  • Once data queues A and B are loaded with data, the DSP 16 processes the data samples in one of the data queues, for the first active voice (e.g., voice 0). The other data queue is presently inactive. Then, DSP 16 processes the data samples for the next designated active voice (e.g., voice 1). As DSP 16 processes these data samples, the data samples just generated by DSP 16 are accumulated in one of the accumulator queues. This process continues until all active voices have been processed, and then the accumulator queues toggle and the other accumulator queue will accumulate generated data samples while the accumulated data samples in the first accumulator queue can be output to the external DAC.
  • Also, once the data samples for each active voice in the first data queue are processed, the PCI interface block 12 sends requests on the PCI bus for additional data samples from system memory. The data samples retrieved from system memory are stored in the first data queue, thereby writing over the data samples just processed. While these additional data samples are being retrieved, the DSP 16 processes the data samples in the other queue. Then, the data queues toggle, and the process continues, allowing up to 64 data samples to be processed at a time.
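  • The start-up and steady-state flow described above can be summarized in a high-level sketch. The helpers below are hypothetical placeholders for the DSP, PCI interface block, and OCSM functions; the sketch shows only the order of operations for one group of up to 64 frames and the toggling of the data and accumulator queues.

      /* High-level sketch of the processing flow for one group of up to 64
       * frames: each active voice's current data queue is processed and
       * accumulated, a refill is requested for the queue just consumed, the
       * filled accumulator is handed off for output, and both queue pairs
       * toggle. */
      #include <stdbool.h>

      #define NUM_VOICES 32

      extern bool voice_active(int voice);
      extern void process_voice_queue(int voice, int data_q, int accum_q);
      extern void request_refill(int voice, int data_q);   /* PCI I/F block   */
      extern void output_accumulator(int accum_q);         /* handed to OCSM  */

      static void process_frame_group(int *data_q, int *accum_q)
      {
          for (int v = 0; v < NUM_VOICES; v++) {
              if (!voice_active(v))
                  continue;
              process_voice_queue(v, *data_q, *accum_q); /* up to 64 samples  */
              request_refill(v, *data_q);                /* overwritten later */
          }
          output_accumulator(*accum_q);     /* this group goes out to the DAC */
          *data_q  ^= 1;                    /* toggle data queues A/B         */
          *accum_q ^= 1;                    /* toggle accumulator queues A/B  */
      }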
  • If DSP 16 processes the data samples at the same frequency as the sampling frequency used during analog-to-digital conversion (recording) of the original audio signal, then when the audio signals generated by the DSP are converted to analog and played, the resulting audio signal will sound the same (i.e., have the same frequency) as the original audio signal used to create the data samples. When the frequency of the audio signal being played is the same as the recording frequency, its frequency ratio (Fc) equals 1. For example, if a middle C note (approximately 262 Hz) of a piano is recorded and Fc = 1, then the audio signal generated or played will be at the same frequency and sound the same as the signal recorded. If Fc > 1, the generated audio signal will have a higher pitch. For Fc = 4, the generated audio signal is two octaves higher than the signal recorded.
  • If Fc = 1 for each of the active voices, then the maximum allowable PCI bus latency equals the time it takes to process 64 frames of data samples at the 44.1 KHz frame rate.
       64 frames x 1/44100 seconds = 1.45 milliseconds
    However, if Fc > 1 for one or more active voices, the maximum allowable PCI bus latency is reduced because DSP 16 processes more than one data sample per frame per voice. In other words, the data samples in a data queue for a particular voice are consumed faster than if Fc = 1. For example, for Fc = 2, DSP 16 skips every other data sample in the data queue. For Fc greater than about 2.0, the reduction in the maximum allowable bus latency may become a problem.
  • The latency problem for Fc > 2 can be avoided by having the PC audio circuit 10 retrieve only the data samples which will be processed and not the data samples which will be skipped by DSP 16. Thus, all the data samples retrieved and stored in a data queue of cache 22 will be processed. This feature is implemented by providing means in PCI interface block 12 for accessing the Fc values for the active voices, and then calculating the next system memory address for retrieving data for a given voice based on the current system memory address and the Fc value. For example, if Fc = 4 for a given active voice, then: next system memory address = current address + 4. The implementation details for this feature are within the level of skill possessed by those of ordinary skill in the art.
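  • A minimal sketch of this address stepping, assuming an integer Fc and sample-granular addresses, is given below. It combines the select-sample retrieval with the start, end, and loop point registers defined in the internal-bus address map; the structure and field names are illustrative.

      /* Illustrative address stepping for select-sample retrieval: when Fc > 1
       * the next fetch address advances by Fc samples; crossing the end address
       * jumps back to the loop point. */
      #include <stdint.h>

      struct voice_fetch {
          uint32_t current;    /* current system address register */
          uint32_t end;        /* system address end register     */
          uint32_t loop;       /* system address loop point       */
          uint32_t step;       /* integer Fc, in samples          */
      };

      static uint32_t next_fetch_addr(struct voice_fetch *v)
      {
          v->current += v->step;          /* e.g. current + 4 when Fc = 4   */
          if (v->current > v->end)        /* crossed the sample's end point */
              v->current = v->loop;       /* jump back to the loop point    */
          return v->current;
      }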
  • Retrieving select data samples for each active voice when Fc > 1 reduces the available PCI bandwidth since the burst mode cannot be used for transmitting the data samples. However, the percentage usage of the bandwidth is generally expected to be acceptable even without use of the burst mode.
  • When the PCI bus is in burst mode, it typically operates at 60 ns per 32-bit transfer and the required bandwidth is:
       (32 voices / 2 voices per 32-bit transfer) x (44,100 frames/second) x (60 ns per transfer) = 4.2% (Note: 2 voices = 32 bits.)
    A bandwidth usage of 4.2% is very acceptable. If the PCI bus is not in burst mode, it typically operates four times slower, and the bandwidth usage is approximately 17%. A bandwidth usage of 17% may be acceptable but is less desirable and increases the risk that an excessive amount of the PCI bus bandwidth will be used.
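  • The bandwidth figures above can be checked with a short calculation. The following C program reproduces the arithmetic under the stated assumptions (32 voices, two 16-bit samples per 32-bit transfer, a 44.1 KHz frame rate, 60 ns per transfer in burst mode and four times that otherwise); it is a worked example, not part of the hardware.

      /* Worked check of the PCI bandwidth percentages quoted above. */
      #include <stdio.h>

      int main(void)
      {
          const double voices         = 32.0;
          const double frames_per_sec = 44100.0;
          const double words_per_sec  = voices * frames_per_sec / 2.0; /* 2 voices = 32 bits */
          const double burst_ns       = 60.0;
          const double nonburst_ns    = 4.0 * burst_ns;

          printf("burst:     %.1f%%\n", words_per_sec * burst_ns    * 1e-9 * 100.0); /* ~4.2%  */
          printf("non-burst: %.1f%%\n", words_per_sec * nonburst_ns * 1e-9 * 100.0); /* ~16.9% */
          return 0;
      }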
  • IV. REDUCING PCI BUS BANDWIDTH REQUIREMENTS BY PROVIDING FREQUENCY COMPENSATED WAVETABLE DATA
  • As discussed above, when Fc > 2 for one or more active voices, bus latency problems may arise because DSP 16 skips data samples and consumes data samples faster than if Fc = 1. The bus latency problems can be avoided by having the PC audio circuit 10 retrieve only the data samples which will be processed and not the data samples which will be skipped. However, when only select data samples are retrieved from system memory, the burst mode cannot be used and this increases the percentage of PCI bus bandwidth usage. Therefore, in the preferred embodiment of the present invention, the PC audio system includes driver software which facilitates the creation of frequency compensated files or patches of wavetable data which are stored in system memory and can be transmitted to cache memory 22 in burst mode, thereby reducing the PCI bus bandwidth requirements. The frequency compensated files or patches contain only the data samples which will be actually processed by DSP 16 for a voice having Fc > 2. For example, for an active voice having Fc = 4, the DSP 16 only needs to process every fourth data sample in the patch (the "original patch") of wavetable data associated with this active voice. The driver software facilitates the creation of a frequency compensated version of the original patch, containing only every fourth sample. This frequency compensated file or patch is stored in system memory and can be transmitted in burst mode to PC audio circuit 10 for processing by DSP 16.
  • Figure 5 sets forth a block diagram of a PC audio system which can provide frequency compensated patches as described above. The PC audio system includes a PC audio circuit 10, of the type described above, driver software 62, and a MIDI or comparable file 64. PC audio circuit 10 is connected to system memory 60 through PCI bus 24. File 64 contains parameters that define the song or other audio signals to be generated by PC audio circuit 10. The driver software 62 performs the function of interpreting the parameters contained in file 64 and programming PC audio circuit 10 to generate the desired audio signals from wavetable data in system memory. As discussed below, the driver software 62 also contains instructions which control the function of providing frequency compensated patches for high Fc voices.
  • Upon execution of the instructions in driver software 62, the system CPU performs the following steps:
  • Step 1:
    Retrieve the parameters for a note or voice in file 64.
    Step 2:
    Calculate the voice's desired Fc from these parameters and the frequency of wavetable data in system memory associated with the voice.
    Step 3:
    If Fc > 2.0, derive a frequency compensated patch from the patch of wavetable data stored in system memory which is associated with the voice. The frequency compensated patch is stored in system memory.
    In steps 1 and 2, the system CPU determines for a given voice in file 64 whether the voice is to be played at a frequency higher than the frequency of the wavetable data in system memory for that voice. The ratio of the desired frequency to the frequency of the data in system memory determines the Fc for that voice. The CPU then compares the Fc value to 2.0. For Fc > 2.0, the CPU derives a frequency compensated patch as described below. Steps 1-3 are repeated for each voice in file 64.
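  • The three driver steps above can be sketched as a per-voice routine. The types and helpers below are hypothetical and stand in for the MIDI interpretation, patch derivation, and synthesizer programming performed by driver software 62; the sketch also keeps the effective frequency a power of two so that the residual frequency ratio seen by the PC audio circuit stays at or below about 2.

      /* Sketch of the per-voice driver steps: compute Fc (step 2) and, when it
       * exceeds 2.0, derive and register a frequency compensated patch in place
       * of the original patch (step 3). */
      struct patch { const short *samples; long length; double sample_freq; };

      extern double desired_frequency(int voice);            /* from file 64 parameters     */
      extern const struct patch *original_patch(int voice);  /* wavetable in system memory  */
      extern struct patch *derive_compensated_patch(const struct patch *p, int decimation);
      extern void program_voice(int voice, const struct patch *p, double fc);

      static void prepare_voice(int voice)
      {
          const struct patch *orig = original_patch(voice);
          double fc = desired_frequency(voice) / orig->sample_freq;   /* step 2 */

          if (fc > 2.0) {                                             /* step 3 */
              int feff = 1;
              while (fc / feff > 2.0)         /* keep Feff a power of two       */
                  feff *= 2;                  /* residual ratio ends up <= 2.0  */
              program_voice(voice, derive_compensated_patch(orig, feff), fc / feff);
          } else {
              program_voice(voice, orig, fc);
          }
      }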
  • A frequency compensated patch can be derived a number of ways. One way, which requires the least CPU processing, is to copy or transpose a fraction of the wavetable data samples from the original patch for the voice into a new file or patch stored in system memory. The fraction of data samples transposed to the new patch is based on the Fc value calculated by the system CPU. The frequency compensated patch has a frequency that is higher than the frequency of the original patch. For example, if every fourth data sample is copied from the original patch to create the frequency compensated patch, the frequency compensated patch has a frequency which is four times the frequency of the original patch; the frequency compensated patch has an effective frequency (Feff) equal to four.
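  • A minimal sketch of the copy-every-nth-sample derivation is shown below, assuming 16-bit samples and an integer decimation factor. The function name and allocation strategy are illustrative; the resulting buffer corresponds to the frequency compensated patch stored in system memory.

      /* Derive a frequency compensated patch by copying every nth sample of the
       * original patch (e.g. every fourth sample for Feff = 4). */
      #include <stdint.h>
      #include <stdlib.h>

      static int16_t *decimate_patch(const int16_t *orig, size_t orig_len,
                                     unsigned factor, size_t *out_len)
      {
          size_t n = (orig_len + factor - 1) / factor;   /* samples kept */
          int16_t *out = malloc(n * sizeof *out);
          if (out == NULL)
              return NULL;
          for (size_t i = 0; i < n; i++)
              out[i] = orig[i * factor];    /* copy every "factor"-th sample */
          *out_len = n;
          return out;       /* stored in system memory as the new patch */
      }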
  • Since bus latency problems occur when the PC audio circuit 10 generates audio signals at more than twice the recording frequency of a patch of data samples (Fc > 2), the goal when deriving frequency compensated patches is to provide an effective patch frequency which is high enough that the PC audio circuit 10 does not have to more than double the patch's frequency to generate the desired audio signals. For example, if Fc = 8, the system CPU should copy every fourth data sample into a new file or patch. The new file or patch is "frequency compensated" and has an effective frequency of four (Feff = 4). Thus, the PC audio circuit 10 can process this frequency compensated patch at only twice the frequency to provide audio signals with Fc = 8. If the PC audio circuit 10 instead processed the original patch of wavetable data, the PC audio circuit would have to process the data at eight times the recording frequency (Fc = 8) to generate the desired audio signals, thereby creating bus latency problems.
  • Another way of deriving frequency compensated patches is to digitally filter the original patch of wavetable data samples. Digital filtering requires more CPU processing but is preferred over the above technique of transposing a fraction of the wavetable data samples in a patch. In the above technique, the high frequency component of a patch of wavetable data translates into noise in a frequency compensated patch. Digital filtering removes the high frequency component and thus results in a frequency compensated patch providing cleaner sound. One example of a digital filtering technique is to take the average of every nth sample. Thus, for a voice with Fc = 8, the CPU could take the average of every fourth sample to derive the frequency compensated patch. Another example of a digital filtering technique, which requires even more CPU processing power, is to calculate the average of the moving average.
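  • As an illustration of the filtering idea, the sketch below averages each block of n consecutive input samples instead of copying every nth sample, which suppresses the high frequency content that would otherwise appear as noise in the compensated patch. It is a simple block-average example under assumed 16-bit samples, not the exact filter contemplated by the specification.

      /* Derive a frequency compensated patch by averaging blocks of "factor"
       * consecutive samples rather than copying every "factor"-th sample. */
      #include <stdint.h>
      #include <stdlib.h>

      static int16_t *filter_and_decimate(const int16_t *orig, size_t orig_len,
                                          unsigned factor, size_t *out_len)
      {
          size_t n = orig_len / factor;
          int16_t *out = malloc(n * sizeof *out);
          if (out == NULL)
              return NULL;
          for (size_t i = 0; i < n; i++) {
              long sum = 0;
              for (unsigned k = 0; k < factor; k++)
                  sum += orig[i * factor + k];
              out[i] = (int16_t)(sum / (long)factor);   /* block average */
          }
          *out_len = n;
          return out;
      }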
  • The system CPU can be directed by the driver software 62 to either: (1) derive all of the required frequency compensated patches just prior to the PC audio circuit's 10 processing of MIDI file 64; or (2) derive each patch as the PC audio circuit 10 processes through file 64. The former technique requires more memory since all the frequency compensated patches are derived and stored in system memory prior to processing by the PC audio circuit 10. The former technique also causes a delay before the MIDI file 64 can be played. The latter technique is preferred if the CPU has enough power and excess time to perform the necessary calculations as file 64 is processed.
  • When PC audio circuit 10 processes a frequency compensated patch, adjustments must be made to account for the higher frequency of the patch. For example, if Fc = 8 for a voice and Feff = 4 for the frequency compensated patch, the PC audio circuit 10 only needs to process the patch at twice the frequency rather than eight times the frequency to generate digital audio signals having Fc = 8. The driver software 62 programs the PC audio circuit 10 to make these adjustments. Thus, if PC audio circuit 10 is originally programmed to generate digital audio signals at Fc = 8, but the PC audio circuit processes a frequency compensated patch with Feff = 4, the PC audio circuit is then programmed to divide the frequency ratio for processing the data by four such that the data is processed with Fc = 2. Since it is easier to divide by a power of two in digital circuitry, the frequency compensated files preferably should have an effective frequency which is a power of two.
  • Although the above discussion focuses on the use of frequency compensated patches to reduce PCI bus bandwidth requirements, frequency compensated patches have application in PC audio systems which obtain wavetable data from local rather than system memory. As discussed above, digitally filtering patches of wavetable data filters out high frequencies which translate into noise when the patch is played at a frequency higher than the recording frequency (e.g., Fc > 2). Wavetable data, stored in a local memory and to be used by a wavetable synthesizer to generate digital audio signals with a high frequency ratio, can be preprocessed, through the above digital filtering techniques, such that the synthesizer generates clean digital audio signals.
  • The present invention, therefore, is well adapted to carry out the objects and attain the ends and advantages mentioned herein as well as other ends and advantages made apparent from the disclosure. While preferred embodiments of the invention have been described for the purpose of disclosure, numerous changes and modifications to those embodiments described herein will be readily apparent to those skilled in the art and are encompassed within the scope of the following claims.

Claims (6)

  1. A method of providing a frequency compensated version of a first patch of wavetable data having a first sample frequency and stored in a first location of a memory, wherein said frequency compensated patch is stored in said memory and either said first patch or said frequency compensated patch is accessed from said memory by a digital wavetable audio synthesizer and used to generate digital audio signals having a second sample frequency which is higher than said first sample frequency, said method comprising the steps of:
    (a) accessing said first patch of wavetable data from said first location of said memory;
    (b) deriving, from said first patch of wavetable data, a patch of wavetable data which has a third sample frequency greater than said first sample frequency, wherein said derived patch of wavetable data comprises said frequency compensated patch; and
    (c) storing said frequency compensated patch in a second location of said memory for use by said digital audio synthesizer in generating digital audio signals having said second sample frequency.
  2. A method as claimed in claim 1, wherein said frequency compensated patch comprises a digitally filtered version of said first patch of wavetable data.
  3. A method as claimed in claim 1, wherein said frequency compensated patch comprises a fraction of data samples from said first patch of wavetable data.
  4. A method as claimed in claim 1, 2 or 3, further comprising the step of calculating a ratio of said second sample frequency to said first sample frequency, and wherein said frequency compensated patch is derived only if said ratio is greater than a predetermined value.
  5. A method as claimed in claim 4, wherein said predetermined value is greater than about 2.0.
  6. A method as claimed in claim 4, wherein the ratio of said second sample frequency to said third sample frequency of said frequency compensated patch is less than or equal to about 2.0.
EP97907795A 1996-02-21 1997-02-21 Pc audio system with frequency compensated wavetable data Expired - Lifetime EP0882286B1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US60455896A 1996-02-21 1996-02-21
PCT/US1997/002811 WO1997031363A1 (en) 1996-02-21 1997-02-21 Pc audio system with frequency compensated wavetable data
US604558 2003-07-30

Publications (2)

Publication Number Publication Date
EP0882286A1 EP0882286A1 (en) 1998-12-09
EP0882286B1 true EP0882286B1 (en) 2000-06-21

Family

ID=24420092

Family Applications (1)

Application Number Title Priority Date Filing Date
EP97907795A Expired - Lifetime EP0882286B1 (en) 1996-02-21 1997-02-21 Pc audio system with frequency compensated wavetable data

Country Status (5)

Country Link
EP (1) EP0882286B1 (en)
JP (1) JP2000505566A (en)
KR (1) KR100384685B1 (en)
DE (1) DE69702336T2 (en)
WO (1) WO1997031363A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6366971B1 (en) 1998-01-09 2002-04-02 Yamaha Corporation Audio system for playback of waveform sample data
US6180864B1 (en) 1998-05-14 2001-01-30 Sony Computer Entertainment Inc. Tone generation device and method, and distribution medium
DE10219357B4 (en) 2002-04-30 2004-03-11 Advanced Micro Devices, Inc., Sunnyvale Improved data transfer in audio codec controllers

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6029794A (en) * 1983-07-29 1985-02-15 ヤマハ株式会社 Electronic musical instrument
JP2623942B2 (en) * 1990-09-05 1997-06-25 ヤマハ株式会社 Music signal generator
GB9103239D0 (en) * 1991-02-15 1991-04-03 Kemp Michael J Improvements relating to data storage techniques

Also Published As

Publication number Publication date
EP0882286A1 (en) 1998-12-09
DE69702336D1 (en) 2000-07-27
KR100384685B1 (en) 2003-08-14
KR19990087044A (en) 1999-12-15
DE69702336T2 (en) 2001-02-15
JP2000505566A (en) 2000-05-09
WO1997031363A1 (en) 1997-08-28


Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 19980805

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): DE FR GB IT

GRAG Despatch of communication of intention to grant

Free format text: ORIGINAL CODE: EPIDOS AGRA

GRAG Despatch of communication of intention to grant

Free format text: ORIGINAL CODE: EPIDOS AGRA

17Q First examination report despatched

Effective date: 19990927

GRAG Despatch of communication of intention to grant

Free format text: ORIGINAL CODE: EPIDOS AGRA

GRAG Despatch of communication of intention to grant

Free format text: ORIGINAL CODE: EPIDOS AGRA

GRAH Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOS IGRA

GRAH Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOS IGRA

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): DE FR GB IT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; WARNING: LAPSES OF ITALIAN PATENTS WITH EFFECTIVE DATE BEFORE 2007 MAY HAVE OCCURRED AT ANY TIME BEFORE 2007. THE CORRECT EFFECTIVE DATE MAY BE DIFFERENT FROM THE ONE RECORDED.

Effective date: 20000621

REF Corresponds to:

Ref document number: 69702336

Country of ref document: DE

Date of ref document: 20000727

ET Fr: translation filed
PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed
REG Reference to a national code

Ref country code: GB

Ref legal event code: IF02

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20070105

Year of fee payment: 11

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20070228

Year of fee payment: 11

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20070201

Year of fee payment: 11

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20080221

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST

Effective date: 20081031

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080902

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080229

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080221