DIGITAL LOUDSPEAKER SYSTEM

FIELD OF THE INVENTION

This invention relates to a device including an array of electro-acoustic transducers able to produce, for each of one or more input channels, an independently steerable and focusable beam of audible sound, at a level suitable for home entertainment or professional sound reinforcement applications.
BACKGROUND OF THE INVENTION

The commonly-owned published International Patent application No. WO-0123104 describes an array of transducers and their use to achieve a variety of effects. The application describes a method and apparatus for taking an input signal, replicating it a number of times and modifying each of the replicas before routing them to respective output transducers such that a desired sound field is created. This sound field may comprise a directed beam, focussed beam or a simulated origin.
SUMMARY OF THE INVENTION

The digital loudspeaker system of this invention is a digital electro-acoustic device able to accept as input digital data samples representing one or more channels of digitized audio sound material, and able to produce for each such input channel an independently steerable and focusable beam of audible sound, at a level suitable for home entertainment or professional sound reinforcement applications, including elements as described in the appended claims.
The number of beams of such a system preferably exceeds four to provide a full surround-sound system, including left, centre and right channels and one or more rear channels.
In a preferred embodiment, audio signals are filtered to compensate for the array behaviour and the full transfer function of the transducers that output the beams of sound.
In another preferred embodiment, the low-frequency content of the input channels is "stripped" out and combined into a non-steered channel. In another preferred embodiment, the input signals are upsampled in stages to a higher sample rate.
In another preferred embodiment, the driver stage and the transducers are directly coupled with no additional electronic low-pass filter.
In another preferred embodiment, the loudspeaker system is capable of converting beam-steering input parameters in real-time into delay times for the output signals.
These and other aspects of the invention will be apparent from the following detailed description of non-limitative examples and drawings.
BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 is a block diagram of a first variant of a digital loudspeaker system in accordance with the invention, with Fig. 1A showing the master signal processing system and Fig. 1B one of multiple slave circuits each driving up to 16 transducers; and Fig. 2 is a block diagram of a second variant of a digital loudspeaker system in accordance with the invention, with Fig. 2A showing the master signal processing system and Fig. 2B one of 26 identical slave circuits each driving 10 transducers.
DETAILED DESCRIPTION

In the following, the digital loudspeaker system is referred to as a digital Sound Projector (dSP).
In a first implementation of the dSP, as illustrated in Figs. 1A and 1B, common-format audio source material in Pulse Code Modulated (PCM) form is received by the dSP as either an optical or coaxial digital data stream 101 in the S/PDIF format. Other input digital data formats are possible as well. This input data may contain either a simple two-channel stereo pair, or a compressed and encoded multi-channel soundtrack such as Dolby Digital™ 5.1 or DTS™, or multiple discrete digital channels of audio information. Encoded and/or compressed multi-channel inputs 101 are first decoded 102 and/or decompressed using proprietary DSP devices and licensed firmware. An analogue-to-digital converter is also incorporated to allow connection to analogue input sources, which are immediately converted to a suitably sampled digital format. The resultant output is typically three, four or more pairs of channels.

The dSP comprises one or more multi-channel digital audio inputs and/or two or more single-channel digital audio inputs 101, and a digital electronic processing system with one or more internal system-clocks 103, 305, driving an array of more than five acoustic output-transducers, the whole being capable of producing two or more independently steerable beams of sound, the sound in each beam independently corresponding to the content of one channel only or a combination of two or more of the channels of the digital audio inputs, characterised in that the acoustic output transducers are each driven by linear drivers, such as a digital pulse width modulator stage.
These channel-pairs are each fed into a two-channel sample-rate-converter [SRC] 103 (alternatively each channel can be passed through a single-channel SRC) for re-synchronisation and re-sampling to an internal (or optionally, external) standard sample-rate clock [SSC] (typically about 48.8KHz or 97.6KHz) and bit-length (typically 24 bit), allowing the internal dSP system-clocks to be independent of the source data-clock. This sample rate conversion is important as it eliminates problems due to clock speed inaccuracy, clock drift, and clock incompatibility. Specifically, if the final power-output stages of the dSP are to be digital pulse-width-modulation [PWM] switched types for high efficiency, then there needs to be complete synchronisation between the PWM-clock and the digital data-clock feeding the PWM modulators. The SRCs provide this synchronisation, as well as isolation from the vagaries of any external data clocks. Finally, where two or more of the digital input channels have different data-clocks (e.g. because they come from separate digital microphone systems), then again the SRCs ensure that internally all these disparate signals are synchronised. The outputs of the SRCs are combined in a single high-speed serial signal comprising all six channels (for the case of stereo input, only two of these will contain valid data).
The dSP has a digital audio input signal that is resynchronised to an independent internal (or external) clock by means of a sample rate converter.
The dSP has two or more mutually synchronised or unsynchronised digital audio input signals that are re-synchronised to one and the same independent internal (or external) clock by means of one or more sample rate converters.
A different clock synchronisation strategy that may usefully be followed, as an alternative to the use of internal clock-signal generation such as the SRCs above, is to regenerate an internal master clock [IMC] by the use of a voltage-controlled-oscillator [VCO] and a phase-detector [PD] to form a phase-lock-loop [PLL], whereby the VCO frequency (possibly first divided by digital dividers) is compared in the PD with the frequency of the digital audio input signal sample clock (also possibly first divided by additional digital dividers), and the PD output voltage is applied to the control input of the VCO (optionally via a loop control filter), so as to bring the frequency and phase of the VCO into a locked state with respect to the input signal sample clock. Some or all of the other internal dSP clocks may then be derived directly from the master VCO clock output; in particular, the PWM master clock may be so derived, to ensure internal system stability and consistency. The internal clocks so derived may, however, be locked to only one of the possibly several external digital audio sample rate clocks. Thus this clock synchronisation method is less versatile than the previously described SRC method.
The dSP has one or more of the internal digital system-clocks that are synchronised to the frequency and phase of one of the digital audio input signals by means of a phase-lock- loop and voltage controlled oscillator, with one or more optional digital dividers to produce the required clock rates.
One or more (typically two or three) digital signal processor [DSP] units 106, 107, 108 are used to process the data. These may be, e.g., Texas Instruments TMS320C6701 DSPs running at 133MHz, and the DSPs either perform the majority of calculations in floating-point format for ease of coding, or in fixed-point format for maximum processing speed. Alternatively, especially where fixed-point calculations are being performed, the digital signal processing can be carried out in one or more Field Programmable Gate Array (FPGA) units. A further alternative is a mixture of DSPs and FPGAs. Some or all of the signal processing may alternatively be implemented with customised silicon in the form of an Application Specific Integrated Circuit (ASIC). At the point in the processing chain where the digital signals are converted to the voltage-, current- and power-levels necessary to properly drive the electro-acoustic output-transducers, there are cost advantages, and advantages of simplicity of silicon processing, in separating out the higher power stages into either discrete components (typically wherein the active components are field-effect transistors or bipolar transistors) or power-integrated circuits (as opposed to the common and very low-cost CMOS pure digital logic process devices most commonly used for signal processing). A microprocessor is also useful, though not essential, for system initialisation and setup, providing a user interface, code loading to DSPs and FPGAs, handling remote-control interfacing, and communications with external devices such as engineering and analysis computer systems. Thus an optimum implementation could be to perform all digital signal processing in DSPs or FPGAs (or a mixture of these), and to perform all power driving at the outputs with discrete devices or power-integrated-circuits, in the latter case having multiple output-power-channels integrated on the same piece of silicon for low cost and high density, and in either case having a microprocessor handle housekeeping tasks.
In the description that follows, a multiple-DSP plus multiple-FPGA implementation, with discrete-component PWM power-stages, is described.

The dSP is a system wherein the digital processing necessary to convert digital PCM audio input signals to electro-acoustic output-transducer drive-signals is principally implemented using: one or more digital signal processors; or one or more field programmable gate arrays; or one or more custom-designed silicon devices, being CMOS or power-integrated-circuits, or a combination of both (application specific integrated circuits); or a set of discrete electronic components where the active devices are bipolar transistors or field-effect transistors; or any combination of these devices; and in any of these cases with an optional microprocessor.
The dSP is a system wherein the digital processing necessary to convert digital PCM audio input signals to electro-acoustic output-transducer drive-signals, is principally implemented using: one to three digital signal processors and one to fifteen field programmable gate arrays and a set of discrete electronic components where the active devices are bipolar transistors or field-effect transistors.
A first DSP 106 performs filtering of the digital audio data input signals to compensate for the irregularities in the frequency response (i.e. transfer function) of the acoustic output-transducers used in the final stage of the dSP, including off-axis or non-polar components of the frequency response. Using this filtering, the array behaviour can be better controlled when sound beams are steered to off-axis angles. It also performs a simple low pass filter [LPF] function to remove any very high frequency components, preventing them from being aliased back by the sampling process. Generally, the compensation can also be used to compensate for other effects, such as room characteristics. The dSP is a system wherein digital filtering is performed to correct for imperfections in the transfer function of the acoustic output-transducers.
The digital signals are then four-times over-sampled, and preferably interpolated, by another DSP 106. Over-sampling by a factor other than four can of course be performed instead, and factors between two and at least five-hundred-and-twelve are possible and useful. Over-sampling factors in the range of four to thirty-two are however most practical in the dSP.
The dSP is a system wherein one or more of the digital audio data input channels is over-sampled and optionally interpolated.
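The over-sampling and interpolation step can be sketched as follows. This is an illustrative stand-in only: a triangular (linear-interpolation) kernel is used in place of the sharper low-pass interpolation filter a production dSP stage would employ, and the block-edge behaviour is simplified.

```python
def upsample(samples, factor):
    """Over-sample a block of samples by an integer factor.

    Zero-stuffing inserts factor-1 zeros between input samples; the
    triangular kernel then performs linear interpolation (a sketch of
    the real interpolation filter, not its actual coefficients).
    """
    # Zero-stuff: original samples separated by factor-1 zeros.
    stuffed = []
    for s in samples:
        stuffed.append(float(s))
        stuffed.extend([0.0] * (factor - 1))
    # Triangular (linear-interpolation) kernel, length 2*factor - 1.
    half = factor - 1
    kernel = [1.0 - abs(k) / factor for k in range(-half, half + 1)]
    out = []
    for n in range(len(stuffed)):
        acc = 0.0
        for j, h in enumerate(kernel):
            idx = n - (j - half)          # centre the kernel on sample n
            if 0 <= idx < len(stuffed):
                acc += h * stuffed[idx]
        out.append(acc)
    return out
```

The same routine applied twice, with factors 2 and 4, models the staged 2x-then-4x over-sampling described for the second implementation below.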
The number of separately processed channels may optionally, at this stage (preferably) or possibly at an earlier or later stage of processing, be reduced to, for example, five, by combining additively the (one or more) low-frequency-effects [LFE] channel with one or more of the other channels, for example the centre channel, in order to minimise the processing beyond this stage. However, if a separate sub-woofer is to be used with the system or if processing power is not an issue, then the six or more discrete channels may be maintained throughout the processing chain.
The dSP is a system wherein one or more digital input data channels representing low-frequency only signals are combined additively with one or more of the other channels and eliminated as separate channels from then on in the processing chain.
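The bass-stripping operation can be sketched as below. The one-pole low-pass filter and its coefficients are illustrative assumptions (the description does not specify the crossover filter's order); the 300Hz cut-off and ~48.8KHz sample rate are taken from the surrounding text.

```python
import math

def strip_bass(channels, fs=48800.0, fc=300.0):
    """Strip the low-frequency content from several input channels and
    combine it into one non-steered bass channel.

    A single one-pole low-pass at fc stands in for the dSP's crossover
    filters.  Returns (high_passed_channels, combined_bass_channel).
    """
    a = math.exp(-2.0 * math.pi * fc / fs)       # one-pole coefficient
    bass = [0.0] * len(channels[0])
    highs = []
    for ch in channels:
        lo, y = [], 0.0
        for x in ch:
            y = (1.0 - a) * x + a * y            # low-pass state update
            lo.append(y)
        highs.append([x - v for x, v in zip(ch, lo)])   # high = input - low
        bass = [b + v for b, v in zip(bass, lo)]        # sum lows of all channels
    return highs, bass
```

Because the high band is formed by subtracting the low band, each channel's high part plus its contribution to the bass channel reconstructs the original samples exactly.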
The separation of the low-frequency content can be applied to all channels using low-pass filtering in the signal processing stage. The thus separated low-frequency content of the signal channels, below for example 300Hz, can then advantageously be added to form a low-frequency channel. This low-frequency channel is then added to or directed into a non-steered channel for output, e.g. the centre channel or a sub-woofer output. It should be noted that the cut-off frequency of 300Hz is chosen by a subjective optimisation process, and other suitable cut-off values can be chosen anywhere within a frequency range of 100Hz to 600Hz.

A second DSP 107 functions as a noise-shaping quantizer [QNS] which performs quantization of the digital audio data independently for each channel, and applies quantizing-noise spectrum-shaping, to reduce the digital audio data word-length to, e.g., nine bits, at a sample rate of four (or perhaps two, eight, or even sixteen or more) times the SSC, e.g. ~195.2KHz or ~390.4KHz, whilst maintaining a high signal-to-noise-ratio [SNR] within the audible band (i.e. the signal frequency band from ~20Hz to ~20KHz). Here as below the "tilde" (~) sign indicates an approximate value. A useful implementation practice is to make the SSC an exact rational number fraction, I/J, where
I and J are mutually prime integers, of the DSP master-processing-clock speed, e.g. 133MHz x (16/5451) = 390,387.085Hz, which locks sample data rates throughout the system to the processing clocks. It is advantageous to make the digital PWM timing clock frequency also an exact rational number fraction of the DSP master-processing-clock speed. It is specifically advantageous to make the PWM clock frequency an exact integer multiple of the internal digital audio sample data rate, e.g. 512 times the sample rate for 9-bit PWM (because 2^9 = 512).

The reduction of the digital data word-length to, e.g., 9 bits, while simultaneously increasing the sample-rate (e.g. 4 times) is useful for several reasons:
(i) the next stage of digital processing in the dSP involves applying (many) multiple channels of digital delay, which requires memory elements for implementation; by reducing the word-width, these memory elements can be smaller and/or cheaper;
(ii) the increased sample-rate allows finer resolution of data-word delays; e.g. at 48KHz data-rate, the smallest delay increment available is 1 sample period, or ~21 microseconds, whereas at 195KHz data-rate, the smallest delay increment available (1 sample period) is ~5.1 microseconds. It is important to have sound-path-length compensation resolution (= time-delay resolution times speed-of-sound) fine compared to the acoustic output-transducer diameter. In 21 microseconds sound in air at NTP travels approximately 7mm, which is too coarse a resolution when using transducers as small as 10mm diameter;
(iii) it is easier to convert PCM data directly to digital PWM at practical clock-speeds when the word-length is small; e.g. 16-bit words at 48KHz data-rate require a PWM clock speed of 65536 x 48KHz ~ 3.15GHz (largely impractical), whereas 9-bit words at 195KHz data-rate require a PWM clock speed of 512 x 195KHz ~ 99.8MHz (quite practical);
(iv) because of the increased sample rate, there is an increased available signal bandwidth of up to half the sample rate (so, e.g., an available signal bandwidth of ~96KHz for a sample rate of ~195KHz). The quantization process (reduction in number of bits) effectively adds quantization noise to the digital data; by spectrally shaping the noise produced by the quantization process, it can be moved predominantly to the frequencies above the baseband signal (i.e. in our case above ~20KHz), in the region between the top of the baseband (>20KHz) and the available signal bandwidth (~96KHz). The effect is that nearly all of the original signal information is now carried in a digital data stream of, e.g., 9 bits width at a sample rate of, e.g., 195KHz, with very little loss in SNR.

The dSP is a system with a noise-shaping quantizer that reduces the digital audio data word-width whilst maintaining or nearly maintaining the signal-to-noise ratio, by confining or largely confining the additional quantization noise generated by the quantizer to a region of the frequency spectrum outside that of the spectrum of the input digital audio.

The dSP is a system wherein the internal digital audio data sample clock-rate is an exact integral fraction I/J (greater or less than one, where I & J are integers) of the signal processor logic clock rate.
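The word-length reduction with noise shaping can be sketched as a first-order error-feedback quantizer. First-order shaping is an assumption for illustration; the order and exact noise transfer function of the dSP's QNS are not specified in the text.

```python
def noise_shape_quantize(samples, in_bits=24, out_bits=9):
    """Reduce integer sample word-length with first-order error-feedback
    noise shaping: the rounding error of each sample is carried into the
    next, which pushes the quantization noise towards high frequencies
    while preserving the low-frequency (audible-band) content."""
    shift = in_bits - out_bits          # bits discarded, e.g. 24 -> 9
    err = 0
    out = []
    for x in samples:
        v = x + err                     # feed previous error forward
        q = (v >> shift) << shift       # truncate to the short word
        err = v - q                     # error to carry into next sample
        out.append(q >> shift)          # the out_bits-wide sample
    return out
```

Over a long run of samples the accumulated error is bounded by one quantization step, so the average of the 9-bit output (rescaled) tracks the average of the 24-bit input closely, which is the sense in which baseband SNR is preserved.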
The dSP is a system wherein the internal digital PWM timing clock-rate is an exact integral fraction I/J (greater or less than one, where I & J are integers) of the signal processor logic clock rate.
The dSP is a system wherein the internal digital PWM timing clock-rate is an exact integral multiple of the internal digital audio data sample clock-rate.
The dSP is a system wherein the internal digital PWM timing sample clock-rate is an exact integral fraction I/J (greater or less than one, where I & J are integers) of the digital audio data sample clock-rate.
The dSP is a system wherein the individual channels are digitally delayed in word-length reduced form.
The dSP is a system wherein the individual channels are digitally delayed in time increments smaller than the interval between successive digital-audio-input data samples.
The dSP is a system wherein pulse code modulated digital audio data samples are directly digitally converted into pulse width modulated form.
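The direct PCM-to-PWM conversion can be sketched as a counter-compare, which is why the PWM clock must run at 2^9 = 512 times the sample rate for 9-bit words. The function below is an illustrative model of one PWM frame, not the dSP's actual modulator logic.

```python
def pcm_to_pwm(sample, bits=9):
    """Directly convert one unsigned PCM sample into a digital PWM frame.

    A free-running counter spans the 2**bits-clock frame; the output is
    high while the counter is below the sample value, so the pulse width
    in clocks equals the sample value exactly."""
    frame_len = 1 << bits               # 512 clocks for 9-bit PWM
    return [1 if count < sample else 0 for count in range(frame_len)]
```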
The output from the QNS is distributed in parallel to eleven Xilinx XCV200 FPGAs 108 where a unique digital (time-quantised) delay is applied for each channel and for each acoustic output transducer 113. In a particular implementation of the dSP with 264 acoustic output transducers arranged in a triangular array of roughly rectangular extent with one axis of the array vertical (and of extent 22 vertical columns of 12 transducers each) and with every second output transducer in each vertical column of transducers connected electrically in series or in parallel with the transducer immediately below it, this results in one hundred and thirty two (132) different versions of each of the five channels, six hundred and sixty channels in total. A transducer diameter small enough to ensure approximately omnidirectional radiation from the transducer up to high audio frequencies (e.g. > 12KHz to 15KHz) is important if the dSP is to be able to steer beams of sound at small angles from the plane of the transducer array. Thus a transducer diameter of between 5mm and 30mm is optimum for whole audio-band coverage. A transducer-to-transducer spacing small compared with the shortest wavelengths of sound to be emitted by the dSP is desirable to minimise the generation of "spurious" sidelobes of acoustic radiation (i.e. beams of acoustic energy produced inadvertently and not emitted in the desired direction(s)). Practical considerations on possible transducer size dictate that transducer spacing in the range 5mm to 45mm is best. A triangular array layout is also best for high-areal-packing density of transducers in the array.
The dSP is a system wherein the acoustic output transducers are arranged in an area of roughly rectangular extent, with transducers arranged in a triangular or hexagonal manner to achieve a good coverage of the area. The transducer diameters and their mutual spacing may be in the range of 5mm to 80mm. The transducer diameter could be in the range 5mm to 30mm and transducer-to-transducer spacings in the range 5mm to 45mm, both preferably being in the range of 25mm to 45mm.

The dSP is a system wherein the acoustic output transducers are arranged in an array of roughly rectangular extent with successive vertical pairs of adjacent transducers electrically connected in series or parallel.
The five channels for each of the 132 different transducer-pairs are summed in the FPGAs producing a single signal for each of the 132 series-or-parallel-connected transducer-pairs; each of these 132 sum-signals are passed to a digital PWM generator. The dSP is a system wherein pulse code modulated digital audio data samples representing each of several digital audio input channels are each separately digitally delayed by a possibly unique amount for each of very many acoustic output transducers or output transducer groupings (e.g. pairs) and wherein the several different delayed digital audio data sample streams representing the several inputs for each transducer or transducer group are digitally summed in an FPGA before application to the PWM output driver for that transducer or transducer grouping.
Each PWM generator 108 drives a class-D power switch or output stage 112 which directly drives one transducer 113, or a series-or-parallel-connected pair of adjacent transducers. The supply voltage to the class-D power switches 112 can be digitally adjusted to control the output power level to the transducers. By controlling this supply voltage over a wide range, e.g. 10:1, the power to the transducer can be controlled over a much wider range: 100:1 for a 10:1 voltage range, or in general N^2:1 for an N:1 voltage range. Thus wide-ranging level control (or "volume" control) can be achieved with no reduction in digital word length, so no degradation of the signal due to further quantization (or loss of resolution) occurs. The supply voltage variation is performed by low-loss switching regulators 110 mounted on the same printed circuit
boards (PCBs) as the class-D power switches. There is one switching regulator for each class-D switch to minimise power supply line inter-modulation. To reduce cost, each switching regulator can be used to supply pairs, triplets, quads or other integer multiples of class-D power switches. The dSP is a system wherein switching regulators, themselves under the control of the dSP volume-adjustment system, are used to control the supply voltage to the class-D output power switches (that drive the acoustic output transducers) and thus control the acoustic output volume with no loss of digital resolution.
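The N^2:1 relation between supply-voltage range and power range follows from power scaling with the square of voltage into a fixed load; the small sketch below just makes the arithmetic explicit (the function name and dB framing are illustrative, not from the text).

```python
import math

def volume_range_db(supply_ratio):
    """Level-control range obtained by varying the class-D supply rail.

    Into a fixed (resistive) load, output power scales with the square of
    the supply voltage, so an N:1 voltage range yields an N**2:1 power
    range, i.e. 10*log10(N**2) = 20*log10(N) decibels of volume control,
    with no loss of digital resolution."""
    power_ratio = supply_ratio ** 2          # N:1 voltage -> N**2:1 power
    return 10.0 * math.log10(power_ratio)
```

For the 10:1 supply range cited in the text this gives a 100:1 power range, i.e. 20dB of volume control.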
The class-D power switches or output stages 112 directly drive the acoustic output transducers. In normal class-D power amplifier drives, i.e. the very commonly used so-called "class-AD" amplifiers, it is necessary to place an electronic low-pass-filter [LPF] (invariably, an analogue electronic LPF) between the class-D power stage and the transducer. This is because the common forms of magnetic transducer (and even more so, piezoelectric transducers) present a low load-impedance to the high-frequency PWM carrier frequencies present at high energy in class-AD amplifier outputs. E.g. a class-AD amplifier with zero baseband input signal continues to produce at its output a full amplitude (usually bipolar) 1:1 mark-space-ratio [MSR] output signal at the PWM switching frequency (in the present case this would be at ~50 or 100MHz), which if connected across a nominal 8 Ohm load would dissipate full available power in that load, whilst creating no useful acoustic output signal. The commonly used electronic LPF has a cut-off frequency above the highest wanted signal output frequency (e.g. > 20KHz) but well below the PWM switching frequency (e.g. ~50MHz), thus effectively blocking the PWM carrier and minimising the wasted power. Such LPFs have to transmit the full signal power to the electrical loads (e.g. the acoustic transducers) with as low power-loss as possible; usually these LPFs use a minimum of two power-inductors and two, or more usually, three capacitors; the LPFs are bulky and relatively expensive to build. In single-channel (or few-channel) amplifiers, such LPFs can be tolerated on cost grounds, and most importantly, in PWM amplifiers housed separately from their loads (e.g. conventional loudspeakers), which need to be connected by potentially long leads to their loads, such LPFs are in any case necessary for quite different reasons, viz. to prevent the high-frequency PWM carrier getting into the connecting leads, where it would most likely cause unwanted stray electromagnetic radiation [EMI] of relatively high amplitude. In the dSP, the acoustic transducers are connected directly to the physically adjacent PWM power switches by short leads and all are housed within the same enclosure, eliminating the problems of EMI. In the dSP, the PWM generators are of a type known as class-BD; these produce class-BD PWM signals which drive the output power switches, and these in turn drive the acoustic output transducers. Class-BD PWM output signals have the property that they return to zero between the full amplitude bipolar pulse outputs, and thus are tristate, not bistate like class-AD signals. Thus, when the digital input signal to a class-BD PWM system is zero, the class-BD power output state is zero, and not a full-power bipolar 1:1 MSR signal as is produced by class-AD PWM. Thus the class-BD PWM power switch delivers zero power to the load (the acoustic transducer) in this state: no LPF is required as there is no full-power PWM carrier signal to block. Thus in the dSP, by using an array of class-BD PWM amplifiers to drive directly an integral array of transducers, a great saving in cost, and in lost power, is achieved by eliminating the need for an array of power LPFs. Class-BD is rarely used in conventional audio amplifiers, firstly because it is more difficult to make a very high linearity class-BD amplifier than a similarly linear class-AD amplifier; and secondly because, for the reasons stated above, an LPF is generally required anyway for EMI considerations, thus negating the principal benefits of class-BD.
The dSP is a system wherein the PWM generators for each acoustic output transducer driver stage are of class-BD, i.e. return-to-zero form, rather than the more common class-AD non-return-to-zero form.
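The class-AD/class-BD distinction can be sketched as below for a single signed sample. The pulse placement is a simplification (real modulators also centre or dither the pulses within the frame), and the frame length for signed 9-bit words is an illustrative choice.

```python
def class_bd_frame(sample, bits=9):
    """One class-BD (return-to-zero, tristate) PWM frame for a signed sample.

    The output is +1 or -1 for |sample| clocks and 0 for the rest of the
    frame, so a zero input gives an all-zero frame: no carrier power is
    delivered to the transducer, unlike class-AD's full-power 1:1
    mark-space-ratio output at zero input, and no output low-pass filter
    is needed."""
    frame_len = 1 << (bits - 1)       # magnitude range of a signed word
    level = 1 if sample > 0 else -1
    width = min(abs(sample), frame_len)
    return [level] * width + [0] * (frame_len - width)
```

Summing a frame gives back the sample value, which is the sense in which the tristate pulse train carries the baseband signal while idling at zero output.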
The dSP is a system wherein each output digital power switch driving an
acoustic output transducer is coupled directly to that transducer with no intervening electronic low-pass-filter.
The dSP is a system wherein the electro-acoustical low-pass-filter response of the acoustic output transducers, alone, is used to minimise the emission of acoustical components significantly higher in frequency than the maximum signal frequency in the digital audio input channel.
The third DSP 108 in the system is used to calculate the required delay for each channel on each transducer to create the required steering effect. Given that the dSP is able to independently steer each of the output channels (one steered output channel for each input channel, typically 4 to 6), there are a large number of separate delay computations to be performed; this number is equal to the number of output channels times the number of transducers. As the dSP is also able to dynamically steer each beam in real-time, then the computations also need to be performed quickly. Once computed, the delay requirements are distributed to the FPGAs 108 (where the delays are actually applied to each of the streams of digital data samples) over the same parallel bus as the digital data samples themselves.
The dSP is a system wherein a digital signal processor is used to calculate in real-time the magnitude of the time delays needed in each of the input-channel/output- transducer combinations in order to correctly steer each of the separate multiple steerable output sound beams.
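For a simple steered (unfocused) beam, the per-transducer delay reduces to a path-length difference divided by the speed of sound; the sketch below shows that computation for one channel along one array axis. The function, the 343 m/s speed of sound, and the zero-offset convention are illustrative assumptions, not the dSP's actual firmware.

```python
import math

def steering_delays(positions_m, angle_deg, c=343.0):
    """Per-transducer delays (seconds) steering a beam angle_deg off the
    array normal, for transducers at the given positions (metres) along
    one array axis: delay = x*sin(angle)/c, offset so the smallest delay
    is zero.  Repeating this per channel gives the channels-times-
    transducers delay table the text describes."""
    s = math.sin(math.radians(angle_deg))
    raw = [x * s / c for x in positions_m]
    base = min(raw)
    return [t - base for t in raw]      # non-negative delays only
```

These delays must then be rounded to the system's delay quantum (one over-sampled sample period, ~5.1 microseconds at ~195KHz), which is why the over-sampled rate matters for steering resolution.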
System initialisation is under the control of an 8051-based micro-controller 103. Once initialised, the micro-controller is used to monitor and accept beam-direction and audio-volume-level adjustment commands from the (human) user via an input device, e.g. an infrared remote controller, display them on a system visual-display panel, and pass them to the time-delay-computation system 108, e.g. as described above implemented in the third DSP.
The dSP is a system wherein a processor monitors beam-direction control signals
from user inputs and directs them to a real-time beam-steering computational processor.
In an alternative implementation of the dSP with a modified architecture, as illustrated in Figs. 2A and 2B, common-format audio source material in Pulse Code Modulated (PCM) form is received by the dSP as either an optical or coaxial digital data stream 301 in the S/PDIF format as before, together with one or more analogue input signals (e.g. from microphones) which are first fed through analogue-to-digital converters. After decoding 302 and decompression where required, and clock synchronisation 304 by sample-rate-conversion 305 or PLL, as previously described, the digital audio PCM data has been converted to 8 channels of 24-bit words at an internally generated sample rate of 48.8KHz. The first DSP (or a pair of DSPs 306, 307 as shown in Fig. 2A) is used to run precision filters for enhanced frequency response equalisation. This processor also performs two-times oversampling and interpolation, creating 8 channels of 24-bit word output samples at 97.6KHz. The output of this DSP is fed to another DSP that performs anti-alias and tone control filtering on all eight channels, and a further four-times up-sampling and interpolation to an overall eight-times oversampled data rate. This DSP 307 also handles signal limiting, and digital volume-control is performed in this DSP too. An ARM microprocessor 303 generates timing delay data for each and every transducer, from real-time beam-steering settings sent by the user to the dSP via infrared remote control. The ARM core 303 also handles all system initialisation and external communications.
The dSP is a system wherein the digital audio data sample streams are over- sampled and interpolated to a higher data rate in more than one stage, preferably first in a 2x stage followed by a 4x stage.
The dSP is a system wherein the digital audio data sample streams are subjected to digital tone-control filtering.
The dSP is a system wherein the digital audio data sample streams are subjected to digital volume-control processing.
The dSP is a system wherein a remote control operated by a user provides realtime beam-steering instructions.
FPGA logic 308 controls high-speed static RAM devices 309 to produce the required delays applied to the digital audio data samples of each of the eight channels, with a discretely delayed version of each channel being produced for each and every one of the output transducers (260 in this implementation). Apodisation, or array aperture windowing (i.e. graded weighting factors are applied to the signals for each transducer, as a function of each transducer's distance from the centre of the array, to control beam shape), is applied separately in the FPGA 308 to each channel's delayed signal versions. Applying apodisation here allows different output sound beams to have differently tailored beam-shapes. These separately delayed and separately windowed digital sample streams, one for each of 8 channels and for each of 260 transducers, making 8 x 260 = 2080 delayed versions in total, are then summed in the FPGA for each transducer to create an individual 390kHz 24-bit signal for each of the 260 transducer elements. The apodisation or array aperture windowing may optionally be performed after the summing stage for all of the channels at once (instead of for each channel separately, prior to the summing stage) for simplicity, but in this case each sound beam output from the dSP will have the same window function, which may not be optimal. These two hundred and sixty 390kHz 24-bit signals are then each passed through a quantising/noise shaping circuit also in the FPGA 308 to reduce the data sample word lengths to 8 bits at 390kHz. Applying the quantisation after the delay and summation stages results in greater buffer memory requirements in the delay stage due to the longer word lengths to be buffered; however, this disadvantage is offset by the wider dynamic range made available by enabling the summation operation to be carried out on 24-bit words rather than, say, 8-bit words.
The transducer signals are finally grouped into blocks and sent off the FPGA chip 308 via very high speed serial signalling to driver circuit boards of Fig. 2B.
The dSP is a system wherein apodisation or aperture windowing is applied to the ensemble of delayed signals from each input channel prior to summing with signals representing other input channels, and destined for each of the transducers in the array. The dSP is a system wherein the set of delayed, windowed signals for each transducer from each input channel are summed prior to noise shaping and quantisation. The dSP is a system wherein apodisation or aperture windowing is applied to the ensemble of summed delayed signals representing all input channels, and destined for each of the transducers in the array.
The dSP is a system wherein quantisation and noise shaping are applied to the digital audio data samples representing all of the channels (i.e. after the channels have been summed) and after they have been separately delayed.
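The word-length reduction from 24 bits to 8 bits can be illustrated with a first-order error-feedback requantiser. This is a minimal sketch only: the dSP's actual noise-shaper order and coefficients are not given in the text, and a practical design at 390kHz would likely use a higher-order shaper; the first-order loop below merely shows the principle of feeding the quantisation error back so that, on average, the coarse output tracks the fine input.

```python
import numpy as np

def noise_shaped_requantise(x24, in_bits=24, out_bits=8):
    """First-order error-feedback requantiser (illustrative): reduce
    in_bits-wide integer samples to out_bits, pushing the quantisation
    error into the difference between successive samples so its energy
    sits at high frequencies, where the oversampled rate leaves room."""
    shift = in_bits - out_bits          # 16 bits of word-length reduction
    err = 0
    out = np.empty(len(x24), dtype=np.int64)
    for i, s in enumerate(x24):
        v = int(s) + err                # feed back previous quantisation error
        q = (v >> shift) << shift       # truncate to out_bits resolution
        err = v - q                     # error carried into the next sample
        out[i] = q >> shift             # the coarse code actually output
    return out
```

Over many samples the running average of the 8-bit output, rescaled, converges on the 24-bit input value, which is why the dynamic-range cost of the coarse output is far smaller than plain truncation would suggest.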
The apodisation or array aperture windowing function can alternatively be performed by suitably controlling the regulators 310 that drive the PWM output-power-switches 312. In this case it is necessary that the power supply rails 311 to the transducer driver circuits can be individually controlled. By setting the supply rails of different output-power-switches to different (DC) levels, the effective transducer gains can be varied independently. In this way any desired aperture windowing function can be imposed on the array. However, as with the digital aperture windowing performed after the channel summation operation, described earlier, this method of windowing necessarily applies the same window function to all channels and to all output sound beams, and so is less flexible.
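The supply-rail method amounts to simple per-transducer gain scaling. The sketch below is illustrative only: the rail voltage, the Hann window choice and the assumption that output amplitude scales linearly with the DC rail at fixed PWM duty are not specified in the text.

```python
import numpy as np

N_TRANSDUCERS = 260
V_MAX = 15.0                      # assumed full-scale rail voltage (hypothetical)

# Chosen window function; any desired aperture window could be used instead.
w = np.hanning(N_TRANSDUCERS)

# DC level each regulator 310 must set for its output-power-switch 312:
# with fixed PWM duty, the switch's output amplitude scales with its rail,
# so rail k = V_MAX * w[k] imposes the window w across the array.
rails = V_MAX * w

# Effective gain of transducer k relative to an unwindowed array -- note it
# is the same for every channel, hence the same window for every beam.
gain = rails / V_MAX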
The dSP is a system wherein apodisation or aperture windowing is applied by means of independently setting the supply rails to individual transducer output-power-switches to appropriate levels.
The driver circuit boards 312, which are preferably physically local to the transducers 313 they drive, provide a pulse-width-modulated class-BD output driver circuit for each of the transducers they control. The transducers are then directly connected to the outputs of the class-BD output driver circuits 312 without any intervening low-pass filter (LPF).
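In BD-style modulation of a full bridge, the two half-bridges are driven so that the differential output carries the signal while the common mode stays near 50% duty, which keeps differential switching energy low enough that the transducer's own mechanical response can serve as the reconstruction filter. The duty-cycle mapping below is a heavily simplified sketch of that idea, not the dSP's actual modulator, whose details the text does not give.

```python
def class_bd_duties(s):
    """Map a normalised sample s in [-1, 1] to the duty cycles of the two
    half-bridges of a BD-modulated full bridge (simplified sketch): the
    differential duty equals s, while the common mode stays at 50%."""
    if not -1.0 <= s <= 1.0:
        raise ValueError("sample out of range")
    da = 0.5 * (1.0 + s)   # duty of half-bridge A
    db = 0.5 * (1.0 - s)   # duty of half-bridge B
    return da, db
```

At s = 0 both half-bridges sit at 50% duty and the differential output averages to zero, which is the property that makes direct connection to the transducer, without an LC low-pass filter, workable.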
The dSP user-interface produces overlay graphics 314 for on-screen display of setup, status and control information on any suitably connected video display, e.g. a plasma screen. To this end the video signal from any connected audio-visual source (e.g. a DVD player) may be looped through the dSP en route to the display screen, where the dSP status and command information is then also overlaid on the programme video. If the process delay of the signal-processing operations from end to end of the dSP is sufficiently long (e.g. when the compensation filter running on the first two DSPs, whose length depends on the transducer linearity and the equalisation required, is long), then to avoid lip-sync problems an optional video frame store can be incorporated in the loop-through video path, to re-synchronise the displayed video with the output sound.
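The size of the frame store follows from rounding the total audio latency up to whole video frames. The numbers below are illustrative assumptions only (the text does not give the compensation-filter length, sample rate or frame rate): a hypothetical 4096-tap filter at 48kHz plus 20ms of other pipeline delay, displayed at 25 frames per second.

```python
import math

def frames_to_buffer(filter_taps, sample_rate_hz, extra_latency_s, fps):
    """Round the dSP's total audio latency up to whole video frames,
    giving the depth the loop-through frame store must provide."""
    filter_delay_s = filter_taps / sample_rate_hz  # FIR delay ~ N / fs
    return math.ceil((filter_delay_s + extra_latency_s) * fps)

# Assumed example: 4096 taps at 48 kHz (~85 ms) plus 20 ms, at 25 fps.
frames = frames_to_buffer(4096, 48000, 0.020, 25)  # -> 3 frames
```

Buffering the video by this whole number of frames slightly over-delays the picture, but by less than one frame period, which is well inside normal lip-sync tolerance.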
The dSP is a system wherein overlay graphics for on-screen display of setup, status and control information are produced on any suitably connected video display. The dSP is a system wherein a video frame store is incorporated in the video loop-through channel to restore sound and video synchronisation.