WO1994016538A1 - Appareil de manipulation de l'image sonore et procede pour ameliorer cette image sonore - Google Patents

Appareil de manipulation de l'image sonore et procede pour ameliorer cette image sonore Download PDF

Info

Publication number
WO1994016538A1
Authority
WO
WIPO (PCT)
Prior art keywords
signal
output
audio
input
signals
Prior art date
Application number
PCT/US1993/012688
Other languages
English (en)
Inventor
Stephen M. Desper
Original Assignee
Desper Products, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from PCT/US1992/011335 external-priority patent/WO1994016537A1/fr
Application filed by Desper Products, Inc. filed Critical Desper Products, Inc.
Priority to KR1019950702676A priority Critical patent/KR960700620A/ko
Priority to AT94907123T priority patent/ATE183050T1/de
Priority to AU60811/94A priority patent/AU6081194A/en
Priority to DE69325922T priority patent/DE69325922D1/de
Priority to JP6516471A priority patent/JPH08509104A/ja
Priority to EP94907123A priority patent/EP0677235B1/fr
Publication of WO1994016538A1 publication Critical patent/WO1994016538A1/fr

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S5/00 Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation
    • H04S5/02 Pseudo-stereo systems of the pseudo four-channel type, e.g. in which rear channel signals are derived from two-channel stereo signals
    • H04S3/00 Systems employing more than two channels, e.g. quadraphonic
    • H04S3/002 Non-adaptive circuits, e.g. manually adjustable or static, for enhancing the sound image or the spatial distribution
    • H04S3/02 Systems employing more than two channels of the matrix type, i.e. in which input signals are combined algebraically, e.g. after having been phase shifted with respect to each other

Definitions

  • This invention is directed to an automatic sound image enhancement method and apparatus wherein the electronic signal which corresponds to the audio signal is electronically treated by amplitude and phase control to produce a perception of enhancements to music and sounds.
  • the invention preferably operates on stereophonically recorded music (post production enhancement) or in recording and mixing stereophonic recordings (production).
  • the invention may also be used in connection with enhancing monophonic or monaural sound sources to synthesize a stereo-like effect or to locate such sources to positions beyond those normally found in the stereo sound stage.
  • Sound is vibration in an elastic medium, and acoustic energy is the additional energy in the medium produced by the sound. Sound in the medium is propagated by compression and rarefaction of the energy in the medium. The medium oscillates, but the sound travels. A single cycle is a complete single excursion of the medium, and the frequency is the number of cycles per unit time. Wavelength is the distance between wave peaks, and the amplitude of motion (related to energy) is the oscillatory displacement. In fluids, the unobstructed wave front expands spherically.
  • Hearing is the principal response of a human subject to sound.
  • the ear, its mechanism and nerves receive and transmit the hearing impulse to the brain which receives it, compares it to memory, analyzes it, and translates the impulse into a concept which evokes a mental response.
  • the final step in the process is called listening and takes place in the brain; the ear is only a receiver.
  • hearing is objective and listening is subjective.
  • Since the method and apparatus of this invention are for automatic stereophonic image enhancement for human listening, the listening process is described in terms of perceptions of hearing.
  • This patent describes the perceptions of human subjects. Because a subject has two ears, laterally spaced from each other, the sound at each eardrum is nearly always different.
  • Each ear sends a different signal to the brain, and the brain analyzes and compares both of the signals and extracts information from them, including information in determining the apparent position and size of the source, and acoustic space surrounding the listener.
  • the first sound heard from a source is the direct sound which comes by line-of-sight from the source.
  • the direct sound arrives unchanged and uncluttered, and lasts only as long as the source emits it.
  • the direct sound is received at the ear with a frequency response (tonal quality) which is relatively true to the sound produced by the source because it is subject only to losses in the fluid medium (air).
  • the important transient characteristics such as timbre, especially in the higher registers, are conveyed by direct sound.
  • the integral differences at each eardrum are found in time, amplitude and spectral differences.
  • the physical spacing of the ears causes one ear to hear after the other, except for sound originating from a source on the median plane between the ears.
  • the time delayed difference is a function of the direction from which the sound arrives, and the delay is up to about 0.8 millisecond.
  • the 0.8 millisecond time delay is about equal to the period of 1 cycle at 1,110 Hz.
  • the acoustic wavelength of arriving sounds becomes smaller than the ear-to-ear spacing, and the interaural time difference decreases in significance so that it is useful only below about 1,400 Hz to locate the direction of the sound.
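  The interaural timing described in the preceding points can be illustrated with a short numerical sketch. This is not part of the patent; the ear spacing, speed of sound, and simple path-length model are assumptions used only to show the order of magnitude of the delays involved.

      import math

      SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 C (assumed)
      EAR_SPACING = 0.22       # m, nominal ear-to-ear distance (assumed)

      def interaural_time_difference(azimuth_deg):
          """Approximate interaural time difference using a simple path-length
          model: delta_t = d * sin(theta) / c, with theta = 0 on the median plane."""
          theta = math.radians(azimuth_deg)
          return EAR_SPACING * math.sin(theta) / SPEED_OF_SOUND

      max_itd = interaural_time_difference(90.0)        # source directly to one side
      print("maximum ITD: %.2f ms" % (max_itd * 1e3))   # about 0.64 ms with these values
      print("frequency whose period equals that ITD: %.0f Hz" % (1.0 / max_itd))

  With these assumed values the maximum delay comes out near 0.6-0.7 milliseconds, the same order of magnitude as the 0.8 millisecond figure cited above.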
  • the difference in amplitude between the sound arriving at the two ears results principally from the diffracting and shadowing effect of the head and external ear pinna. These effects are greater above 400 Hz and become the source of information the brain interprets to determine the direction of the source for higher frequencies.
  • the antiphasic image does not manifest itself as a point source, but is diffused and forms the rear boundary of the listener's conceptual image space.
  • virtual images can be generated along an arc or semicircle from the back of the observer's head toward the left or right speakers.
  • the "precedence effect” Another factor which influences the perception of sound is the "precedence effect" wherein the first sound to be heard takes command of the ear-brain mechanism, and sound arriving up to 50 milliseconds later seems to arrive as part of and from the same direction as the original sound.
  • By delaying the signal sent to one speaker, as compared to the other, the apparent direction of the source can be changed. As part of the precedence effect, the apparent source direction is operative through signal delay for up to 30 milliseconds. The effect is dependent upon the transient characteristics of the signal.
  • An intrinsic part of the precedence effect, yet an identifiably separate phenomenon, is known as "temporal fusion," which fuses together the direct and delayed sounds.
  • the ear-brain mechanism blends together two or more very similar sounds arriving at nearly the same time. After the first sound is heard, the brain suppresses similar sounds arriving within about the next 30 milliseconds. It is this phenomenon which keeps the direct sound and room reverberation all together as one pleasing and natural perception of live listening. Since the directional hearing mechanism works on the direct sound, the source of that sound can be localized even though it is closely followed by multiple waves coming from different directions.
  • the walls of the room are reflection surfaces from which the direct sound reflects to form complex reflections.
  • the first reflection to reach the listener is known as a first order reflection; the second, as second order, etc.
  • An acoustic image is formed which can be considered as coming from a virtual source situated on the continuation of a line linking the listener with the point of reflection. This is true of all reflection orders. If we generate signals which produce virtual images, boundaries are perceived by the listener. This is a phenomenon of conditioned memory. The position of the boundary image can be expanded by amplitude and phase changes within the signal generating the virtual images. The apparent boundary images broaden the perceived space.
  • Audio information affecting the capability of the ear-brain mechanism to judge location, size, range, scale, reverberation, spatial identity, spatial impression and ambience can be extracted from the difference between the left and right source. Modification of this information through frequency shaping and linear delay is necessary to produce the perception of phantom image boundaries when this information is mixed back with the original stereo signal at the antiphasic image position.
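  The processing outlined in this passage, deriving a signal from the left-right difference, shaping and delaying it, and mixing it back with opposite polarity into the two channels, can be sketched as follows. This is only a minimal illustration of the idea, not the patented circuit; the filter order, cutoff, delay and gain values are placeholder assumptions.

      import numpy as np
      from scipy.signal import butter, lfilter

      def add_conditioning(left, right, fs, cutoff_hz=300.0, delay_ms=1.0, gain=0.3):
          """Derive a conditioning signal from L-R, high-pass and delay it,
          then re-insert it with opposite polarity in the two channels."""
          diff = left - right                                      # directional information
          b, a = butter(3, cutoff_hz / (fs / 2.0), btype="high")   # ~18 dB/octave high-pass
          cond = lfilter(b, a, diff)
          d = int(round(delay_ms * 1e-3 * fs))
          cond = gain * np.concatenate([np.zeros(d), cond])[:len(diff)]
          return left + cond, right - cond                         # antiphasic insertion

      # usage with a dummy stereo buffer
      fs = 48000
      t = np.arange(fs) / fs
      left = np.sin(2 * np.pi * 440 * t)
      right = 0.8 * np.sin(2 * np.pi * 440 * t + 0.2)
      out_l, out_r = add_conditioning(left, right, fs)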
  • the common practice of the recording industry, for producing a stereo signal, is to use two or more microphones near the sound source. These microphones, no matter how many are used, are always electrically polarized in-phase.
  • When the program source is produced under these conditions (which are industry standard), the apparatus described herein generates a "synthetic" conditioning signal for establishment of a third point with its own time domain. This derivation is called synthetic because there is a separation, alteration and regrouping to form the new whole.
  • a third microphone may be used to define the location of the third point in relation to the stereo pair. Contrary to the normal procedure of adding the output of a third microphone to the left and right side of the stereo microphone pair, the third microphone is added to the left stereo pair and subtracted from the right stereo pair.
  • This arrangement provides a two-channel stereo signal which is composed of a left signal, a right signal, and a recoverable signal which has its source at a related but separate position in the acoustic space being recorded.
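  A minimal sketch of the combining arrangement just described, with the conditioning microphone added to the left channel and subtracted from the right. The gain value is an operator setting; the 0.5 used here is only an example.

      import numpy as np

      def combine_organic(left_mic, right_mic, cond_mic, cond_gain=0.5):
          """Figure-6-style combiner: C is gain-adjusted, then added to the
          left channel and subtracted from the right channel."""
          c = cond_gain * np.asarray(cond_mic, dtype=float)
          # C cancels in the mono sum: (L + C) + (R - C) = L + R
          return np.asarray(left_mic, dtype=float) + c, np.asarray(right_mic, dtype=float) - c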
  • This is called organic derivation, and it compares to the synthetic situation discussed above, where the ratios are proportional to the left minus the right (from which it was derived) but is based on its own time reference, which is, as will be seen, related to the spacing between the three microphones.
  • the timing of the organic conditioning signal is contingent upon the position of the original sound source with respect to the three microphones. The information derived more closely approximates the natural model than that of the synthetically derived conditioning signal.
  • All sources of sound recorded with two or more microphones in synthetic or organic situations contain the original directional cues.
  • a portion of the original directional cues are isolated, modified, reconstituted and added, in the form of a conditioning signal, to the original forming a new whole.
  • the new whole is in part original and in part synthetic.
  • the control of the original-to-synthetic ratio is under the direction of the operator via two operating modes:
  • Insertion of the conditioning signal at the antiphasic image position produces enhancement to and generation of increased spatial density in the stereo mode but is completely lost in the mono mode where the directional information will be unused.
  • Information which can be lost in the mono mode without upsetting the inner-instrument musical balance includes clues relating to size, location, range and ambience but not original source information.
  • directional information is obtained exclusively from the very source which is lost in the monophonic mode, namely, left signal minus right signal.
  • Whether the conditioning signal is derived in the synthetic or the organic model, subtracting the left signal from the right signal and reinserting it at the antiphasic position will not compromise mono/stereo compatibility, provided that the level of the conditioning signal does not cause the total RMS difference energy to exceed the total RMS summation energy at the output.
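  The compatibility condition stated here (the RMS difference energy must not exceed the RMS summation energy at the output) can be checked numerically. The sketch below is illustrative only; the simple gain back-off is not the control strategy the patent describes.

      import numpy as np

      def rms(x):
          return float(np.sqrt(np.mean(np.square(x))))

      def mono_compatible_gain(left, right, cond, gain):
          """Reduce the conditioning gain, if needed, so that the RMS of the
          output difference does not exceed the RMS of the output sum."""
          out_l, out_r = left + gain * cond, right - gain * cond
          diff_rms, sum_rms = rms(out_l - out_r), rms(out_l + out_r)
          if diff_rms > sum_rms > 0.0:
              gain *= sum_rms / diff_rms   # crude back-off; the patent uses a control loop
          return gain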
  • a conditioning signal is provided and introduced into electronic signals which are to be reproduced through two spaced loudspeakers so that the perceived sound frame between the two loudspeakers is an open field which at least extends toward the listener from the plane between the loudspeakers and may include the perception of boundaries which originate to the side of the listener.
  • the conditioning signal may be organic, if the original sound source is appropriately miked, or it may be derived from the left and right channel stereo signals.
  • the present invention provides an automatic stereophonic image enhancement system and apparatus wherein two channel stereophonic sound is reproduced with signals therein which generate a third image point with which boundary image planes can be perceived within the listening experience resulting in an extended conceptual image space for the listener.
  • the present invention provides a stereophonic image enhancement system which includes automatic apparatus for introducing the desired density of conditioning signal regardless of program content into the electronic signal which will be reproduced through the two spaced speakers.
  • Figure 1 is a perspective view of a listener facing two spaced loudspeakers, and showing the outline of an enclosure.
  • Figure 2 is a schematic plan view of the perception of a sound frame which includes a synthetic conditioning signal which is included in the signals to the speakers.
  • Figure 3 is a schematic plan view of the perceived open field sound frame where an organic conditioning signal is introduced into the signal supplied to the speakers.
  • Figure 4 is a schematic plan view of the open field sound frame, as perceived from the listener's point of view, as affected by various changes within the conditioning signal.
  • Figure 5 is a schematic plan view of a sound source and microphone placements which will organically produce a conditioning signal.
  • Figure 6 is a simplified schematic diagram of a circuit which combines the organically derived conditioning signal with the left and right channel signals.
  • Figures 7(a) and 7(b) form a schematic electrical diagram of the automatic stereophonic image enhancement system and apparatus in accordance with this invention.
  • Figure 8 is a schematic electrical diagram of an alternate circuit therefor.
  • Figure 9 is a front view of the control panel for the apparatus of Figure 8.
  • Figures 10(a) and 10(b) form a digital logic diagram of a digital embodiment of the invention.
  • Figure 11 is a front view of a joystick box, a control box, and an interconnecting data cable 420 which can be used to house the embodiment of the invention described with reference to Figures 12(a), 12(b), 13(a)-13(f), 14(a) and 14(b).
  • Figures 12(a) through 12(d) form a schematic diagram of an embodiment of the invention wherein joysticks may be used to move a sound around in a perceived sound field.
  • Figures 13(a)-13(f) are graphical representations of the control outputs which are generated by the joysticks and associated circuitry and applied to voltage controlled amplifiers of Figures 12(a)-12(d).
  • Figures 14(a) and 14(b) form a digital sound processor logic diagram similar to that of Figures 10(a) and 10(b), but adapted for use as the digital sound processor 450 in Figures 12(a)-12(d)
  • Figure 15 is a schematic diagram of an embodiment of the invention which is adapted for use in consumer quality audio electronics apparatus, of the type which may be used in the home, etc.
  • Figure 16 is a block diagram of an embodiment of the invention adapted for use in consumer-quality audio electronic apparatus, which embodiment includes an automatic control circuit for controlling the amount of spatial enhancement which the circuit generates.
  • Figures 17A and 17B may be joined to form a schematic diagram corresponding to the block diagram of Figure 16.
  • Figure 18 is a block diagram of another embodiment of the invention, which block diagram includes an integrated circuit implementing the circuitry of Figure 15 or of Figures 16, 17a and 17b.
  • Figure 19 is a block diagram of another embodiment, similar to the embodiment of Figure 18, but providing for multiple inputs.
  • Tables A through F set forth the data which is graphically presented in Figures 13(a)-(f), respectively.
  • Tables G through X set forth additional data.
  • Figure 1 illustrates the usual physical arrangement of loudspeakers for monitoring of sound. It should be understood that in the recording industry sound is "monitored" during all stages of production. It is "reproduced" when production is completed and the product is in the marketplace. From that point on, what is being reproduced is the production. Several embodiments of the invention are disclosed. Some embodiments are intended for use during sound production, while one embodiment is intended for use during sound reproduction, in the home, for example.
  • Embodiments of the invention include the system and apparatus illustrated in a first embodiment in Figures 5 and 6, a second embodiment 10 in Figure 7, a third embodiment 202 in Figure 8, a fourth embodiment of Figures 10(a) and 10(b), a fifth and presently preferred embodiment (for professional studio use) in Figures 11, 12(a), 12(b), 13(a)-13(f), 14(a) and 14(b).
  • These embodiments may be employed in record, compact disc, mini-disc, cassette, motion picture, video and broadcast production, to enhance the perception of sound by human subjects, i.e. listeners.
  • Another and sixth embodiment, which is disclosed with reference to Figure 15 may be used in a consumer quality stereo sound apparatus found in a home environment, for example.
  • the two loudspeakers 12 and 14 are of suitable quality with enclosures to produce the desired fidelity. They are laterally spaced, and the listener 16 faces them and is positioned substantially upon a normal plane which bisects the line between the speakers 12 and 14. Usually, the listener is enclosed in a room, shown in phantom lines, with the loudspeakers. During reproduction, the two loudspeakers may be of any quality. The loudspeaker and listener location is relatively unimportant. During monitoring, the effect is one of many separate parts being blended together. Hence, monitoring requires a standard listening position for evaluating consistency, whereas during reproduction, the effect has become one with the whole sound and can be perceived from any general location.
  • the loudspeakers 12 and 14 should be considered monitors being fed from an electronic system which includes the sound production enhancement apparatus of this invention.
  • the electronic system may be a professional recording console, multi-track or two-track analogue, or digital recording device, with a stereophonic two-channel output designated for recording or broadcasting.
  • the sound source material may be a live performance, recorded material, or a combination of the foregoing.
  • Figure 2 illustrates the speakers 12 and 14 as being enclosed in what is perceived as a closed field sound frame 24 (without the lower curved lines 17 and 26) which is conventional for ordinary stereophonic production.
  • the apparent source can be located anywhere within the sound frame 24, that is, between the speakers.
  • When a synthetic conditioning signal is reinserted at the antiphasic image position 34, amplitude and time ratios 17 are manifested between the three points 12, 14 and 34. Because the antiphasic point 34 is the interdependent product of the left point 12 and the right point 14, the natural model is approached by a synthetic construction, but never fully realized. The result is the open field sound frame 26. Listener 16 perceives the open field 26.
  • Figure 3 illustrates open field sound frame 28 which is perceived by listener 16 when a conditioning signal derived, as in Figure 2, is supplied and introduced as part of the signal to speakers 12 and 14, but has as its source an organic situation.
  • the density of spatial information is represented by the curved lines 17 in Figure 2 and is represented by the curved lines 19 in Figure 3. It is apparent that the density of spatial information is greater in Figure 3 because the three points which produced the original conditioning signal are not electrically interdependent but are acoustically interactive; information more closely reflecting the natural model is supplied to the ear-brain mechanism of listener 16.
  • Figure 4 illustrates the various factors which are sensed by the listener 16 in accordance with the stereophonic image enhancement systems of this invention.
  • the two speakers 12 and 14 produce the closed field sound frame 24 when the speakers are fed with homophasic signals. Homophasic image position 30 is illustrated, and the position can be shifted left and right in the frame 24 by control of the relative amplitude of the speakers 12 and 14.
  • the speakers 12 and 14 produce left and right real images, and a typical hard point image 32 is located on the line between the speakers because it is on a direct line between the real images produced by the two real speakers. As described above, the hard point source image can be shifted between the left and right speakers.
  • the antiphasic image position 34 is produced by speakers 12 and 14 and may be perceived as a source location behind the listener's head 16 at 34 under test or laboratory demonstrations. Under normal apparatus operating conditions, source 34 is not perceived separately but, through temporal fusion, is the means by which an open field sound frame is perceived. Position 34 is a perceived source, but is not a real source. There is no need for a speaker at position 34. Rather, by controlling the relationship between the antiphasic image position and one or both of the real images all produced by speakers 12 and 14, the image source can be located on a line between one of the real images and the antiphasic image position 34.
  • the point between it and speakers 12 and 14 is considered a soft point source image.
  • Such a soft point source image is shown at point 36.
  • An open field sound frame is thus produced and provides the perception of virtual space boundaries 40, 42, 44 or 46 (not on line), depending on the conditioning signal's phase relationship to the original source.
  • the perceived distance for the virtual space boundaries 40, 42, 44 and 46 from the closest hard point is from 2 to 30 feet (approximately 1-10 meters), depending on the dimension control setting of Figure 5 and the distance between speakers 12 and 14.
  • Figure 18 is a schematic diagram of an eighth embodiment of the invention, which modifies the embodiment described with reference to Figure 15.
  • Figure 19 is a schematic diagram of a ninth embodiment of the invention, which is similar to the eighth embodiment, but which includes panning pots connected to the inputs of the circuitry, such as via a conventional recording console.
  • Figure 5 is a schematic diagram of a sound source which is to be recorded or amplified.
  • Three microphones L, R and C are shown located in front of the source.
  • the L (left) and R (right) microphones are approximately equally spaced from the source on its left and right sides.
  • the C (conditioning) microphone is located farther from the source and approximately equally spaced from the L and R microphones.
  • the signal from the C microphone is adjusted in gain and then is added to (at adder A, for example) and subtracted from (at subtractor S, for example) the stereo signals L and R, as shown in Figure 6.
  • By adjusting the gain of the conditioning signal C, the amount of expansion which occurs can be controlled easily.
  • the conditioning signal, C is produced organically, that is, by a microphone array pickup as shown in Figure 5 and connected as shown in Figure 6.
  • the conditioning signal can be created synthetically, and introduced into the left and right channel signals, when (1) the sound source is mixed-down from a prerecorded tape in a recording studio, for example, (2) the sound is broadcast, or (3) when prerecorded sound is received or reproduced in a home environment.
  • the conditioning signal is delayed time-wise and filtered compared to the signals from microphones L and R due to the placement of microphone C.
  • the left input lines 48 and 49 and right input lines 50 and 51 are received from musical signal sources.
  • the system and apparatus 10 is described in this embodiment as being a system which introduces the conditioning signal before the two-channel recording and, thus, is a professional audio laboratory and apparatus.
  • the left and right inputs 48, 49, 50 and 51 may be the product of a live source or a mixdown from a multiple channel tape produced by the live recording, or it may be a computer generated source, or a mixture of same.
  • the inputs of the apparatus 48, 49, 50 and 51 address the output of the recording console's "quad bus" or "4-track bus".
  • Each position on the recording console can supply each and every bus of the quad bus with a variable or "panned" signal representing a particular position.
  • Two channels 49, 51 of the quad bus are meant for use as stereo or the front half of quadraphonic sound; the other two channels, 48, 50, are for the rear half of quadraphonic sound.
  • each position or input of a modern recording console has a panning control to place the sound of the input between left, right, front or back via the quad bus.
  • a recording console may have any number of inputs or positions which are combined into the quad bus as four separate outputs.
  • Alternate insertion of the apparatus of Figure 7 is possible in the absence of a quad bus by using the stereo bus plus two effect buses.
  • Left front input 49 (unprocessed) is connected to amplifier 52.
  • Left rear input 48 (to be processed) is connected to amplifier 54.
  • Right rear input 50 (to be processed) is connected to amplifier 56.
  • Right front input 51 (unprocessed) is connected to amplifier 58.
  • the outputs of amplifiers 52 and 58 are connected to adders 60 and 62, respectively, so that amplifiers 52 and 58 effectively bypass the enhancement system 100.
  • the use of the quad bus allows the apparatus to address its function to each input of a live session or each track of recorded multi-track information, separately. This means that, in production, the operator/engineer can determine the space density of each track rather than settling for an overall space density. This additional degree of creative latitude is unique to this apparatus and sets it apart as a production tool.
  • the amplified left and right signals in lines 68 and 70 are both connected to summing amplifier 72 and differencing amplifier 74.
  • the output of summing amplifier 72 in line 76 is, thus, L+R, while differencing amplifier 74 also serves to invert its output so that the difference appears in line 78 as -(L-R).
  • These sum and difference signals in lines 76 and 78 are added together in adder 60 to generate the left program with a conditioning signal CL, which adds additional spatial effects to the left channel.
  • the signal in line 78 also goes through invertor 80 to produce in line 82 the (L-R) signal. Lines 76 and 82 are introduced into adder 62 to generate in its output line 84 the right program with conditioning signal CR.
  • the output lines 79 and 84 from adders 60 and 62 go to the balanced-output amplifiers 86 and 88 for the left output and 90 and 92 for the right output.
  • the output amplifiers are preferably differential amplifiers operating as a left pair and a right pair, with one of each pair operating in inverse polarity with the other half of each pair for balanced line output.
  • Conditioning signals CL and CR are similar to conditioning signal C of Figure 6, but are synthetically produced. They also have somewhat different frequency filtering, which tends to broaden the rear sound images, particularly at the antiphasic position 34 (Figure 4).
  • Conditioning signals CL and CR are derived from the difference signal -(L-R) in line 78 at the output of differencing amplifier 74.
  • the difference signal in line 78 passes through high pass filter 94 which has a slope of about 18 decibels per octave and a cutoff frequency of about 300 Hz to prevent comb filtering effects at lower frequencies.
  • the filtered signal preferably, but not necessarily, passes through delay 96 with an adjustable and selectable delay as manually input from manual control 98, which is called "the Dimension Control".
  • the output of the delay 96 goes to voltage controlled amplifier (VCA) 102 which provides level control.
  • the DC control voltage in line 104, which controls voltage controlled amplifier 102, is supplied by potentiometer 106 in the Manual Mode, or by the hereinafter described control circuit in the Automatic Mode.
  • Potentiometer 106 provides a DC voltage divided down from a DC source 107. It functions as a "Space Control" and it effectively controls the amount of expansion of the sound perceived by a listener, i.e., it controls the amount of the conditioning signal which is added and subtracted from the left and right channel signals.
  • the output from voltage controlled amplifier 102 in line 108 is preferably connected via left equalizer 110 and right equalizer 112 for proper equalization and phasing for the individual left and right channels, which tends to broaden the rear image.
  • the illustrated equalizers 110 and 112 are of the resonant type (although they could be any type) with a mid-band boost of 2 dB at a left channel center frequency in equalizer 110 of about 1.5 kilohertz and a right channel center frequency in equalizer 112 of about 3 kilohertz.
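  Taken together, the conditioning path just described (a roughly 300 Hz high-pass with an 18 dB/octave slope, the Dimension Control delay, the Space Control level, and the 2 dB resonant boosts at about 1.5 kHz and 3 kHz) can be sketched as below. The peaking-filter implementation and its Q are assumptions; the patent gives only the center frequencies and the amount of boost.

      import numpy as np
      from scipy.signal import butter, lfilter, iirpeak

      def conditioning_chain(diff, fs, delay_ms, space_gain):
          """Sketch of the Figure 7 conditioning path applied to -(L-R)."""
          b, a = butter(3, 300.0 / (fs / 2.0), btype="high")   # ~18 dB/octave high-pass
          x = lfilter(b, a, diff)
          d = int(round(delay_ms * 1e-3 * fs))                 # Dimension Control delay
          x = np.concatenate([np.zeros(d), x])[:len(diff)]
          x = space_gain * x                                   # Space Control (VCA 102)

          def peak_boost(sig, f0, gain_db=2.0, q=1.0):         # resonant +2 dB boost (assumed Q)
              bp_b, bp_a = iirpeak(f0 / (fs / 2.0), Q=q)
              return sig + (10 ** (gain_db / 20.0) - 1.0) * lfilter(bp_b, bp_a, sig)

          return peak_boost(x, 1500.0), peak_boost(x, 3000.0)  # equalizers 110 and 112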
  • the left conditioning signal -CL occurs in line 114 and the right conditioning signal -CR occurs in line 116.
  • the left conditioning signal -CL is added in adder 60.
  • the right conditioning signal in line 116 is connected to invertor 80, where the conditioning signal -CR is added to the difference signal -(L-R); that combination is then added to the sum signal, resulting in the right signal minus the right conditioning signal on line 84 and the left signal plus the left conditioning signal on line 79.
  • the automatic control circuit generally indicated at 118 monitors the output signals in lines 79 and 84 and regulates the amount of conditioning signal to keep a Lissajous figure generated on an X-Y oscilloscope, connected to the outputs, relatively constant.
  • the Lissajous figure is a figure displayed on the CRT of an oscilloscope when the two outputs are connected to the sweep and amplitude drives of the oscilloscope. When the Lissajous figure is fairly round, the energy ratio between the sum and difference of the two outputs is substantially equal (a desirable characteristic).
  • Lines 84 and 79 are respectively connected to the inputs of differencing amplifier 120 and adding amplifier 122.
  • the outputs are respectively rectified, and rectifiers 124 and 126 provide signals in lines 128 and 130.
  • the signals in lines 128 and 130 are, thus, the full-wave rectified difference and sum signals of the apparatus output, out of subtractor 120 and adder 122, respectively.
  • Lines 128 and 130 are connected to filters 132 and 134, which have adjustable rise and fall ballistics.
  • Selector switch 136 selects between the manual and automatic control of the control voltage in line 104 to voltage controlled amplifier 102.
  • the manual position of selector switch 136 is shown in Figure 7(a), and the use of the space expansion control potentiometer 106 has been previously described.
  • When the space control switch is switched to the other, automatic, position, the outputs of filters 132 and 134 in lines 138 and 140, respectively, are processed and are employed to control voltage controlled amplifier 102.
  • When space control selector switch 136 is in the automatic position, the output of error amplifier 142 is connected through gate 144 to control the voltage in line 104.
  • the error amplifier 142 has inputs directly from line 138 and from 140 through switch segment 146 and back through line 148.
  • the filtered sum signal in line 140 is connected through the space expansion potentiometer 106 so that it can be used to reduce the apparent level of the output sum information to error amplifier 142 to force the error amplifier 142 to reduce the sum/difference ratio.
  • Comparator 150 is connected to receive the filtered sum and difference information in lines 138 and 140. Comparator 150 provides an output into gate line 152 when space control selector switch 136 is in the automatic mode and when a monophonic signal is present at inputs 48 and 50. This occurs, for example, when an announcer speaks between music material. When comparator 150 senses monophonic material, gate line 152 turns off gate 144 to shut down voltage controlled amplifier 102 to stop the conditioning signal. This is done to avoid excessive increase in stereo noise, from random phase and amplitude changes, while the input program material is fully balanced. The automatic control circuit 118 cannot distinguish between unwanted noise and desired program material containing difference information.
  • a threshold ratio is established between the sum and difference information in lines 138 and 140 by control of the input potentiometer into comparator 150.
  • the comparator 150 and gate 144 thus avoid the addition of false space information in a conditioning signal which, in reality, would be a response to difference-noise in the two channels.
  • the comparator 150 thus requires a specific threshold ratio between the sum and difference information, under which the gate 144 is turned off and over which the gate 144 is turned on.
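  The gating behaviour described in the last few paragraphs can be illustrated with a short sketch. The smoothing time constant and the threshold ratio are placeholders; in the circuit they are set by the filter ballistics and the comparator's input potentiometer.

      import numpy as np

      def conditioning_gate(out_l, out_r, fs, threshold_ratio=0.05, smooth_ms=50.0):
          """Full-wave rectify and smooth the output sum and difference, then
          return False (gate off, conditioning muted) when the programme is
          effectively monophonic, i.e. the difference envelope is below a
          threshold fraction of the sum envelope."""
          alpha = float(np.exp(-1.0 / (smooth_ms * 1e-3 * fs)))

          def envelope(x):
              env, out = 0.0, np.empty(len(x))
              for i, v in enumerate(np.abs(x)):   # rectifier followed by a peak-hold style filter
                  env = max(v, alpha * env)
                  out[i] = env
              return out

          diff_env = envelope(out_l - out_r)
          sum_env = envelope(out_l + out_r)
          return bool(np.mean(diff_env) > threshold_ratio * np.mean(sum_env))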
  • Lines 154 and 156, which are the inputs of amplifiers 52 and 58, are connected, along with lines 68, 70, 79 and 84, each through its own diode, to bus 158.
  • Bus 158 is connected through a resistance to input 160 of comparator 162.
  • a negative constant voltage source is connected through another resistor to the input 160, and the comparator 162 is also connected to ground.
  • Comparator 162 provides an output signal 164, such as a signal light, to warn when signal levels approach clipping.
  • Bus 158 is similarly connected through a resistor to the input 166 of comparator 168.
  • the negative voltage source is connected through another resistor to input 166, and the resistance values are adjusted so that comparator 168 has an input when clipping is taking place.
  • Latching circuit 170 is actuated when clipping has taken place to illuminate the two signal lights 172 and 174. Those lights stay illuminated until reset 176 is actuated.
  • Comparator 184 gives an output pulse whenever the difference peak envelope becomes greater than the sum peak envelope, within plus or minus 3 dB.
  • the level controls at the outputs of the peak followers 180 and 182 allow an adjustment in the plus or minus 6 dB difference for different applications.
  • Comparator 186 has an output when sum/difference peak ratio approaches the trigger point of comparator amplifier 184 within about 2 dB, and lights signal light 188 on the front panel, illustrated in Figure 7(b), as a visual warning of approaching L-R overload. This is accomplished by reducing the apparent level of the sum envelope by about 2 dB with the potentiometer connecting comparator 186 to ground.
  • the output of comparator amplifier 184 feeds a latching circuit 190 which activates light 195 and which holds until reset by switch 192.
  • When the latching circuit is active, it activates driving circuit 194, which lights panel lights 196 and 197 and, after a time delay, rings audible alarm 198. At the same time, driving circuit 194 energizes line 199, which cuts off gate 144 to withhold the signal to amplifier 102 which controls the conditioning signal. Actuation of gate 144 removes the conditioning signal from line 108, but permits the normal stereo signal to continue through the circuit.
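  A rough software analogue of this L-R overload warning, with a 2 dB pre-warning margin and a latch, might look like the sketch below. The peak envelopes are assumed to have been measured already; only the margin value is taken from the text.

      import numpy as np

      def overload_monitor(sum_env, diff_env, warn_margin_db=2.0):
          """Compare the peak envelopes of L+R and L-R. Warn when the difference
          comes within warn_margin_db of the sum; latch when it exceeds it."""
          eps = 1e-12
          ratio_db = 20.0 * np.log10((np.asarray(diff_env) + eps) /
                                     (np.asarray(sum_env) + eps))
          warning = bool(np.any(ratio_db > -warn_margin_db))   # approaching overload (light 188)
          latched = bool(np.any(ratio_db > 0.0))               # held until reset (latch 190)
          return warning, latched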
  • A third embodiment of the system and apparatus of this invention is shown in Figure 8 and is generally indicated at 200.
  • the left front quad bus channel addresses unprocessed input 49, which is connected to amplifier 204; the left rear quad bus channel addresses processed input 48, which is connected to amplifier 206; the right rear quad bus channel addresses processed input 50, which is connected to amplifier 212; and the right front quad bus channel addresses unprocessed input 51, which is connected to amplifier 214.
  • Amplifiers 204, 206, 212 and 214 are inverting and provide signals in lines 208, 210, 216 and 218, respectively. Both lines 208 and 210 are connected to summing amplifier 220, while both lines 216 and 218 are connected to summing amplifier 222. Lines 210 and 216 carry -L and -R signals.
  • the conditioning signals CR and -CL are derived by connecting differencing amplifier 224 to both lines 210 and 216.
  • the resulting difference signal, -(R-L) is filtered in high pass filter 226, similar to filter 94 in Figure 7(a), and the result is subject to selected delay in delay circuit 228.
  • the delay time is controlled from the front panel, as will be described with respect to Figure 9.
  • the output from delay 228 goes through voltage controlled amplifier 230 which has an output signal, -C, in line 232, which is supplied to both non-inverting equalizer 234 and inverting equalizer 236.
  • Those equalizers respectively have conditioning signal outputs -CL and +CR, which are connected to the inverting summing amplifiers 220 and 222.
  • the left conditioning signal -CL is added (and inverted) with the original left signal at amplifier 220 to form L+CL.
  • the right conditioning signal +CR is effectively subtracted from the original right signal at invertor amplifier 222 to form R-CR.
  • the outputs from amplifiers 220 and 222, in lines 238 and 240, respectively, are preferably and respectively connected to balanced left amplifiers 242 and 244 and balanced right amplifiers 246 and 248, in the manner described with respect to amplifiers 86 through 92 of Figure 7(b). It may be useful to connect the various points in the circuit of Figure 8 to the clipping and L-R overload warning circuits 153 and 178 in the same manner as previously described with reference to Figure 7(b).
  • VCA 230 may be manually controlled by a potentiometer and DC supply combination, such as potentiometer 106 and supply 107.
  • the difference between the two embodiments of the system in Figures 7(a), 7(b) and 8 lies in the way the original left and right signals are routed.
  • In the embodiment of Figures 7(a) and 7(b), the left and right signals are added and subtracted. This sum and difference information is then re-added and re-subtracted to reconstruct the original left and right signals.
  • In the embodiment of Figure 8, the original left and right signals are not mixed together. They remain independent of each other from input to output.
  • the enhancement system may be automatic with self-controlling features in the apparatus so that the stereophonic image enhancement can be achieved without continual adjustment of the system and apparatus.
  • manual control may be used, if desired.
  • Figures 10(a) and 10(b) form a digital logic diagram of a digital embodiment of the invention which is conceptually somewhat similar to the analog, or mostly analog, embodiment of Figures 7(a) and 7(b).
  • data transmission lines are shown in solid lines, while control lines are shown in dashed lines.
  • Left and right audio channel information is supplied in multiplexed digital format at an input 302.
  • Clock information is also supplied at an input 304 to a formatter 306 which separates the left channel information from the right channel information.
  • formatter 306 de-multiplexes the digital data which can be supplied in different multiplexed synch schemes. For example, a first scheme might assume that the data is being transmitted via a Crystal Semiconductor model CS8402 chip for AES-EBU, S-PDIF inputs, or a second scheme might assume that the digital data comes from an analog to digital converter such as a Crystal Semiconductor model CS5328 chip.
  • the I/O mode input 305 preferably advises the formatter 306 at the front end and the formatter 370 at the rear end of the type of de-multiplexing and multiplexing schemes required for the chips upstream and downstream of the circuitry shown in Figures 10(a) and (b).
  • Those skilled in the art will appreciate that other multiplexing and de-multiplexing schemes can be used or that the left and right channel data could be transmitted in parallel, i.e., non-multiplexed data paths.
  • the left channel digital audio data appears on line 308 while the right channel digital audio data appears on line 309. The left channel data is subtracted from the right channel data at a subtractor 324 to form R-L data.
  • the R-L data is supplied to a switch 329 and may be filtered through a high-pass filter 326 and a low-pass filter 327, and is subjected to digital time delay at device 328.
  • the signal is filtered by filter 310 having a narrow band pass preferably centered at 500 Hz with 6 dB/octave slopes on either side of its center frequency.
  • filters 326, 327 and 310 are represented as they might be in an analog embodiment, i.e., as separate filters.
  • filters 326, 327 and 310 are preferably implemented in a digital signal processor; their functions, along with delay 328, may be performed in different sequences and can be combined as a matter of design choice. If desired, filters 326 and 327 may be eliminated.
  • Switch 329 is controlled by a C-mode control 303. Switch 329 is shown in Figures 10(a) and (b) in its C-mode position, that is, the position in which the filters 326, 327 and 310 and the time delay 328 are bypassed.
  • the C-mode is preferably used when the apparatus is used with live sources, such as might be encountered during a concert or a theatrical performance, and a C microphone input source ( Figures 5 and 6) is available, so that the C signal then need not be synthetically produced.
  • the R-L data is preferably subjected to the filtering and time delay to generate the conditioning signal C when the invention is used to mixdown a recorded performance from a multi-track tape deck, for example.
  • the conditioning signal then passes through variable gain digital circuit 330, which is functionally similar to the voltage controlled amplifier 102 shown in Figure 7(b).
  • a mute control input can be used to reduce the gain at gain control 330 very quickly, if desired.
  • the output of variable gain digital circuit 330 is applied to an adder 320 and to a subtractor 332 so that the conditioning signal C is added to and subtracted from the left audio data and right audio data on lines 379 and 384, respectively. That data is then multiplexed at formatter 370 and output in digital form at serial output 390.
  • variable gain circuitry 330 which can be implemented rather easily in the digital domain by shifting bits, for example, is controlled either from a manual source or an automatic source, much like the voltage controlled amplifier 102 of Figure 7(b).
  • the gain through circuitry 330 is controlled by a "space control" input 362 which is conceptually similar to the space control potentiometer 106 shown in Figure 7(a) and the potentiometer shown in Figure 6.
  • the gain in circuitry 330 is automatically controlled in a manner similar to that of Figures 7(a) and (b).
  • the data on lines 379 and 384 are summed at a summer 342 and, at the same time, subtracted at subtractor 340.
  • the outputs are respectively applied to high-pass filters 346 and 344, whose outputs are in turn applied to root mean square (RMS) detectors 350 and 348, respectively.
  • Detector 348 outputs a log difference signal, while detector 350 outputs a log sum signal.
  • the value of the log difference signal from detector 348 can be controlled from the "Space In" input 362 at adder 352, in the automatic mode, so that the "Space In" value offsets the output of the log difference detector.
  • the output of adder 352 and the log sum output from detector 350 are applied to a comparator 354, which is conceptually similar to the comparator 150 of Figure 7(a).
  • the output of comparator 354 is applied to a rate limiter 356, which preferably limits the rate at which the output from comparator 354 can change the gain of circuit 330 to approximately 8 dB per second.
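  One block-wise update of the control loop just described might be sketched as follows, working on log-domain (dB) levels and limiting the gain slew to roughly 8 dB per second; the block-based formulation and detector details are assumptions.

      import numpy as np

      def auto_space_gain(sum_level_db, diff_level_db, space_in_db,
                          prev_gain_db, block_s, max_slew_db_per_s=8.0):
          """Compare the offset log-difference level against the log-sum level
          (adder 352 / comparator 354) and slew the conditioning gain toward
          balance at no more than max_slew_db_per_s (rate limiter 356)."""
          error_db = sum_level_db - (diff_level_db + space_in_db)
          max_step = max_slew_db_per_s * block_s
          step = float(np.clip(error_db, -max_step, max_step))
          return prev_gain_db + step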
  • The circuitry shown in Figures 10(a) and (b), instead of being implemented in discrete digital circuitry, can preferably be implemented by programming a digital signal processor chip, such as the model DSP56001 chip manufactured by Motorola, by known means.
  • the automatic control circuitry 378 is also shown in Figures 10(a) and (b). When switch 367 is in its automatic position, the automatic control circuitry 378 effectively controls the amount of spatial effect added by the invention depending upon the amount of spatial material initially in the left and right audio. That is to say, if the left and right audio data being input into the circuitry has high spatial impressions already, the amount of spatial effect added by the present invention is less than if the incoming material has less spatial impression information in it originally.
  • the control circuitry 378 also helps to keep the envelope of the L-R signal less than the envelope of the L+R signal. That can be important for FM and television broadcasting where governmental agencies, such as the FCC in the United States, often prefer that the broadcast L-R signal be no greater than the L+R signal.
  • the embodiment of the invention disclosed with respect to Figures 10(a) and (b), is particularly useful in connection with the broadcast industry where the spatial effects added by the circuitry can be automatically controlled without the need for constant manual input.
  • the present invention is completely mono-compatible; that is to say, when the present invention is used to enhance the spatial effects in either an FM radio broadcast or a television FM sound broadcast, receivers which are not equipped with stereo decoding circuitry do not produce any undesirable effects in their reproduction of the L+R signal due to the spatial effects which are added by the present invention to the L-R signal being broadcast.
  • the R/L equalization on line 312 controls the amount of boost provided by filter 310. That boost is currently set in the range of 0 to +8 dB, and more preferably at +4 dB.
  • the center frequency of filter 310 is preferably set at 500 Hz, but it has been determined that filter 310 may have center frequencies in the range of 300 Hz to 3 kHz.
  • the WARP In input to time delay 328 adjusts the time delay.
  • the time delay is preferably set at zero delay for audio reproduction, 1.0 msec for broadcasting applications, 4-6 msec for mechanical record cutting, and up to 8 msec for cinematic production applications.
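  Collected as a simple settings table, using the delay values given in the text (the keys are illustrative names, not terms from the patent):

      # Typical Dimension Control (WARP) delay settings, in milliseconds
      WARP_DELAY_MS = {
          "audio reproduction": 0.0,
          "broadcasting": 1.0,
          "mechanical record cutting": (4.0, 6.0),   # range
          "cinematic production": 8.0,               # up to this value
      }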
  • the manual mode of operation of the present invention will be very important for the recording industry and for the production of theater, concerts and the like, that is, in those applications in which large multichannel sound mixing panels are currently used.
  • Such audio equipment usually has a reasonable number of audio inputs, or audio channels, each of which is essentially mono.
  • the sound recording engineer has control of not only the levels of each one of the channels but also, in the prior art, uses a pan control to control how much of the mono signal coming into the sound board goes into the left channel and how much goes into the right channel. Additionally, the engineer can control the amount of the signal going to the rear left and rear right channels on a quad bus audio board.
  • the pan control of the prior art permits a sound source point image 32 to be located anywhere on the line between the left and right speakers 12 and 14 depending on the position of the pan control.
  • stereo recording was a large improvement over the mono recordings of forty years ago.
  • the present invention provides audio engineers with such capabilities.
  • the audio engineer will be provided with a joystick by which he or she will be able to move the sound image both left and right and front and back at the same time.
  • the joystick can be kept in a given position during the course of an audio recording session, a theatrical or concert production, or alternatively, the position of the joystick can be changed during such recording sessions or performances. That is to say, the image position of the sound can be moved with respect to a listener 16 to the left and right and forward and back, as desired.
  • the effective position of the joystick can be controlled by a MIDI interface.
  • the present invention will likely be packaged as an add-on device which can be used with conventional audio mixing boards. In the future, however, the present invention will likely find its way into the audio mixing board itself, the joystick controls (discussed above) being substituted for the linear pan control of present technology audio mixing boards.
  • Figure 11 shows the outward configuration of audio components using the present invention which can be used with conventional audio mixing boards known today.
  • the device has twenty-four mono inputs and twenty-four joysticks, one for each input.
  • the equipment comprises a control box 400 and a number of joystick boxes 410 which are coupled to the control box 400 by a data line 420.
  • the joystick box 410 (shown in Figure 11) has eight joysticks associated with it and is arranged so that it can be daisy-chained with other joystick boxes 410, coupling with the control box 400 by data cable 420 in a serial fashion.
  • the joystick box 410 could have all twenty-four joysticks, one for each channel, and, moreover, the number of joysticks and channels can be varied as a matter of design choice. At present it is preferred to package the invention as shown, with eight joysticks in one joystick box 410. In due time, however, it is believed that this invention will work its way into the audio console itself, wherein the joysticks will replace the panning controls presently found on audio consoles.
  • This embodiment of the invention has enhanced processed, left and right outputs 430 and 432 wherein all the inputs have been processed left and right, front and back, according to the position of the respective joysticks 415. These outputs can be seen on control box 400. Unprocessed outputs are also preferably provided in the form of a direct left 434, a direct right 436, a direct front 438 and a direct back 440 output, which are useful in some applications where the mixing panel is used downstream of the control box, and the audio engineer then has the ability to mix processed left and/or right outputs, with unprocessed outputs, when desired.
  • Figures 12(a)-12(d) form a schematic diagram of the invention, particularly as it may be used with respect to the joystick embodiment.
  • In Figures 12(a)-(d), twenty-four inputs are shown at numerals 405-1 through 405-24.
  • Each input 405 is coupled to an input control circuit 404, each associated with an input 405. Since, in this embodiment, there are twenty-four inputs 405, there are twenty-four input control circuits 404-1 through 404-24. However, only one of them, namely 404-1, is shown in detail, it being understood that the others, namely 404-2 through 404-24, are preferably of the same design as 404-1.
  • the input control circuitry 404 is responsive to the position of its associated joystick for the purpose of distributing the incoming signal at input 405, on to bus 408.
  • Each joystick provides conventional X and Y DC voltage signals indicative of the position of the joystick, which signals are converted to digital data, the data being used to address six look-up tables, a look-up table being associated with each of the voltage controlled amplifiers (VCA's) 407 which comprise an input circuit 404.
  • the value in the table for particular X and Y coordinates of the joystick indicates the gain of its associated VCA 407.
  • the digital output of the look-up table is converted to an analog signal for its associated VCA 407.
  • Each VCA 407 has a gain between unity and zero, depending on the value of the analog control voltage signal.
  • The currently preferred values in the look-up tables are tabulated in Tables G-X.
  • the data in Tables G-X correspond to the action of VCA's 407L, 407F, 407R, 407BL, 407M, and 407BR, assuming that the position of the joystick is resolved to 5 bits in its x-axis and to 5 bits in its y-axis.
  • the position of the joystick can be resolved to one of 32 positions along an x-axis and to one of 32 positions along a y-axis.
  • each Table has 32 by 32 entries, corresponding to the possible position of the joystick.
  • the x and y position information is preferably resolved to greater precision (for example, to 320 x 320) and the data points are interpolated for those x and y coordinate positions between the data points set forth in the Tables.
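  The interpolation mentioned here can be sketched as a bilinear look-up into a 32 x 32 gain table. The table contents themselves are given in Tables G-X and are not reproduced; the dummy table below is only for demonstration.

      import numpy as np

      def vca_gain(table, x, y):
          """Bilinear interpolation into a 32x32 joystick gain table;
          x and y are joystick coordinates in the range 0.0 .. 1.0."""
          table = np.asarray(table, dtype=float)
          assert table.shape == (32, 32)
          fx, fy = x * 31.0, y * 31.0
          x0, y0 = int(np.floor(fx)), int(np.floor(fy))
          x1, y1 = min(x0 + 1, 31), min(y0 + 1, 31)
          tx, ty = fx - x0, fy - y0
          top = (1.0 - tx) * table[y0, x0] + tx * table[y0, x1]
          bottom = (1.0 - tx) * table[y1, x0] + tx * table[y1, x1]
          return (1.0 - ty) * top + ty * bottom

      # demonstration with a dummy table whose gain falls off toward one corner
      dummy = np.linspace(1.0, 0.0, 32 * 32).reshape(32, 32)
      print(vca_gain(dummy, 0.0, 0.0), vca_gain(dummy, 1.0, 1.0))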
  • the joysticks can move the sound source to the front, back, to the right or left by moving the joystick to a corresponding position.
  • Tables A-F, which are graphically depicted in Figures 13(a)-(f), are conceptually similar, but the full left and full right positions are in the left and right quadrants of the joystick.
  • Tables and Figures show the percentage of the signal input at an input 405 which finds its way onto the various lines comprising bus 408, where the various signals on each line of the bus from different input circuits 404 are summed together.
  • the Tables G-X show the percentages for various positions of a joystick 415 as it is moved left and right and front and rear.
  • Table G which is associated with VCA 407L, indicates that VCA 407L outputs 100% of the inputted signal when its associated joystick is moved to the position maximum left and maximum front.
  • the outputted signal from VCA 407L drops to under 20% of the inputted signal when the joystick is moved to its maximum right, maximum back (or rear) position.
  • Other positions of the joystick cause VCA 407L to output the indicated percentage of the inputted signal at 405.
  • VCA 407L receives a control voltage input VC X -L for controlling the amount of the input signal at 405 which finds its way onto bus 408L.
  • VCA 407R controls the amount of input signal at 405 which finds its way onto line 408R.
  • the voltage controlled amplifiers 407 in the remaining input circuits 404-2 through 404-24 are also coupled to bus 408 in a similar fashion and, thus, the currents supplied by the voltage controlled amplifiers 407 are summed onto that bus structure.
  • the various input signals 405-1 through 405-24 are steered, or mapped, onto the appropriate line of bus 408 depending upon the position of the respective joysticks 415-1 through 415-24.
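  Conceptually, the VCAs and bus of Figure 12 implement a gain matrix: each input is scaled by six joystick-derived gains and the scaled copies are summed per bus line. A compact sketch, with the bus-line names following the figure labels and everything else illustrative:

      import numpy as np

      BUS_LINES = ["L", "F", "R", "BL", "M", "BR"]   # one line per VCA 407x

      def mix_to_bus(inputs, gains):
          """inputs: (num_channels, num_samples) array of input signals.
          gains:  (num_channels, 6) array, one row of VCA gains per input.
          Returns the six summed bus signals as a (6, num_samples) array."""
          inputs = np.asarray(inputs, dtype=float)
          gains = np.asarray(gains, dtype=float)
          return gains.T @ inputs   # summing each VCA output onto its bus line

      # usage: 24 inputs of 1000 samples each, with random gains
      bus = mix_to_bus(np.random.randn(24, 1000), np.random.rand(24, 6))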
  • the signals on the lines comprising bus 408 are then converted back into voltages by summing amplifiers 409, each of which is identified by a subscript letter or letters corresponding to the line of the bus with which it is coupled.
  • the outputs of summing amplifiers 409L, 409R, and 409F are applied directly to three of the four direct outputs, 434, 436 and 438, respectively.
  • the direct back output 440 is summed from the output of the summing amplifiers 409CDL, 409CDR, 409EL and 409ER.
  • Tables G-X are the preferred tables when the invention is used for cinema production.
  • each input 405 is controllable from a controller 410. See, for example, Figure 11 where for each joystick 415 there is a mode switch 411 which can be repeatedly pushed to change from mode C, to mode D, to mode EL, to mode ER, and then back to mode C.
  • switches 409L and 409R are in the position shown in Figure 12(c).
  • Switch 409L changes position when in mode EL
  • switch 409R changes position when in mode ER
  • Light emitting diodes (LED's) 412L and 412R may both be amber while in mode C and may both be green while in mode D; while in mode EL the left LED (412L) would preferably be red while the right LED (412R) would be off, with the opposite convention in mode ER.
  • Mode C is preferably used for live microphone array recording of instruments, groups, ensembles, choirs, sound effects and natural sounds, where a microphone array can be placed at the locations shown in Figure 5.
  • Mode D is a directional mode which places a mono-source at any desired location within the listener's conceptual image space, shown in Figure 4, for example. Applications of mode D are in multi-track mix-down, commercial program production, dialogue and sound effects, and concert sound reinforcement.
  • Mode E expands a stereo source and, therefore, each input is associated with either a left channel (mode EL), or a right channel (mode ER) of the stereo source.
  • This mode can be used to simulate stereo from mono-sources and allows placement within the listener's conceptual image space, as previously discussed. Its applications are the same as for mode D.
  • the output from summing amplifiers 409CDL and 409CDR correspond to the back left and back right signals for the C and D-modes.
  • the signals are applied to a stereo analog-to-digital converter 412CD which multiplexes its output onto line 414CD.
  • stereo analog-to-digital converter 412E takes the E-mode back left and E-mode back right analog data, and converts it to multiplexed digital data on line 414E.
  • the digital data on lines 414CD and 414E are applied to digital sound processors (DSP's) 450 which will be subsequently described with reference to Figures 14(a) and (b).
  • the audio processors may be identical, and may receive external data for the purpose of determining whether they operate in the C-mode, D-mode or E-mode, as will be described.
  • The programming of the digital sound processor (DSP) 450 can be done at a mask level, or the DSP can be programmed in a manner known in the art by a microprocessor attached to a port on DSP 450, which microprocessor then downloads data stored in EPROM's or ROM's into the DSP 450 during initial power-up of the apparatus.
  • the current preference is to use model 56001 DSP's manufactured by Motorola.
  • the programming emulates the digital logic shown in Figures 14(a) and (b).
  • the outputs from the DSP 450 chips are again converted back to analog signals by stereo digital to analog converters 418CD and 418E.
  • the outputs of stereo digital to analog converters 418CD and 418E are summed along with outputs from the mono compatibility channel, the front channel 409F, the right channel 409R and the left channel 409L, through summing resistors 419, before being applied to summing amplifiers 425L and 425R and thence to processed stereo outputs 430 and 432.
  • the summing resistors 419 all preferably have the same value.
  • the mono compatibility signal from summing amplifier 409M is applied to a low- and high-pass equalization circuit which preferably has a low Q, typically a Q on the order of 0.2 or 0.3, centered around 1,700 Hz.
  • Equalization circuit 422 typically has a 6 dB loss at 1,700 Hz (a minimal sketch of such a dip equalization appears after this list).
  • The processed directional enhancement information (i.e., the conditioning signal C) is band-pass filtered by filters 456 and 457, for example, so that it peaks in the mid-range. If the enhanced left and right signals are summed together to form an L+R mono signal, this can show up as a notch in the spectrum in that mid-range area.
  • The mono compatibility signal, which has a notch that is the antithesis of the mid-range notch, is therefore preferably used; in effect, it balances the output spectrum of an L+R mono signal.
  • the stereo A to D converters 412 are Crystal Semiconductor model CS5328 chips
  • the stereo D to A converters 418 are Crystal Semiconductor model CS4328 chips and, therefore, formatters 451 and 470 would be set up to de-multiplex and multiplex the left and right digital channel information in a manner appropriate for those chips.
  • the left and right digital data is separated onto buses 452 and 453, and is communicated to, for example, a subtractor 454, to produce a R-L signal.
  • the R-L signal passes through the low pass and high pass filters 456 and 457 and the time delay circuit 458, when the circuit is connected in the E-mode as depicted by switch 455 (which is controlled by an E-mode control signal).
  • Otherwise, switches 455 take the other position shown in the schematic diagram and, therefore, the left channel digital data on line 452 is passed through the top set of high pass and low pass filters 456 and 457 and the time delay circuit 458, while the right channel digital data on line 453 is directed through the lower set of high pass and low pass filters 456 and 457 and time delay circuit 458.
  • There is no need to control the amplitude of the signal from the time delay circuits 458, as was done in the embodiment of Figures 10(a) and (b), because the amplitude of the signals is being controlled at the input control circuits 404 of Figures 12(a) and (c) and the amount of processing is being controlled at the input by the position of joysticks 415 (see Figure 11).
  • the outputs of time delay circuits 458 are applied to respective left and right channel equalization circuits 460.
  • the output of the left equalization circuit 460L is applied via a switch 462 to an input of formatter 470.
  • the output of the right equalization circuit 460R is applied via a switch 462 and an invertor 465 to an input of formatter 470.
  • formatter 470 multiplexes the signals received at its inputs onto serial output line 416.
  • Time delay circuits 458 preferably add a time delay of 0.2 millisecond. It is to be appreciated that the DSP's 450 and their associated A to D and D to A converters 412E, 412CD, 418E and 418CD have inherent delays of about 0.8 millisecond. Thus, the inherent delay of the circuit devices and the added delay in time delay circuits 458 total about 1 millisecond compared to the left and right analog signals from amplifiers 409L and 409R.
  • Switches 462 are shown in the C-mode position, which has been previously described. When in the D-mode or the E-mode, the switches 462 change position so as to communicate the outputs of the equalizers 460 to the formatter and invertor 465, as opposed to communicating the unfiltered signals on lines 452 and 453 which is done when in the C-mode.
  • the inversion which occurs in the right channel information by invertor 465 is effectively done by subtractor 332 in the embodiment of Figures 10(a) and (b). It is to be recalled that subtractor 332 subtracts the right channel conditioning information C R (from equalizer 312), from the right channel audio data. In the embodiment of Figures 14(a) and (b), the right channel conditioning signal is inverted by invertor 465.
  • the left channel conditioning signal C L is communicated, without inversion, via formatter 470 and the stereo digital to analog converter 418 (see Figure 12 (d)) onto a summing bus where it is summed through resistors 419, along with the left channel information from summing amp 409L, into an input of summing amp 425L.
  • the invention has been described with respect to both analog and digital implementations, and with respect to several modes of operation.
  • The broadcast mode, mode B, uses a feedback loop to control the amount of processing being added to stereo signals.
  • the amount of processing being added is controlled manually.
  • the amount of processing is input controlled by joystick.
  • the conditioning signal, which is added to and subtracted from the left and right channel data, undergoes little or no processing. Indeed, no processing is required if the conditioning signal is organically produced by the location of microphone "C" in Figure 5.
  • the conditioning signal bypasses the high pass/low pass filters and the time delay circuitry.
  • the conditioning signals are synthesized by the high pass/low pass filter and, preferably, the time delay.
  • In the E-mode it is an R-L signal which is subjected to filtering.
  • the left and right signals are independently subjected to filtering, for the purpose of generating the conditioning signal C.
  • The time delay is controllable. Indeed, some practicing the instant invention may do away with time delay altogether. However, time delay is preferably inserted to de-correlate the signal exiting the filters from the left and right channel information to help ensure mono compatibility. Unfortunately, comb filtering effects can be encountered, but these seem to be subjectively minimized by filters 456, 457, and 460. In order to minimize such effects in the B, D and E-modes of operation, it is preferred to use a time delay circuit such as 328 (Figures 10(a) and (b)) or 458 (Figures 14(a) and (b)). In the organic mode (Figure 6), the time delay is organically present due to the placement of microphone "C" further from the sound source than microphones "L" and "R".
  • the present invention can be used to add spatial effects to sound for the purposes of recording, broadcasting, or public performance. If the spatial effects of the invention are used, for example, in audio processing at the time of mixing down a multi-track recording to stereo for the release of tapes, records or digital discs, then when the tape, record or digital disc is played back on conventional stereo equipment, the enhanced spatial effects will be perceived by the listener. Thus, there is no need for additional spatial treatment of the sound after it has been broadcast or after it has been mixed down and recorded for public distribution on tapes, records, digital discs, etc. That is to say, there is no need for the addition of spatial effects at the receiving apparatus or on a home stereo set. The spatial effects will be perceived by the listener whether they are broadcast or heard from a prerecorded tape, record or digital disc, so long as the present invention was used in the mixdown or in the broadcast process.
  • the present invention while adding spatial expansion to the stereo signals, does not induce artifacts of the process in the L+R signal.
  • Digital delay devices can delay any frequency for any time length.
  • Linear digital delays delay all frequencies by the same duration.
  • Group digital delays can delay different groups of frequencies by different durations.
  • the present invention preferably uses linear digital delay devices because the effect works using those devices and because they are less expensive than are group devices.
  • group devices may be used, if desired.
  • Figure 15 is a schematic diagram of a sixth embodiment of the invention, which embodiment can be relatively easily implemented using a single semiconductor chip and which may be used in consumer quality electronics equipment, including stereo reproduction devices, television receivers, stereo radios and personal computers, for example, to enhance stereophonically recorded or broadcast music and sounds.
  • the circuit 500 has two inputs, 501 and 502 for the left and right audio channels found within a typical consumer quality electronic apparatus.
  • the signals at input 501 are communicated to two operational amplifiers, namely amplifiers 504 and 505.
  • the signals at input 502 are communicated to two operational amplifiers, namely 504 and 506.
  • the left and right channels are subtracted from each other at amplifier 504 which produces an output L-R. That output is communicated to a potentiometer 503 which communicates a portion (depending upon the position of potentiometer 503) of the L-R signal back through a band pass filter 507 formed by a conventional capacitor and resistor network.
  • Filter 507 in addition to band-passing the output from amplifier 504, also adds some frequency dependent time-delay (phase delay) to that signal, which is subsequently applied to an input of amplifier 508.
  • the output of amplifier 508 is the conditioning signal, C, which appears on line 509.
  • the conditioning signal, C is added to the left channel information at amplifier 505 and is subtracted from the right channel information at amplifier 506 and thence output as spatially enhanced left and right audio channels 511 and 512, respectively.
  • Filter 507 preferably has a center frequency of 500 Hz with 6 dB/octave slopes. As previously mentioned, it has been determined that the center frequency can fall within the range of about 300 Hz to 3,000 Hz (a minimal sketch of this L-R conditioning path appears after this list).
  • Outputs 511 and 512 may then be conveyed to the inputs of the power amplifier of the consumer quality audio apparatus and thence to loudspeakers, in the usual fashion.
  • the listener controls the amount of enhancement added by adjusting potentiometer 503. If the wiper of potentiometer 503 is put to the ground side, then the stereo audio program will be heard with its usual un-enhanced sound. However, as the wiper of potentiometer 503 is adjusted to communicate more and more of the L-R signal to the band pass filter 507, more and more additional spatially processed stereo is perceived by the listener.
  • If the listener happens to be watching a sporting contest on television which is broadcast with stereo sound, then by adjusting potentiometer 503 the listener will start to perceive that he or she is actually sitting in the stadium where the sporting contest is occurring, due to the additional spatial effects which are perceived and interpreted by the listener.
  • The circuitry of Figure 15 is shown with essentially discrete components, with the exception of the amplifiers, which are preferably National Semiconductor model LM837 devices. However, those skilled in the art will appreciate that all (or most) of the circuit 500 can be reduced to a single silicon chip, if desired. Those skilled in the art will also appreciate that capacitors C1 and C2 in band pass filter 507 will tend to be rather large if implemented on the chip and, therefore, it may be desirable to provide appropriate pin-outs from the chip for those devices and to use discrete devices for capacitors C1 and C2. That is basically a matter of design choice.
  • the sixth embodiment of the invention which was described with reference to Figure 15, may be used in consumer quality electronics equipment.
  • the present invention can also be used professionally when recording music (and other audio material) and/or when broadcasting music (and other audio material).
  • the present invention can be used to increase the spatial image of music (or other recorded material) before or after being recorded on disc, or before or after being transmitted by a broadcaster, or just before being heard by a listener. It is preferable, however, that when music or other sounds are spatially enhanced in accordance with the present invention, the material not be overly enhanced.
  • Previously described embodiments of the present invention include an automatic control circuit 118 which regulates the amount of the conditioning signal generated in order to keep the energy ratio between the sum and difference of the two spatially enhanced outputs substantially equal.
  • the second and third embodiments of the invention include control circuit 118 which effectively controls the amount of expansion which occurs.
  • the seventh embodiment of the invention is a modified version of the sixth embodiment and includes an automatic control circuit 518 for automatically controlling the amount of spatial expansion which occurs.
  • the seventh embodiment is described with reference to Figures 16, 17A and 17B and is also intended to be used in consumer quality electronics equipment as is the case with the sixth embodiment.
  • the seventh embodiment includes the automatic control circuit 518 to limit the amount of spatial energy added by the circuit of this seventh embodiment.
  • Figure 16 is a block diagram and Figures 17A and 17B may be joined together to form a schematic diagram.
  • the seventh embodiment is quite similar to the sixth embodiment, and, therefore, common reference numerals are used for common elements. Indeed, the biggest change is the addition of the aforementioned automatic control circuit 518 which controls the amount of spatial enhancement which the circuit generates. Another change is the provision of a stereo synthesis mode of operation. If the music or other audio material occurring at the inputs 501 and 502 already has a high degree of spatial energy because the music or other audio material has previously been processed in accordance with the present invention before it was received by the circuit of Figures 16, 17A and 17B, then it is not desirable to add further spatial enhancement in this circuit.
  • the control system 518 of the present invention acts to control the amount of the spatial enhancement added. If the incoming stereo music or sound is already spatially enhanced, little or no additional spatial enhancement is provided. If the incoming music or sounds have not been previously spatially enhanced, then the control system permits the spatial enhancement to occur. If the incoming material is monaural, the stereo sounds may be synthesized.
  • control system 518 of the present embodiment is conceptually similar to the automatic control circuitry 118 previously described with reference to Figures 7A, 7B and 8, but it is nevertheless described here with respect to this presently preferred consumer electronics embodiment of the invention.
  • In the sixth embodiment, the amount of spatial enhancement which occurs is controlled by using potentiometer 503, which controls the amplitude of the conditioning signal, C, which appears on line 509.
  • In the seventh embodiment, the magnitude of the conditioning signal C is instead controlled by a voltage controlled amplifier 503' which is responsive to a control input on line 510.
  • the voltage controlled amplifier 503' is preferably a model 2151 device sold by That Corporation or its equivalent.
  • the conditioning signal is output on the line 560, which output can be utilized to drive an ambience or surround speaker, often located to the rear of the listener.
  • the outputs on lines 511 and 512 are sampled and added in circuitry 522 and subtracted in circuitry 520 to form sum and difference signals on lines 530 and 528, respectively. These sum and difference signals are applied to inputs of RMS detectors 524 and 526.
  • the RMS detectors are preferably model 2252 devices currently manufactured by That Corporation.
  • the output of the RMS detectors are applied to a comparator 550 whose output is coupled to a current source 551 ( Figure 16).
  • the output of the current source is applied via a diode 552 to an amplifier 554.
  • the current source 551 and amplifier 550 are provided by a single device which is called an operational transconductance amplifier 550, 551, which serves as a current source. When connected as shown in Figures 17A and 17B, it can source up to 10 microamps of current.
  • Potentiometer 534 whose wiper is connected to one side of a resistor-capacitor network 553 allows the user to control the amount of spatial enhancement which they desire the circuit to produce.
  • Resistor-capacitor network 553 controls the rate at which the spatial enhancement can be changed by the output of current generator 551.
  • the current flowing through diode 552 alters the voltage across resistor-capacitor network 553 which is fed to amplifier 554.
  • the automatic control circuit 518 functions to limit the amount of spatial energy which the circuit can add to the signals on the left and right channels, that is to say, it does not allow the user to overly spatially enhance the music material. The user can, however, use less spatial enhancement, if they so desire, by adjusting potentiometer 534.
  • the output from the resistor-capacitor network 553 is applied via a switch 581 to a high impedance buffer amplifier 554 whose output goes to the control input of the voltage controlled amplifier 503'.
  • Resistor-capacitor network 553, in combination with the current source 551, controls the ballistics of the circuitry, that is, the number of decibels per second of change invoked through voltage controlled amplifier 503' (a minimal sketch of this sum/difference feedback loop appears after this list).
  • the present invention can also be used to synthesize stereo when the music or other sounds inputted at inputs 501 and 502 is monaural material.
  • a signal on line 580 causes switches 581, 582 and 583 to change position from that shown in the drawing.
  • the input to buffer amplifier 554 is then a bias voltage which is preferably provided by a voltage divider network 584.
  • differential amplifier 504 has one of its inputs grounded via switch 582 and the other input continues to receive monaural information, which is assumed to be applied to both inputs 501 and 502.
  • One input of differential amplifier 504 is shown as being grounded via switch 582, but it could be the other input, if desired. Additionally, switch 583 adjusts the gain of amplifier 502 in order to keep the output channels in subjective balance in the stereo synthesis mode.
  • switch 530 mutes or turns off the conditioning signal, so that no spatial enhancement is added by circuitry 518 when switch 530 is opened.
  • Switch 530 can be a manual switch as shown, or it can be an electrically operated switch, such as a transistor.
  • the signal on line 580 might be controlled by the stereo detection circuitry of a conventional radio, for example, to change the positions of switches 581, 582 and 583, when no stereo signal is present, to cause stereo to be synthesized.
  • With monaural material, this circuit should desirably not try to spatially enhance the signal in the manner done with a stereo signal (i.e., without changing the signal on line 580), since the circuitry would then be enhancing the difference information, which, in terms of monaural information, is noise.
  • Capacitors 516 and 517 form a high pass filter that limits the action of the automatic control circuit 518 to those frequencies above the extremely low bass, i.e., above 100 Hz. This is desirable because no significant spatial information exists below 100 Hz.
  • the band pass filter 507 comprises relatively large capacitors which are preferably implemented off the chip, given their sizes.
  • Band pass filter 507 is preferably centered at 500 Hz.
  • Capacitors 516 and 517, which couple outputs 511 and 512 to the differencing 520 and adding 522 circuit arrangements, are also preferably implemented off-chip given their size.
  • RMS detectors 524 and 526 have some relatively large capacitors associated with them which are similarly preferably implemented off-chip.
  • the distortion trim pot 585 for VCA 503' is also implemented off-chip.
  • a large number of the components can be implemented on a single chip, and therefore, this embodiment of the present invention can be implemented very conveniently and inexpensively for consumer grade stereo audio equipment.
  • the embodiment of Figure 15 (or of Figures 16, 17A and 17B) can be used to enhance monaural information if it is (or they are) modified as shown in Figure 18.
  • potentiometers 808 and 810 have been connected at the outputs 812 and 828 (which correspond to outputs 511 and 512 in the environment of Figure 15). These potentiometers are also ganged for counter operation, as explained above with reference to potentiometers 804 and 806.
  • the action of the manipulation apparatus and method depends upon the existence of a difference between the input signals applied at inputs 822 and 824 (which correspond to +L and +R on Figure 15).
  • the difference may be spectral and/or temporal.
  • By spectral differences it is meant that the distribution of energy over the sound spectrum differs between the left and right signals.
  • By temporal differences it is meant that the synchronization of the two input signals may be offset in time with respect to the period of each signal.
  • the conditioning signal appears, for example, on line 509 of the embodiment of Figure 15.
  • the conditioning signal can cause the sound emanating from two loudspeakers, located to the left and right front of the listener and coupled via amplifiers to outputs 818 and 820, to actually appear, to the listener, to originate from behind the listener, or at other locations in the listener's audio space, depending upon the content of the conditioning signal, C.
  • A monophonic signal, when applied to the left and right channels, has equal spectral and temporal quality, even under broad-band conditions.
  • both inputs 822 and 824 receive the same signal.
  • the enhancement effect of the described manipulation system is at its minimum and approaches zero, provided that the resistance added by potentiometers 804 and 806 is the same in both the left and right inputs.
  • a broadband spectral difference is realized by changing the value of potentiometer 804 compared to 806.
  • potentiometers 804 and 806 are adjusted so as to provide a larger signal at input 822 and a smaller signal at input 824.
  • the listener will hear (i) a louder signal from the left loudspeaker and a softer signal from the right loudspeaker, by virtue of that part of the circuit that directly routes input signals to the outputs, and will simultaneously hear (ii) a sound at the antiphasic position produced by the left and right loudspeakers, by virtue of that part of the circuit that routes half of the conditioning signal, C, inverted, to the right loudspeaker.
  • Counter rotating potentiometers 804 and 806 will move the virtual image, described above, along an arc or semicircle extending from the physical location of the left loudspeaker to the back of the listener's head.
  • If potentiometers 804 and 806 of circuit 800 are counter rotated in the opposite direction, so that the right signal is greater than the left signal at inputs 822 and 824, the inverse of the situation described above will occur.
  • the circuit of Figure 18 can be said to be symmetrical in its broad-band spectral imbalanced imaging abilities.
  • a monophonic (i.e., monaural) signal is supplied to inputs 814 and 816 at equal intensity.
  • potentiometer 804 is adjusted to be fully OFF and potentiometer 806 is adjusted to be fully ON.
  • the signal at input 822 will be routed directly to output 826 and reproduced over the left loudspeaker.
  • a conditioning signal, C will be generated by the circuitry and will be added to the left output signal 826 and inverted at the right output 828. Since no right signal is present at input 824 no output signal will be directly routed through to it. Only the inverted conditioning signal will be present at output 828.
  • the left loudspeaker will reproduce the left input signal and half of the conditioning signal while the right loudspeaker will reproduce the other half of the conditioning signal, which, by definition, is inverted relative to the left signal.
  • the listener will hear the sound imaged at a point approximately 140 degrees from a center point (straight forward) of zero degrees.
  • Assume that the equalizer connected to input 814 is peaked at 1000 Hz and that the equalizer connected to input 816 is flat or non-peaked.
  • With the conditioning signal control potentiometer 812 set at a normal level, and loudspeakers connected to the outputs 818 and 820 via amplifiers, a listener positioned between the two loudspeakers will hear all broad-band frequencies at a mid-point between the two loudspeakers, with frequencies in the 1000 Hz band imaging beyond the left loudspeaker at approximately 100 degrees left from center.
  • Input 814 receives the peaked signal and input 816 the non-peaked signal, both originating as broad-band noise. Since the difference between the two inputs occurs only in the 1000 Hz range, the side-chain of the circuit (which side chain generates the conditioning signal, C), controlled by potentiometer 812, will contain only this narrow band of frequencies clustered about the 1000 Hz band.
  • Output 818 will contain the peaked signal as it is passed through the circuit by its internal circuitry.
  • Output 820 will contain both the inverted signal from the peaked equalizer and the non-inverted signal directly from input 816 as routed through the circuit. Note that the peaked frequency band is made to produce a signal at outputs 818 and 820 of equal intensity but opposite polarity.
  • the circuit 800 can be said to be symmetrical with respect to its differences due to selective spectral imbalance.
  • the signals at inputs 822 and 824 are also routed directly to outputs 826 and 828, respectively, as shown in Figure 15.
  • A broad-band temporal difference, realized by delaying signals presented at input 822 and not at input 824, results not only in a conditioning signal, C, at 812 being presented at outputs 826 and 828 with equal intensity and opposite polarity, but also in a difference in time of the signal at outputs 826 and 828 proportional to the time difference presented at inputs 822 and 824.
  • circuit of Figure 18 can be said to be symmetrical in its broad-band temporal imbalanced imaging abilities.
  • Each equalizer may be adjusted so as to send peaked portions of the sound spectrum to a delay; after the delay, that signal is mixed back into the original broad-band white noise signal through a second equalizer set with a dip at 1000 Hz, in opposition to the first equalizer's peak, but adjusted so as to produce an equal intensity across the sound spectrum, except that the portion of the spectrum at 1000 Hz is delayed with respect to the other portions of the spectrum.
  • equalizer-delay-equalizer arrangement can be said to provide a selective temporal imbalance.
  • The equalizer-delay-equalizer arrangement described above is connected to inputs 814 and 816.
  • Assume that the equalizer-delay-equalizer connected to input 814 is peaked at 1000 Hz and that the equalizer-delay-equalizer connected to input 816 is flat or non-peaked.
  • With the conditioning signal control potentiometer 812 set at a normal level, and loudspeakers connected to the outputs 818 and 820 (via amplifiers), a listener positioned between the two loudspeakers will hear all broad-band frequencies at a mid-point between the two loudspeakers, with frequencies in the 1000 Hz band imaging beyond the right loudspeaker at approximately 100 degrees right from center.
  • Input 814 receives broad-band white noise with the 1000 Hz band delayed, and input 816 receives the non-delayed broad-band white noise. Since the time difference between the two inputs occurs only in the 1000 Hz range, the side-chain of 802, controlled by potentiometer 812, will contain only this narrow band of frequencies clustered about the 1000 Hz band.
  • Output 818 will contain the delayed signal as it is passed through the circuit.
  • Output 820 will contain both the inverted signal from the delayed equalizer and the non-inverted signal from input 816 as routed through the circuit. Note that the delayed frequency band is made to produce a signal at outputs 818 and 820 of equal intensity but of opposite polarity.
  • the circuit can be said to be symmetrical with respect to its differences due to selective temporal imbalance.
  • A useful variation of the eighth embodiment is shown in Figure 19, and it comprises a ninth embodiment of the invention. This embodiment involves the use of multiple inputs.
  • Buses 830 and 832 are extensions from inputs 822 and 824 of the circuit of Figures 15 and 18 and said buses accommodate a plurality of panpots.
  • inputs 814A, 814B, 814C, 814D and 814E are all connected to the left side of the respective panpots shown which in turn are connected to bus 830.
  • Inputs 816A, 816B, 816C, 816D and 816E are all connected to the right side of respective panpots shown which in turn are connected to bus 832.
  • the engineer has control over the selective and broad-band spectral and/or temporal content of the various instrumental elements comprising a musical production.
  • the engineer can exhibit a keen degree of control over the image position of those elements; an image position which includes as its field of control that portion of the sound field which extends beyond the physical location of the two loudspeakers and to a point of at least 140 degrees from center in both directions.
  • the engineer can move any element in seamless progression along an arc extending from mid-point between the two loudspeakers to a point of at least 140 degrees from the center of the two loudspeakers to either the left or the right of the two stereo loudspeakers.
  • Any number of monophonic signals may be simultaneously inputted into any number of inputs under the above arrangement, and each signal will be treated by the manipulation system and apparatus as if independent, that is to say, the manipulation of one signal will not influence the treatment of any other inputted signals.
  • Any number of stereo pairs may be simultaneously inputted into any two inputs under the above arrangement, and each stereo signal will be treated by the manipulation system and apparatus as if independent.
  • any number of stereo pairs and mono inputs may be simultaneously inputted into any combination of inputs under the above arrangement, and each input, be it mono or stereo, will be treated by the manipulation system and apparatus as if independent.
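
The joystick look-up-table interpolation mentioned above can be illustrated with a short sketch. This is a minimal illustration, not the patent's implementation: the placeholder gain table and the `vca_gain` helper are assumptions standing in for the preferred Tables G-X (which are not reproduced here); only the 32 x 32 table size, the roughly 100%-to-20% gain range described for Table G, and the bilinear interpolation to a finer joystick grid follow the description.

```python
import numpy as np

# Hypothetical stand-in for Table G (the gain table for VCA 407L): a 32 x 32
# grid of gains indexed by quantized joystick position.  Rows run from full
# front (0) to full back (31), columns from full left (0) to full right (31).
# The placeholder falls from 1.0 (100%) at full left/front to about 0.2 (20%)
# at full right/back, matching the behaviour described for Table G.
gain_table_407L = np.outer(np.linspace(1.0, 0.45, 32), np.linspace(1.0, 0.45, 32))

def vca_gain(table: np.ndarray, x: float, y: float) -> float:
    """Bilinearly interpolate a 32 x 32 gain table at normalized joystick
    coordinates x (left=0 .. right=1) and y (front=0 .. back=1), so the
    joystick can be resolved more finely (e.g. 320 x 320) than the table."""
    fx = x * (table.shape[1] - 1)
    fy = y * (table.shape[0] - 1)
    x0, y0 = int(fx), int(fy)
    x1, y1 = min(x0 + 1, table.shape[1] - 1), min(y0 + 1, table.shape[0] - 1)
    tx, ty = fx - x0, fy - y0
    top = (1 - tx) * table[y0, x0] + tx * table[y0, x1]
    bottom = (1 - tx) * table[y1, x0] + tx * table[y1, x1]
    return (1 - ty) * top + ty * bottom

print(vca_gain(gain_table_407L, 0.0, 0.0))  # full left, full front  -> 1.0
print(vca_gain(gain_table_407L, 1.0, 1.0))  # full right, full back  -> ~0.2
```

Resolving the joystick to, say, 320 x 320 positions then amounts to evaluating the interpolation at the finer normalized coordinates; each of the six VCAs in an input circuit would use its own table.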
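
The 1,700 Hz mono-compatibility equalization can be sketched with a standard "audio EQ cookbook" peaking biquad configured as a dip. This is a minimal sketch under stated assumptions: the biquad topology, the 48 kHz sample rate, and the helper names are not from the patent (which describes an analog circuit 422); only the approximately -6 dB, low-Q (about 0.2-0.3) dip centered at 1,700 Hz follows the text.

```python
import numpy as np

def dip_eq_coeffs(fs: float, f0: float = 1700.0, q: float = 0.25, gain_db: float = -6.0):
    """Peaking-type biquad used as a dip: about -6 dB at f0 with a low Q,
    as described for the mono compatibility channel equalization."""
    a_lin = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1.0 + alpha * a_lin, -2.0 * np.cos(w0), 1.0 - alpha * a_lin])
    a = np.array([1.0 + alpha / a_lin, -2.0 * np.cos(w0), 1.0 - alpha / a_lin])
    return b / a[0], a / a[0]

def biquad(x: np.ndarray, b: np.ndarray, a: np.ndarray) -> np.ndarray:
    """Direct-form I difference equation (a[0] is assumed to be 1)."""
    y = np.zeros_like(x, dtype=float)
    x1 = x2 = y1 = y2 = 0.0
    for n, xn in enumerate(x):
        yn = b[0] * xn + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1, y2, y1 = x1, xn, y1, yn
        y[n] = yn
    return y

fs = 48_000.0                     # assumed sample rate
b, a = dip_eq_coeffs(fs)
mono = np.random.randn(48_000)    # noise standing in for the 409M mono signal
mono_compat = biquad(mono, b, a)  # gently dipped mid-range, summed into both outputs
```

Because the dip is broad (low Q), it shades the whole mid-range rather than carving a narrow notch, which is consistent with the spectrum-balancing role the text ascribes to the mono compatibility channel.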
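
The Figure 15 signal flow reduces to a few lines of arithmetic: a fraction of L-R is band-passed to form the conditioning signal C, which is added to the left output and subtracted from the right. The sketch below is an illustration, not the circuit: the cascaded one-pole sections (6 dB/octave, centered near 500 Hz) merely approximate RC network 507, and the `amount` parameter stands in for potentiometer 503.

```python
import numpy as np

def one_pole_lp(x: np.ndarray, fc: float, fs: float) -> np.ndarray:
    """Simple one-pole (6 dB/octave) low-pass."""
    a = np.exp(-2.0 * np.pi * fc / fs)
    y = np.zeros_like(x)
    state = 0.0
    for n, xn in enumerate(x):
        state = (1.0 - a) * xn + a * state
        y[n] = state
    return y

def one_pole_hp(x: np.ndarray, fc: float, fs: float) -> np.ndarray:
    """Complementary one-pole high-pass."""
    return x - one_pole_lp(x, fc, fs)

def enhance(left, right, amount=0.5, fs=48_000, fc=500.0):
    """amount plays the role of potentiometer 503 (0 = no enhancement)."""
    diff = left - right                                          # L - R at amplifier 504
    c = amount * one_pole_hp(one_pole_lp(diff, fc, fs), fc, fs)  # band-passed conditioning signal C
    return left + c, right - c                                   # enhanced outputs (511 and 512)

# Example: a source panned slightly left, plus uncorrelated ambience.
t = np.arange(48_000) / 48_000.0
src = np.sin(2 * np.pi * 440 * t)
left = 0.8 * src + 0.1 * np.random.randn(t.size)
right = 0.6 * src + 0.1 * np.random.randn(t.size)
out_l, out_r = enhance(left, right, amount=0.7)
```

With `amount` at zero the outputs are the unmodified inputs, mirroring the wiper of potentiometer 503 at the ground side; raising it mixes in more of the processed difference signal, and the filtering also contributes the frequency-dependent phase delay mentioned for filter 507.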
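
The sum/difference feedback idea behind automatic control circuit 518 can be sketched block-wise. Everything concrete here is an assumption made for illustration: the block size, the dB-per-second slew limit standing in for network 553, the starting gain, and the omission of band-pass filter 507; only the overall idea (RMS-detect the sum and difference of the enhanced outputs and limit how much conditioning signal the gain element may pass) follows the description.

```python
import numpy as np

def rms(x: np.ndarray) -> float:
    return float(np.sqrt(np.mean(x * x) + 1e-12))

def auto_enhance(left: np.ndarray, right: np.ndarray, fs: int = 48_000,
                 block: int = 1024, user_limit_db: float = 0.0,
                 max_slew_db_per_s: float = 20.0):
    """Block-wise sketch: the gain applied to the conditioning signal is nudged
    up while the difference energy of the *output* is below its sum energy,
    and nudged down otherwise; the slew limit plays the role of the ballistics
    set by R-C network 553, and the ceiling that of user potentiometer 534."""
    gain_db = -40.0                               # start nearly dry
    step = max_slew_db_per_s * block / fs         # max dB change per block
    out_l, out_r = np.zeros_like(left), np.zeros_like(right)
    for i in range(0, len(left) - block + 1, block):
        l, r = left[i:i + block], right[i:i + block]
        c = 10.0 ** (gain_db / 20.0) * (l - r)    # conditioning signal (band-pass omitted)
        ol, outr = l + c, r - c
        out_l[i:i + block], out_r[i:i + block] = ol, outr
        # RMS detectors on the enhanced outputs' sum and difference
        err = rms(ol + outr) - rms(ol - outr)     # > 0: room for more spatial energy
        gain_db += step if err > 0.0 else -step
        gain_db = float(np.clip(gain_db, -60.0, user_limit_db))
    return out_l, out_r
```

Feeding this loop material that already carries a large difference-channel energy leaves the gain near its floor, which mirrors the text's point that previously enhanced program receives little or no additional spatial enhancement.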

Abstract

The manipulation system and apparatus receive electronic signals which are to be processed into enhanced stereophonic audio signals emanating from two laterally spaced loudspeakers placed in front of the listener, either immediately before recording and/or broadcasting, or after recording, or after broadcast. The system and apparatus process these signals to produce a conditioning signal, such as would be produced by virtual boundaries of the listening room, which is heard together with the original signals so that an enlarged listening area is perceived by the listener. Through amplitude and phase adjustment of the signal reaching the two loudspeakers, the system and apparatus provide a means of controlling the enhanced sound field. This enhanced sound field is perceived by the listener as lying within boundaries wider than those normally reproduced by stereophonic loudspeakers. By creating fictitious boundaries, the system and apparatus generate a conditioning signal that enhances the natural spatial qualities present in stereo signals but generally masked in the acoustic environment of reproduction, while also creating artificial spatial qualities. The apparatus can monitor its own output signal and remove or reduce the effects if the output signal contains qualities that cannot be broadcast. The apparatus provides automatic adjustment of the electronic system to maintain the spatial masking inversion at a constant value regardless of the program's sound material.
PCT/US1993/012688 1990-01-09 1993-12-30 Appareil de manipulation de l'image sonore et procede pour ameliorer cette image sonore WO1994016538A1 (fr)

Priority Applications (6)

Application Number Priority Date Filing Date Title
KR1019950702676A KR960700620A (ko) 1990-01-09 1993-12-30 음향 이미지 향상을 위한 음향 이미지 자동 조작장치 및 방법(sound image manipulation apparatus and method for sound image enhancement)
AT94907123T ATE183050T1 (de) 1992-12-31 1993-12-30 Schallbildverarbeitungsvorrichtung zur schallbildverbesserung
AU60811/94A AU6081194A (en) 1992-12-31 1993-12-30 Sound image manipulation apparatus and method for sound image enhancement
DE69325922T DE69325922D1 (de) 1992-12-31 1993-12-30 Schallbildverarbeitungsvorrichtung zur schallbildverbesserung
JP6516471A JPH08509104A (ja) 1992-12-31 1993-12-30 音像操作装置及び音像エンハンスメント方法
EP94907123A EP0677235B1 (fr) 1992-12-31 1993-12-30 Appareil de manipulation d' une image sonore pour ameliorer cette image sonore

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
USPCT/US92/11335 1992-12-31
PCT/US1992/011335 WO1994016537A1 (fr) 1990-01-09 1992-12-31 Appareil et procede de manipulation stereophonique destines a ameliorer l'image sonore

Publications (1)

Publication Number Publication Date
WO1994016538A1 true WO1994016538A1 (fr) 1994-07-21

Family

ID=22231683

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1993/012688 WO1994016538A1 (fr) 1990-01-09 1993-12-30 Appareil de manipulation de l'image sonore et procede pour ameliorer cette image sonore

Country Status (10)

Country Link
EP (1) EP0677235B1 (fr)
JP (1) JPH08509104A (fr)
CN (1) CN1091889A (fr)
AT (1) ATE183050T1 (fr)
AU (3) AU3427393A (fr)
CA (1) CA2153062A1 (fr)
DE (1) DE69325922D1 (fr)
IL (2) IL104665A (fr)
SG (1) SG70557A1 (fr)
WO (1) WO1994016538A1 (fr)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100206333B1 (ko) * 1996-10-08 1999-07-01 윤종용 두개의 스피커를 이용한 멀티채널 오디오 재생장치및 방법
US6937737B2 (en) * 2003-10-27 2005-08-30 Britannia Investment Corporation Multi-channel audio surround sound from front located loudspeakers
CN107071657A (zh) * 2005-01-13 2017-08-18 环境医疗有限责任公司 环境疗法记录与回放系统和记录与回放治疗音频的方法
EP1801786B1 (fr) * 2005-12-20 2014-12-10 Oticon A/S Système audio avec délai temporel variable et procédé de traitement des signaux audio.
US8045717B2 (en) * 2006-04-13 2011-10-25 Media Tek Inc. Stereo decoder and method for processing pilot signal
CN105407443B (zh) 2015-10-29 2018-02-13 小米科技有限责任公司 录音方法及装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4630298A (en) * 1985-05-30 1986-12-16 Polk Matthew S Method and apparatus for reproducing sound having a realistic ambient field and acoustic image
EP0479395A2 (fr) * 1986-03-27 1992-04-08 SRS LABS, Inc. Système de rehaussement d'effet stéréo
US5056149A (en) * 1987-03-10 1991-10-08 Broadie Richard G Monaural to stereophonic sound translation process and apparatus
US5040219A (en) * 1988-11-05 1991-08-13 Mitsubishi Denki Kabushiki Kaisha Sound reproducing apparatus
US5153362A (en) * 1989-10-04 1992-10-06 Yamaha Corporation Electronic musical instrument having pan control function

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6091894A (en) * 1995-12-15 2000-07-18 Kabushiki Kaisha Kawai Gakki Seisakusho Virtual sound source positioning apparatus
US5995631A (en) * 1996-07-23 1999-11-30 Kabushiki Kaisha Kawai Gakki Seisakusho Sound image localization apparatus, stereophonic sound image enhancement apparatus, and sound image control system
WO1998020709A1 (fr) * 1996-11-07 1998-05-14 Srs Labs, Inc. Systeme d'amplification acoustique a canaux multiples pouvant etre utilise pour l'enregistrement et la lecture et procedes de mise en oeuvre dudit systeme
US6507657B1 (en) 1997-05-20 2003-01-14 Kabushiki Kaisha Kawai Gakki Seisakusho Stereophonic sound image enhancement apparatus and stereophonic sound image enhancement method
WO1999020006A3 (fr) * 1997-10-14 1999-08-26 Crystal Semiconductor Corp Circuits audio monopuce, procedes et systemes utilisant lesdits circuits
WO1999020006A2 (fr) * 1997-10-14 1999-04-22 Crystal Semiconductor Corp. Circuits audio monopuce, procedes et systemes utilisant lesdits circuits
US6314330B1 (en) 1997-10-14 2001-11-06 Cirrus Logic, Inc. Single-chip audio system power reduction circuitry and methods
US6373954B1 (en) 1997-10-14 2002-04-16 Cirrus Logic, Inc. Single-chip audio circuitry, method, and systems using the same
US6405093B1 (en) 1997-10-14 2002-06-11 Cirrus Logic, Inc. Signal amplitude control circuitry and methods
US6628999B1 (en) 1997-10-14 2003-09-30 Cirrus Logic, Inc. Single-chip audio system volume control circuitry and methods
GB2353193A (en) * 1999-06-22 2001-02-14 Yamaha Corp Sound processing
GB2353193B (en) * 1999-06-22 2004-08-25 Yamaha Corp Sound processing method and apparatus
US7162045B1 (en) 1999-06-22 2007-01-09 Yamaha Corporation Sound processing method and apparatus
WO2001026422A2 (fr) * 1999-10-04 2001-04-12 Srs Labs, Inc. Dispositif de correction acoustique
WO2001026422A3 (fr) * 1999-10-04 2001-11-29 Srs Labs Inc Dispositif de correction acoustique
US7277767B2 (en) 1999-12-10 2007-10-02 Srs Labs, Inc. System and method for enhanced streaming audio
US7467021B2 (en) 1999-12-10 2008-12-16 Srs Labs, Inc. System and method for enhanced streaming audio
US8046093B2 (en) 1999-12-10 2011-10-25 Srs Labs, Inc. System and method for enhanced streaming audio
US9232312B2 (en) 2006-12-21 2016-01-05 Dts Llc Multi-channel audio enhancement system
US9154897B2 (en) 2011-01-04 2015-10-06 Dts Llc Immersive audio rendering system
US9088858B2 (en) 2011-01-04 2015-07-21 Dts Llc Immersive audio rendering system
US10034113B2 (en) 2011-01-04 2018-07-24 Dts Llc Immersive audio rendering system
CN103152671A (zh) * 2013-03-15 2013-06-12 珠海市杰理科技有限公司 音频输入输出电路
CN103152671B (zh) * 2013-03-15 2016-09-28 珠海市杰理科技有限公司 音频输入输出电路
RU2656986C1 (ru) * 2014-06-26 2018-06-07 Самсунг Электроникс Ко., Лтд. Способ и устройство для рендеринга акустического сигнала и машиночитаемый носитель записи
US10484810B2 (en) 2014-06-26 2019-11-19 Samsung Electronics Co., Ltd. Method and device for rendering acoustic signal, and computer-readable recording medium
US10057702B2 (en) 2015-04-24 2018-08-21 Huawei Technologies Co., Ltd. Audio signal processing apparatus and method for modifying a stereo image of a stereo signal

Also Published As

Publication number Publication date
IL104665A (en) 1998-10-30
AU6081194A (en) 1994-08-15
CA2153062A1 (fr) 1994-07-21
IL122164A0 (en) 1998-04-05
DE69325922D1 (de) 1999-09-09
CN1091889A (zh) 1994-09-07
JPH08509104A (ja) 1996-09-24
EP0677235A1 (fr) 1995-10-18
AU3427393A (en) 1994-08-15
IL104665A0 (en) 1993-06-10
SG70557A1 (en) 2000-02-22
AU7731098A (en) 1998-10-01
EP0677235B1 (fr) 1999-08-04
ATE183050T1 (de) 1999-08-15

Similar Documents

Publication Publication Date Title
EP0677235B1 (fr) Appareil de manipulation d' une image sonore pour ameliorer cette image sonore
US5896456A (en) Automatic stereophonic manipulation system and apparatus for image enhancement
US5555306A (en) Audio signal processor providing simulated source distance control
US4841572A (en) Stereo synthesizer
JP4657452B2 (ja) 擬似立体音響出力をモノラル入力から合成する装置および方法
US4356349A (en) Acoustic image enhancing method and apparatus
Theile Multichannel natural music recording based on psychoacoustic principles
US5173944A (en) Head related transfer function pseudo-stereophony
EP0965247B1 (fr) Systeme d'amplification acoustique a canaux multiples pouvant etre utilise pour l'enregistrement et la lecture et procedes de mise en oeuvre dudit systeme
US5043970A (en) Sound system with source material and surround timbre response correction, specified front and surround loudspeaker directionality, and multi-loudspeaker surround
US20030007648A1 (en) Virtual audio system and techniques
US4706287A (en) Stereo generator
CA2184160A1 (fr) Synthese binaurale, fonctions de transfert concernant une tete, et leurs utilisations
US5222059A (en) Surround-sound system with motion picture soundtrack timbre correction, surround sound channel timbre correction, defined loudspeaker directionality, and reduced comb-filter effects
JPS63183495A (ja) 音場制御装置
WO2017165968A1 (fr) Système et procédé pour créer un audio binaural tridimensionnel à partir de sources sonores stéréo, mono et multicanaux
US5056149A (en) Monaural to stereophonic sound translation process and apparatus
JP4196509B2 (ja) 音場創出装置
US5748745A (en) Analog vector processor and method for producing a binaural signal
WO1994016537A1 (fr) Appareil et procede de manipulation stereophonique destines a ameliorer l'image sonore
WO1998054927A1 (fr) Procede et systeme d'amelioration de l'image sonore creee par un signal sonore
EP0323830B1 (fr) Système sonore à effet spatial
WO2001060118A1 (fr) Dispositif de fantomisation de voie centrale audio
AU751831B2 (en) Method and system for recording and reproduction of binaural sound
Eargle Two-Channel Stereo

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU CA JP KR US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 1994907123

Country of ref document: EP

ENP Entry into the national phase

Ref country code: US

Ref document number: 1995 464669

Date of ref document: 19950629

Kind code of ref document: A

Format of ref document f/p: F

Ref country code: CA

Ref document number: 2153062

Kind code of ref document: A

Format of ref document f/p: F

WWE Wipo information: entry into national phase

Ref document number: 2153062

Country of ref document: CA

WWP Wipo information: published in national office

Ref document number: 1994907123

Country of ref document: EP

WWG Wipo information: grant in national office

Ref document number: 1994907123

Country of ref document: EP