US11170752B1 - Phased array speaker and microphone system for cockpit communication - Google Patents

Phased array speaker and microphone system for cockpit communication

Info

Publication number
US11170752B1
US11170752B1
Authority
US
United States
Prior art keywords
communication system
pilot
signal processing
processing circuit
array
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US15/929,383
Other versions
US20210343267A1 (en)
Inventor
Scott BOHANAN
Vincent DeChellis
Tongan Wang
Joseph Salamone
Jim Jordan
Andrew N. Durrence
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gulfstream Aerospace Corp
Original Assignee
Gulfstream Aerospace Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gulfstream Aerospace Corp
Priority to US15/929,383
Assigned to GULFSTREAM AEROSPACE CORPORATION (assignment of assignors' interest; see document for details). Assignors: DECHELLIS, VINCENT; SALAMONE, JOSEPH; WANG, TONGAN; BOHANAN, Scott; JORDAN, JIM; DURRENCE, ANDREW N.
Priority to EP21170824.3A (EP3905715A1)
Priority to CN202110470482.7A (CN113573210B)
Publication of US20210343267A1
Application granted
Publication of US11170752B1
Legal status: Active; expiration adjusted

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00: Details of transducers, loudspeakers or microphones
    • H04R1/20: Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/32: Arrangements for obtaining desired directional characteristic only
    • H04R1/40: Arrangements for obtaining desired directional characteristic only by combining a number of identical transducers
    • H04R1/403: Arrangements for obtaining desired directional characteristic only by combining a number of identical transducers (loud-speakers)
    • H04R1/406: Arrangements for obtaining desired directional characteristic only by combining a number of identical transducers (microphones)
    • H04R3/00: Circuits for transducers, loudspeakers or microphones
    • H04R3/005: Circuits for combining the signals of two or more microphones
    • H04R3/12: Circuits for distributing signals to two or more loudspeakers
    • H04R2201/00: Details of transducers, loudspeakers or microphones covered by H04R1/00 but not provided for in any of its subgroups
    • H04R2201/40: Details of arrangements for obtaining desired directional characteristic by combining a number of identical transducers covered by H04R1/40 but not provided for in any of its subgroups
    • H04R2201/401: 2D or 3D arrays of transducers
    • H04R2201/403: Linear arrays of transducers
    • H04R2201/405: Non-uniform arrays of transducers or a plurality of uniform arrays with different transducer spacing
    • H04R2203/00: Details of circuits for transducers, loudspeakers or microphones covered by H04R3/00 but not provided for in any of its subgroups
    • H04R2203/12: Beamforming aspects for stereophonic sound reproduction with loudspeaker arrays
    • H04R2430/00: Signal processing covered by H04R, not provided for in its groups
    • H04R2430/03: Synergistic effects of band splitting and sub-band processing
    • H04R2430/20: Processing of the output signals of the acoustic transducers of an array for obtaining a desired directivity characteristic
    • H04R2499/00: Aspects covered by H04R or H04S not otherwise provided for in their subgroups
    • H04R2499/10: General applications
    • H04R2499/13: Acoustic transducers and sound field adaptation in vehicles
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10K: SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K11/00: Methods or devices for transmitting, conducting or directing sound in general; methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/18: Methods or devices for transmitting, conducting or directing sound
    • G10K11/26: Sound-focusing or directing, e.g. scanning
    • G10K11/34: Sound-focusing or directing using electrical steering of transducer arrays, e.g. beam steering
    • G10K11/341: Circuits therefor
    • G10K11/346: Circuits therefor using phase variation
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G5/00: Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • G08G5/0004: Transmission of traffic-related information to or from an aircraft
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B64: AIRCRAFT; AVIATION; COSMONAUTICS
    • B64D: EQUIPMENT FOR FITTING IN OR TO AIRCRAFT; FLIGHT SUITS; PARACHUTES; ARRANGEMENTS OR MOUNTING OF POWER PLANTS OR PROPULSION TRANSMISSIONS IN AIRCRAFT
    • B64D11/00: Passenger or crew accommodation; flight-deck installations not otherwise provided for
    • B64D11/0015: Arrangements for entertainment or communications, e.g. radio, television
    • B64D43/00: Arrangements or adaptations of instruments

Definitions

  • The disclosure relates generally to two-way communication systems. More particularly, the disclosure relates to a speaker and microphone configuration that allows pilots to communicate with air traffic control towers and other parties.
  • The cockpit of an aircraft can be quite a noisy environment. Throughout the flight, the pilot and copilot, seated in this noisy environment, need to communicate with each other and to exchange important information by radio with air traffic control (ATC), clearly and accurately so that all parties understand.
  • The headset has the advantage of delivering the air traffic control instructions directly to the pilot's (and copilot's) ears and transmitting the pilot's or copilot's communications back to ATC through a close-talk microphone positioned near the pilot's or copilot's mouth.
  • Such headsets comprise noise isolating or active noise cancelling headphones to which a boom microphone is attached.
  • These headsets typically employ a close-talk microphone having a pickup pattern designed to pick up the pilot's voice while rejecting sounds originating from other directions.
  • However, the pilot and copilot may need to take the headsets off in order to hold conversations with others within the cockpit, such as flight attendants or other personnel, who are not also wearing headphones.
  • The headset serves a highly important communication function, but it is not the only system within the aircraft that produces audio sound.
  • Aircraft are also equipped with an alert-signal system, which broadcasts alerts through the flight deck speaker system in all directions.
  • The alert system is necessarily designed to be quite loud, so that it can be heard by pilot and copilot over the ambient noise within the cockpit.
  • When all doors between cockpit and cabin are required to remain open, these alert signals transmit through the cabin easily, causing unnecessary disturbances to the passengers.
  • The disclosed pilot communication system takes a different approach that reduces pilot and copilot reliance on headsets to communicate with each other and with air traffic control (ATC).
  • The system provides an enhanced signal-to-noise ratio (SNR), so the pilot and copilot can readily hear conversations, ATC communications and alert sounds, without disturbing passengers in the cabin, even when the cockpit-cabin doors are open.
  • The system uses a phased array technique to direct the speaker audio to the pilot's and copilot's ears, and uses a similar phased array technique to focus the microphone pickup pattern directly at the pilot's and copilot's lips.
  • As a result, the received speaker audio sounds much louder to the pilots than elsewhere in the cockpit or cabin, and their voices are picked up with much less inclusion of ambient noise.
  • The disclosed pilot communication system is adapted for use in an aircraft cockpit that defines an acoustic space with at least one pilot seating location disposed therein, and that includes an avionics communication system.
  • The pilot communication system includes a transducer array comprising a plurality of individual acoustic transducers, disposed in a spaced relation to one another and combined for deployment within the cockpit.
  • Each of the plurality of acoustic transducers converts between sound information expressed as an electrical signal and sound information expressed as an acoustic wave.
  • A signal processing circuit has an input port that receives sound information and an output port that supplies sound information after processing by the signal processing circuit.
  • The input port is configured for coupling to one of: (a) the microphone array and (b) the avionics communication system.
  • The output port is configured to couple to the other of: (a) the speaker array and (b) the avionics communication system.
  • The signal processing circuit is coupled to the transducer array to electrically interface with each of the plurality of transducers individually.
  • The signal processing circuit selectively inserts a time delay associated with at least some of the plurality of individual transducers to form a coverage beam within the acoustic space of the cockpit.
  • The signal processing circuit selectively controls the time delays associated with at least some of the plurality of individual transducers to steer the coverage beam in the direction of the pilot seating location.
  • The sound information can be subdivided into different frequency ranges or bands, which are individually processed by the signal processing circuit. Such frequency subdivision provides more effective steering of the coverage beam.
  • FIG. 1 is a perspective view of the cockpit and flight deck of an exemplary aircraft, showing one possible placement of the pilot communication system (phased array communication system);
  • FIGS. 2A-2F are plan views of some different speaker-microphone placement embodiments usable with the pilot communication system;
  • FIG. 3 is a diagram illustrating how an inserted time delay is used to direct speaker and microphone beam patterns in the pilot communication system;
  • FIGS. 4A and 4B are graphs showing, respectively, the azimuth and elevation views of a single acoustic transducer (not a phased array);
  • FIGS. 5A and 5B are graphs showing, respectively, the azimuth and elevation views of plural acoustic transducers in a phased array configuration;
  • FIG. 6 is a circuit diagram of a single frequency band signal processing circuit;
  • FIG. 7 is a circuit diagram of a multiple frequency band signal processing circuit; and
  • FIG. 8 is a circuit diagram illustrating the signal processing circuit interface to the microphone array.
  • The pilot communication system (phased array communication system) 12 may be located at a suitable position forward of the pilot and copilot seating positions, preferably so that the speaker array 14 and microphone array 16 are within line of sight of the pilot's and copilot's heads.
  • The pilot head positions are illustrated at 20.
  • The speaker array 14 defines an acoustic beam 22, in this example directed at one pilot, and the microphone array 16 defines an acoustic beam microphone pickup pattern 24, in this example directed at the other pilot.
  • The acoustic beams 22 and 24 are steerable by the signal processing circuit 18, and thus the speaker and microphone beam patterns can be aimed at either (or both) pilot and copilot.
  • The pilot communication system 12 generally comprises a plurality of speakers and a plurality of microphones (collectively referred to herein as acoustic “transducers”), each arranged in a predefined spaced-apart layout.
  • These speakers and microphones are coupled to a signal processing circuit 18 that supplies phase-controlled audio signals to the speakers and receives phase-controlled audio signals from the microphones.
  • The signal processing circuit 18 may be implemented using a field programmable gate array (FPGA), a microprocessor, a microcontroller, a digital signal processor (DSP), or a combination thereof.
  • The plurality of speakers and plurality of microphones each operate as a phased array system that produces a beam pattern dictated by the locations of the transducers, and further dictated by the signal time delays to each transducer as governed by the signal processing circuit 18.
  • The pilot communication system may be coupled to the avionics communication system 11, which provides communication with air traffic control (ATC) and also provides signal routing to allow the pilots to communicate with one another and with flight attendants, and to broadcast messages to the passengers.
  • A typical embodiment of the pilot communication system will include both a speaker array 14, comprising plural speakers, and a microphone array 16, comprising plural microphones.
  • The speakers and microphones are physically arranged in a predetermined configuration pattern—a property that bears upon the amount of delay introduced by the signal processing circuit 18.
  • Several different spaced-apart transducer configuration patterns are illustrated in FIGS. 2A through 2F. Other configuration patterns are also possible. These configuration patterns may be used singly or in combination.
  • The transducers 26 (speakers or microphones, as noted above) are each arranged according to a predefined configuration pattern, such as but not limited to the examples shown in FIGS. 2A-2F. Note that it is not necessary for both speakers and microphones to be arranged according to the same configuration pattern. Thus, for example, the speakers could be arranged in a spiral configuration pattern while the microphones are arranged in a square configuration pattern.
  • FIG. 2A illustrates a linear array, where the transducers are centered along a common straight line.
  • The transducers are spaced apart according to a predefined spacing pattern (e.g., equally spaced, logarithmically spaced, or with other spacing) and each is preferably fed by a dedicated audio signal transmission line coupled to the signal processing circuit.
  • The signal processing circuit is thus able to send or receive a precisely timed audio signal to or from each transducer.
  • The effect of such precisely timed audio signals is to produce an array beam pattern.
  • The beam pattern can be controlled by the signal processing circuit to provide one-dimensional beam steering.
  • The linear array is well suited for deployment in flight deck locations where the linear array lies generally in a plane that includes the pilot's and copilot's heads.
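The one-dimensional steering of a linear array can be sketched numerically. The far-field rule is that each successive element is delayed by spacing × sin(θ)/c; the element count, spacing, and 343 m/s speed of sound below are illustrative values, not taken from the patent.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, dry air near room temperature (assumed)

def linear_array_delays(n_elements, spacing_m, steer_angle_deg):
    """Far-field time delays (seconds) that steer a linear array.

    Element i sits at x = i * spacing_m along the array axis.
    Steering by angle theta from broadside requires each successive
    element to be delayed by spacing * sin(theta) / c.
    """
    theta = math.radians(steer_angle_deg)
    delays = [i * spacing_m * math.sin(theta) / SPEED_OF_SOUND
              for i in range(n_elements)]
    # Shift so the smallest delay is zero (delays must be non-negative).
    base = min(delays)
    return [d - base for d in delays]
```

For an 8-element array at 0.05 m pitch steered 30 degrees off broadside, the per-element delay step works out to spacing × sin(30°)/c ≈ 73 microseconds.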
  • FIG. 2B illustrates a curved or curvilinear array, where the transducers are centered along a common curved line.
  • The transducers may be equally spaced apart, or spaced apart with bilateral symmetry.
  • The curvilinear array is well suited for deployment in portions of the cockpit having naturally curved surfaces, as dictated by the shape of the fuselage.
  • The transducers of the curvilinear array may each be fed by a dedicated audio signal transmission line coupled to the signal processing circuit.
  • FIG. 2C illustrates a circular array.
  • The transducers are preferably equally spaced, each fed by a dedicated audio signal transmission line coupled to the signal processing circuit.
  • The circular array is well suited for placement on the ceiling of the cockpit, such as above and forward of the pilots' heads.
  • The circular array can provide two-dimensional beam steering.
  • FIG. 2D illustrates a concentric circular array.
  • The transducers are preferably equally spaced around each concentric circle, each fed by a dedicated audio signal transmission line coupled to the signal processing circuit.
  • The concentric circular array is useful where the audio signals handled by the signal processing circuit are subdivided into different frequency bands. In this regard, lower frequencies have longer wavelengths; conversely, higher frequencies have shorter wavelengths.
  • The concentric circles are sized so that the transducers are spaced farther apart on the larger circles (better adapted to collectively reproduce or capture lower frequencies), and closer together on the smaller circles (better adapted to collectively reproduce or capture higher frequencies).
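The ring-sizing idea can be illustrated with a half-wavelength spacing rule, a common phased-array guideline for avoiding grating lobes; the patent does not prescribe specific radii, so the function below is only a sketch.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s (assumed)

def ring_for_band(center_freq_hz, n_elements):
    """Radius (m) of a circular ring whose elements sit half a
    wavelength apart at the band's centre frequency.

    Illustrative sizing rule only: lower bands get longer wavelengths,
    hence wider element pitch and a larger ring, matching the
    concentric-circle layout described above.
    """
    wavelength = SPEED_OF_SOUND / center_freq_hz
    spacing = wavelength / 2.0            # half-wavelength element pitch
    circumference = n_elements * spacing  # arc length between elements
    return circumference / (2.0 * math.pi)
```

For example, an 8-element ring for a 250 Hz band comes out several times larger than one for a 2 kHz band, which is exactly the nesting the concentric layout exploits.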
  • FIG. 2E illustrates a spiral array where the transducers 26 are spaced according to a logarithmic pattern.
  • The transducers are each fed by a dedicated audio signal transmission line coupled to the signal processing circuit.
  • The spiral array provides different transducer spacings. This is a natural consequence of following a spiral pattern; in addition, the individual transducers are spaced apart based on a logarithmic relationship.
  • The spiral array is thus well adapted to provide a range of different transducer spacings to correspond with the different frequency bands mediated by the signal processing circuit.
  • The spiral array can provide a more broadband frequency response, because the signal processing circuit has more transducer spacing options to work with when sending or receiving audio to selected transducers.
  • The spiral array is also able to provide two-dimensional beam steering.
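A logarithmic spiral layout of the kind described can be sketched as follows; the spiral parameters `a`, `b` and the angular step are hypothetical choices for illustration, not values from the patent.

```python
import math

def log_spiral_positions(n_elements, a=0.05, b=0.25, step_deg=40.0):
    """(x, y) positions of transducers placed along a logarithmic
    spiral r = a * exp(b * phi).

    Successive elements sit at equal angular steps, so the radius
    (and hence the spacing between neighbours) grows geometrically:
    this is the property that yields a range of element spacings
    suited to different frequency bands.
    """
    pts = []
    for i in range(n_elements):
        phi = math.radians(i * step_deg)
        r = a * math.exp(b * phi)
        pts.append((r * math.cos(phi), r * math.sin(phi)))
    return pts
```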
  • FIG. 2F illustrates a square or rectilinear array.
  • The transducers are each fed by a dedicated audio signal transmission line coupled to the signal processing circuit.
  • The square array is similar to the circular array in providing two-dimensional beam steering. The square array is suited to applications where packaging constraints dictate.
  • Although the transducer spacings shown in FIGS. 2A-2D are uniform, in practice the spacing between the transducers could be constant, linear, logarithmic, or based on other factors.
  • The transducers may be implemented as addressable active devices, each having a unique address.
  • In that case, all transducers may be coupled to the signal processing circuit via a control bus that supplies a data signal carrying the audio information and that carries an address signal used to indicate which transducer shall act upon the supplied data signal.
  • FIG. 3 illustrates two transducers (e.g., speakers) arranged in a linear array.
  • When both speakers are fed coherently with the same sinusoidal audio signal, the sound waves emanating from each speaker are in phase and the sound will appear to come straight on from the plane of the speakers, i.e., from a direction perpendicular to the horizontal axis (as seen in FIG. 3).
  • To steer the beam, the signal to the speaker on the left (in FIG. 3) is delayed by a time dt, computed to account for the fact that the signal from that speaker must traverse the additional distance d in order for its wavefront to be in phase with the wavefront from the speaker on the right (in FIG. 3).
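The FIG. 3 geometry reduces to a one-line calculation: the extra path is d = spacing × sin(θ), and the required delay is dt = d/c. A minimal sketch, with the speed of sound as an assumed constant:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s (assumed)

def fig3_delay(spacing_m, steer_angle_deg):
    """Delay (s) applied to one speaker of a two-speaker pair so its
    wavefront arrives in phase with the other speaker's wavefront
    along the steered direction.

    Extra acoustic path: d = spacing * sin(theta); delay dt = d / c.
    """
    d = spacing_m * math.sin(math.radians(steer_angle_deg))
    return d / SPEED_OF_SOUND
```

At zero degrees the delay is zero (broadside radiation, as described above); at 90 degrees it equals the full spacing divided by the speed of sound (endfire).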
  • When implementing a phased array communication system for the pilot communication system, it can be beneficial to implement the spaced-apart speakers and the spaced-apart microphones in a common package, so that the speakers and microphones are closely spaced to one another in relation to the acoustic wavelength. This can help minimize acoustic echo, a form of circular feedback where sounds from the speakers are picked up by the microphones and rebroadcast by the speakers. Active electronic acoustic echo cancellation processing can also be included in the signal processing circuit to reduce acoustic echo. Acoustic echo is further reduced because the speakers and microphones operate with steerable beam patterns (beam forming), so that sounds broadcast by the speaker array and sounds picked up by the microphone array can each be focused on different regions of space, thus eliminating the conditions for a feedback loop.
  • The phased array technique produces a beam pattern that can concentrate the acoustic signal into a much smaller region of space than is normally produced by a single transducer. Moreover, a phased array of transducers can focus the acoustic signal on a particular region of space by electronically adjusting the phase shift applied to each transducer.
  • FIGS. 4A and 4B show an exemplary sound pressure pattern for an individual transducer operated at a frequency in the nominal range of human voiced speech, e.g., 500 Hz.
  • The pattern is essentially the same in azimuth and elevation and has a wide dispersal pattern. Because this pattern is generated by a single transducer, there is no phased-array beam pattern. The pattern is largely hemispherical or cardioid in shape, with little gain in any direction.
  • A single acoustic transducer does exhibit a non-uniform directivity pattern; however, the directivity is not as focused and adjustable as in a phased array system.
  • FIGS. 5A and 5B show the comparable sound pressure pattern for an array of transducers operated at the same frequency as in FIGS. 4A and 4B. Note that the pattern in both azimuth and elevation is significantly focused. The same amount of energy that produced the pattern of FIGS. 4A and 4B is confined to a much narrower beam pattern in FIGS. 5A and 5B, producing a much higher sound pressure level along the beam's axis (higher gain) and lower sound energy in other directions. By using a narrower beam pattern that is steerable, the disclosed communication system is able to minimize the sound energy transmitted into the passenger cabin.
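The narrowing shown in FIGS. 5A and 5B can be reproduced with the standard array-factor formula for a uniform linear array of omnidirectional elements; the element count, spacing, and frequency below are illustrative, not values read from the figures.

```python
import math

def array_factor_db(n_elements, spacing_m, freq_hz, angle_deg,
                    speed_of_sound=343.0):
    """Normalized far-field array factor magnitude (dB) of an
    unsteered uniform linear array: the coherent sum of N phasors
    exp(j * i * psi) with psi = k * d * sin(theta).
    """
    k = 2.0 * math.pi * freq_hz / speed_of_sound   # acoustic wavenumber
    psi = k * spacing_m * math.sin(math.radians(angle_deg))
    re = sum(math.cos(i * psi) for i in range(n_elements))
    im = sum(math.sin(i * psi) for i in range(n_elements))
    mag = math.hypot(re, im) / n_elements          # normalize peak to 0 dB
    return 20.0 * math.log10(max(mag, 1e-12))
```

A single element is 0 dB at every angle (no array gain, matching FIGS. 4A/4B), while an 8-element array is 0 dB on axis and drops sharply off axis (matching FIGS. 5A/5B).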
  • FIG. 6 illustrates an exemplary signal processing circuit 18 that may be used to process the audio content broadcast by the speaker transducers.
  • This embodiment treats the audio content as a single band of frequencies. It is thus well suited for speaker arrays arranged as a linear array (FIG. 2A) and may also be effectively used with the curvilinear array (FIG. 2B) and the circular array (FIG. 2C).
  • The audio source 30 can supply audio from a number of sources, including the avionics communication system 11 (FIG. 1), which typically includes a communication radio (e.g., for air traffic control), a sound mixing and routing system used for inter-cabin and cockpit communication (e.g., between pilot and copilot, or with flight attendants and passengers), and audio from the alerting system.
  • The audio source 30 carries analog audio signals, which are then digitized by the analog-to-digital converter (ADC) circuit 32.
  • The ADC supplies the digitized audio signals to a field programmable gate array (FPGA) 34, which is configured by programming to define n signal paths, to accommodate n copies of the digital audio stream, where n is the number of speakers implemented in the array.
  • To each signal path, the FPGA applies a calculated delay time (which could be a null or zero delay time), collectively designed to steer the collective beam emanating from the speaker array in a particular direction. Details of these delay calculations are discussed below.
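The per-path delay stage can be sketched as follows. This toy version rounds each delay to whole samples; a real FPGA implementation would likely use fractional-delay (interpolating) filters for finer steering resolution, an assumption on our part since the patent does not detail the delay mechanism.

```python
def delayed_streams(samples, delays_s, sample_rate_hz):
    """Produce one delayed copy of the digital audio stream per
    speaker channel, as the FPGA's n signal paths do.

    Each delay is rounded to a whole number of samples and realized
    by zero-padding the front of the stream (a pure delay line).
    """
    out = []
    for d in delays_s:
        n = round(d * sample_rate_hz)
        out.append([0] * n + list(samples))  # zero-pad = pure delay
    return out
```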
  • The functions performed by the disclosed FPGA can also be performed using one or more microprocessors, microcontrollers, digital signal processors (DSP), application specific integrated circuits, and/or combinations thereof.
  • The FPGA outputs the digital audio streams to a multi-channel digital-to-analog converter (DAC) 36, which converts each digital audio stream into an analog signal.
  • The multi-channel DAC provides 16-bit resolution to match the resolution of the ADC. Other resolution bit depths are also possible.
  • The multi-channel DAC provides a number of independent channels sufficient to individually process each of the digital audio streams.
  • The audio streams are then processed by a bank of low pass filters 38.
  • The low pass filter bank 38 includes one low pass filter dedicated to each of the analog audio streams (i.e., one for each speaker in the array).
  • Each filter provides a 3 dB roll-off at 100 kHz.
  • The filter allows the audio signals within the human hearing range to pass without attenuation, but blocks frequencies well above that range, to prevent digital clock noise and other spurious signals from being delivered to the amplifier stage 40. Other filter roll-off frequencies and filter slopes are also possible.
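For a first-order RC low-pass, the 3 dB corner is f_c = 1/(2πRC). The component values below are hypothetical, chosen only to land near the 100 kHz figure mentioned above; the patent does not specify the filter topology or components.

```python
import math

def rc_cutoff_hz(r_ohms, c_farads):
    """-3 dB corner frequency of a first-order RC low-pass filter:
    f_c = 1 / (2 * pi * R * C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)
```

With R = 1.6 kΩ and C = 1 nF, f_c comes out just under 100 kHz, passing the audio band untouched while attenuating clock-rate spurs.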
  • The amplifier stage 40 provides one channel of amplification for each of the audio signals. Each amplifier provides low-distortion signal gain and impedance matching to the speaker 14, so that suitable listening levels within the cockpit can be achieved.
  • Portions or all of the multi-channel components downstream of the FPGA 34 can be bundled or packaged with each speaker, thus allowing digital audio to be distributed to the speaker array.
  • In that case, a sync signal is used to load all of the DACs at exactly the same time.
  • Such a sync signal is in addition to the digital audio signals provided by existing digital audio standards.
  • The time delays calculated by the signal processing circuit can be based solely on the distance from each speaker (or microphone) to the defined steer point and the speed of sound.
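That distance-based rule can be sketched directly: delay each element so that all wavefronts arrive at the steer point simultaneously. A minimal sketch, with positions in metres and the speed of sound as an assumed constant:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s (assumed)

def steer_point_delays(element_positions, steer_point):
    """Per-element delays (s) so that wavefronts from all elements
    arrive at the steer point at the same instant.

    Each element is delayed by the travel-time difference between it
    and the farthest element, so the farthest element gets zero delay
    and all delays are non-negative.
    """
    times = [math.dist(p, steer_point) / SPEED_OF_SOUND
             for p in element_positions]
    t_max = max(times)
    return [t_max - t for t in times]
```

Unlike the far-field angle rule, this works at near-field distances too: a steer point centred between two elements yields equal (zero) delays, while an off-centre point delays the nearer element.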
  • Frequency subdivision also helps with off-axis behavior. Uniformly spaced speakers/microphones transmitting the same signal will produce different constructive/destructive interference patterns as a function of frequency. The higher the frequency, the narrower the primary lobe becomes, at the expense of significantly more adverse secondary lobes. This means that small changes in position generate large amplitude variations, which flight crews can perceive as a major annoyance and distraction.
  • The frequency subdivision technique uses the differences in speaker/microphone position, coupled with the relative filtering, to maintain the same beam width at the steering point across the frequency range. This will, to some extent, reduce the SNR/gain of the overall array in order to preserve off-axis behavior without increasing sound bleed back into the cabin. Thus, if the pilots move their heads away from the steer point, they will not perceive as drastic an amplitude variance.
  • FIG. 7 illustrates another exemplary signal processing circuit 18 embodiment that may be used to process the audio content that is subdivided into frequency bands, such as high, mid and low frequency bands.
  • This multi-band embodiment is well suited for speaker arrays that have been optimized in clusters, to favor different frequency ranges, such as the concentric array ( FIG. 2D ) and spiral array ( FIG. 2E ).
  • the audio source 30 is split on the basis of frequency by crossover network 50 into plural frequency bands. For illustration, three frequency bands or channels have been illustrated (high channel 52 , mid channel 54 , low channel 56 ). A greater or fewer number of bands may be implemented as required by the application.
  • Subdividing the audio spectrum into different frequency ranges or bands provides better control over how sounds at different frequencies may be delivered at a particular position in space with pinpoint accuracy.
  • the reason for this is that the time delays needed to steer an acoustic beam to a particular point in space are frequency dependent (wavelength dependent).
  • wavelength of the acoustic wave plays a key role in the time delay calculation, and wavelength is inversely related to frequency as a matter of fundamental physics.
  • the FPGA 34 calculates appropriate time delays for each band or range of frequencies.
  • the multi-band embodiment is better able to deliver the full spectrum of broadcast sound directly to the pilot's ears. This accuracy also helps improve intelligibility because all frequency content required for good speech intelligibility is delivered without phase error.
  • the vowel sounds in human speech tend to be lower in the speech frequency range, while the consonant sounds tend to be higher in the speech frequency range. Speech signals are more intelligible if these respective vowel and consonant sounds arrive at the human ear properly synchronized. Highly accurate speech reproduction can be quite important in the aircraft cockpit to overcome the masking effect caused by the high ambient noise levels during flight.
  • the high, mid and low channels 52 - 56 are converted simultaneously into digital audio streams by the simultaneous multi-channel ADC 58 . It is important that these channels are simultaneously digitized so that the digitization process does not introduce any time discrepancies among the multiple channels. The reason for this was discussed above—to ensure phase coherence among the frequency bands so that beam focus is accurate and speech intelligibility is not degraded.
  • the multiple bands (in this case high, mid, low) of digital audio are then processed by FPGA 34 , essentially as discussed above, to produce individual time delayed audio streams for each of the speakers 14 in the array.
  • the post processing stages following the FPGA 34 include the DAC 36 , the low pass filter bank 38 , and the amplifier stage 40 , which function essentially as discussed above.
  • with the concentric circular array ( FIG. 2D ) and the spiral array ( FIG. 2E ), additional control over the sound delivery becomes possible.
  • the concentric circular and spiral arrays are capable of better performance when the signals feeding them are allocated with frequency in mind.
  • the speakers in the outermost orbit of the concentric circular array may be fed by lower frequencies while the speakers closer to the center of the array are fed by higher frequencies.
  • the array can have greater control over not only sound placement (where in space the signal is heard), but also sound fidelity (the richness of tonality heard at the placement location).
  • such frequency equalization (EQ) can also be applied to the linear array: the outer speakers can be low-pass filtered, while the center ones can be high-pass filtered.
  • Individual speakers work by moving a mass of air through the pumping action of an electromagnetically driven piston or other movement producing device that is coupled to a speaker cone having a predefined surface area. Moving low frequencies (long wavelengths) requires movement of more air mass than is required for higher frequencies. This is why conventional bass frequency speakers usually have larger speaker cones than high frequency speakers.
  • a practical embodiment of a pilot communication system for an aircraft cockpit usually dictates use of smaller-sized speaker cones, because space is limited. While it is possible to implement a system using different sizes of speaker cones, such may not be practical or necessary to achieve good fidelity.
  • One big advantage the system gains from spatial separation at low frequencies is tied to the wavelength. Longer wavelengths need more separation between speakers to achieve directionality, which is important. However, as frequency increases, the larger spatial separation between speakers causes increased peak/lull sideband behavior. This is where the crossover filtering comes into play and is why the inner speakers handle the higher frequencies.
  • the relative distance between a grouping of speakers defines the wavelength (and conversely frequencies) at which they achieve acceptable directionality and sound field behavior.
  • the processing of signals from the microphone array 16 is handled in a similar fashion.
  • the individual microphones 62 are each fed to dedicated channels of microphone amplifier stage 64 and then fed to the simultaneous multi-channel ADC 66 which converts the microphone signals to digital signals. These digital signals are then input to the FPGA 34 .
  • the FPGA controls the sample time of each microphone input relative to other microphone inputs, thereby electronically controlling the directivity of the overall sound received by the array of microphones.
  • the received audio signal from the groups of microphones can then be digitally filtered, processed, and combined to create a customized highly directional received signal for each pilot while minimizing the noise from other sources or directions.
  • the FPGA 34 calculates the applicable time delays using the same approach for the linear, circular and spiral arrays, except that linear array delays are only computed in two (x,y) dimensions.
  • the FPGA is supplied with the (x,y,z) coordinates of each speaker and microphone in space. Then, within the same (x,y,z) coordinate reference frame, the FPGA is supplied with, or calculates, the steer point for both the microphone and speaker arrays. In a typical embodiment, the steer point for microphones and speakers would likely be in the direction of the pilot, but the two could differ depending on the particular application.
  • the FPGA determines the distance from each speaker/microphone to the steer points and divides that distance by the speed of sound. This gives the time the sound waves will take to traverse the distance:
  • dt = √(x_dist² + y_dist² + z_dist²) / (speed of sound)
  • the difference between the various travel times then amounts to the time delays (or time advances) applied to each signal by the FPGA.
  • fixed location(s) may be used as the steer point (e.g., the nominal fixed locations of the pilots' heads).
  • the steer point is dynamically computed using image tracking.
  • image tracking may be performed by using optical or LiDAR sensing and recognition of the pilots' faces, heads, mouths, ears or the like. Other sensing technologies may also be used.
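The steer-point delay calculation described in the preceding bullets can be sketched as follows. This is a minimal illustration: the coordinate values and the 343 m/s speed of sound are assumptions for the example, not values taken from the disclosure.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s; illustrative value near room temperature

def steer_delays(transducer_xyz, steer_point):
    """Per-transducer delays (seconds) that phase-align wavefronts at steer_point.

    Each travel time is distance / c. Subtracting each travel time from the
    maximum yields the delay to insert on that channel, so the farthest
    transducer fires first (zero delay) and nearer ones are delayed.
    """
    sx, sy, sz = steer_point
    times = []
    for (x, y, z) in transducer_xyz:
        d = math.sqrt((x - sx) ** 2 + (y - sy) ** 2 + (z - sz) ** 2)
        times.append(d / SPEED_OF_SOUND)
    t_max = max(times)
    return [t_max - t for t in times]

# Two speakers 0.2 m apart, steer point 1 m in front of the right speaker.
delays = steer_delays([(0.0, 0.0, 0.0), (0.2, 0.0, 0.0)], (0.2, 1.0, 0.0))
```

The farther (left) speaker gets zero delay; the nearer (right) speaker is delayed by the difference in travel times, so both wavefronts arrive at the steer point in phase.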

Abstract

The pilot communication system employs a transducer array of individual spaced apart speakers and/or microphones for deployment within the cockpit. A signal processing circuit interfaces with the transducer array and also with the aircraft avionics communication system, and selectively applies different time delays to the individual speakers and/or microphones to create in the array an acoustic beam having steerable coverage within the acoustic space of the cockpit. By adjusting the time delays the signal processing circuit directs and focuses sound from the speakers to the pilot and similarly focuses the microphones on the pilot's mouth. In this way the pilots can communicate with each other and with air traffic control without the need to wear headsets. The system also significantly reduces flight deck warnings from being introduced into the cabin environment.

Description

TECHNICAL FIELD
The disclosure relates generally to two-way communication systems. More particularly, the disclosure relates to a speaker and microphone configuration to allow pilots to communicate with air traffic control towers and other parties.
BACKGROUND
This section provides background information related to the present disclosure which is not necessarily prior art.
The cockpit of an aircraft can be quite a noisy environment. Numerous times throughout the flight, the pilot and copilot, seated in this noisy environment, need to communicate with each other and to receive and transmit important information by radio with air traffic control (ATC), clearly and accurately so that all parties understand. To date, this has been done through a headset. The headset has the advantage of delivering the air traffic control instructions directly to the pilot's (and copilot's) ears and transmitting the pilot's or copilot's communications back to ATC through a close-talk microphone positioned near the pilot's or copilot's mouth.
Thus traditionally, aircraft pilots and copilots have worn headsets during flight, comprising noise isolating or active noise cancelling headphones to which a boom microphone is attached. Such headsets typically employ a close-talk microphone having a pickup pattern designed to pick up the pilot's voice while rejecting sounds originating from other directions. These have worked well, but there are problems.
One problem with conventional headsets is that they can become uncomfortable to wear, particularly for long periods of time. The ear cups on many headsets apply pressure to the sides of the face and sometimes the ears, which can interfere with blood flow if worn too tightly. The air inside the ear cups also becomes warm and stale during wear, so pilots sometimes need to remove the headphones to give their ears some fresh air.
Also, because they block out much of the ambient cockpit sound, the pilot and copilot may need to take the headsets off in order to hold conversations with others within the cockpit, such as flight attendants or other personnel, who are not also wearing headphones.
In the conventional aircraft, the headset serves a highly important communication function, but it is not the only system within the aircraft that produces audio sound. Aircraft are also equipped with an alert-signal system, which broadcasts alerts through the flight deck speaker system in all directions. The alert system is necessarily designed to be quite loud, so that it can be heard by pilot and copilot over the ambient noise within the cockpit. However, for business jets, during takeoff and landing, all doors between cockpit and cabin are required to remain open. Thus, these alert signals transmit through the cabin easily, causing unnecessary disturbances to the passengers.
SUMMARY
The disclosed pilot communication system takes a different approach that reduces pilot and copilot reliance on headsets to communicate with each other and with air traffic control (ATC). Using a phased array speaker and microphone system, which can be frequency band segmented for greater clarity, pilots and copilots can easily communicate with each other in the noisy cockpit and can have clear and accurate communications with air traffic control, without the need to wear headsets. The system provides an enhanced signal-to-noise ratio (SNR), so the pilot and copilot can readily hear conversations, ATC communications and alert sounds, without disturbing passengers in the cabin, even when the cockpit-cabin doors are open.
Instead of filling the cockpit with loud communication system audio, sufficient to overcome the ambient noise, the system uses a phased array technique to direct the speaker audio to the pilot's and copilot's ears, and uses a similar phased array technique to focus the microphone pickup pattern directly at the pilot's and copilot's lips. Thus, from the pilot's and copilot's perspective, the received speaker audio sounds are much louder to them than elsewhere in the cockpit or cabin, and their voices are picked up with much less inclusion of ambient noise.
According to one aspect, the disclosed pilot communication system is adapted for use in an aircraft cockpit that defines an acoustic space with at least one pilot seating location disposed therein, and that includes an avionics communication system. The pilot communication system includes a transducer array comprising a plurality of individual acoustic transducers, disposed in a spaced relation to one another and combined for deployment within the cockpit. Each of the plurality of acoustic transducers converts between sound information expressed as an electrical signal and sound information expression as an acoustic wave.
A signal processing circuit has an input port that receives sound information and an output port that supplies sound information after being processed by the signal processing circuit. The input port is configured for coupling to one of: (a) the microphone array and (b) the avionics communication system. The output port is configured to couple to the other of: (a) the speaker array and (b) the avionics communication system.
The signal processing circuit is coupled to the transducer array to electrically interface with each of the plurality of transducers individually. The signal processing circuit selectively inserts a time delay associated with at least some of the plurality of individual transducers to form a coverage beam within the acoustic space of the cockpit. The signal processing circuit selectively controls the time delays associated with the at least some of the plurality of individual transducers to steer the coverage beam in the direction of the pilot seating location.
If desired the sound information can be subdivided into different frequency ranges or bands, which are individually processed by the signal processing circuit. Such frequency subdivision provides more effective steering of the coverage beam.
BRIEF DESCRIPTION OF THE DRAWINGS
The drawings described herein are for illustrative purposes only of selected embodiments and not all possible implementations. Thus, the particular choice of drawings is not intended to limit the scope of the present disclosure.
FIG. 1 is a perspective view of the cockpit and flight deck of an exemplary aircraft, showing one possible placement of the pilot communication system (phased array communication system);
FIGS. 2A-2F are plan views of some different speaker-microphone placement embodiments usable with the pilot communication system;
FIG. 3 is a diagram illustrating how an inserted time delay is used to direct speaker and microphone beam patterns in the pilot communication system;
FIGS. 4A and 4B are graphs showing, respectively, the azimuth and elevation views of a single acoustic transducer (not phased array);
FIGS. 5A and 5B are graphs showing, respectively, the azimuth and elevation views of plural acoustic transducers in a phased array configuration;
FIG. 6 is a circuit diagram of a single frequency band, signal processing circuit;
FIG. 7 is a circuit diagram of a multiple frequency band, signal processing circuit; and
FIG. 8 is a circuit diagram illustrating the signal processing circuit interface to the microphone array.
DETAILED DESCRIPTION
Referring to FIG. 1, an exemplary aircraft cockpit and flight deck 10 is illustrated. As illustrated, the pilot communication system (phased array communication system) 12 may be located at a suitable position forward of the pilot and copilot seating positions, preferably so that the speaker array 14 and microphone array 16 are within line of sight of the pilot's and copilot's heads. In FIG. 1 the pilot head positions are illustrated at 20. The speaker array 14 defines an acoustic beam 22, in this example directed at one pilot, and the microphone array 16 defines an acoustic beam microphone pickup pattern 24, in this example directed at the other pilot. The acoustic beams 22 and 24 are steerable by the signal processing circuit 18, and thus the speaker and microphone beam patterns can be aimed at either (or both) pilot and copilot. Also, while one acoustic beam pattern has been depicted for each of the speaker and microphone arrays, it is possible to control the arrays to define multiple beam patterns from each, by suitably driving these arrays with the signal processing circuit, such as by rapidly switching between different beam directions at a rate that is not discernible to the human ear.
The pilot communication system 12 generally comprises a plurality of speakers and a plurality of microphones (collectively referred to herein as acoustic “transducers”) each arranged in a predefined spaced apart layout. In the preferred embodiment these speakers and microphones are coupled to a signal processing circuit 18 that supplies phase-controlled audio signals to the speakers and receives phase-controlled audio signals from the microphones. The signal processing circuit 18 may be implemented using a field programmable gate array (FPGA), microprocessor, a microcontroller, a digital signal processor (DSP), or a combination thereof.
As will be discussed more fully below, the plurality of speakers and plurality of microphones each operate, as a phased array system that produces a beam pattern dictated by the locations of the transducers, and further dictated by the signal time delays to each transducer as governed by the signal processing circuit 18. The pilot communication system may be coupled to the avionics communication system 11, which provides communication with air traffic control (ATC) and also provides signal routing to allow the pilots to communicate with one another and with flight attendants, and to broadcast messages to the passengers.
A typical embodiment of the pilot communication system will include both a speaker array 14, comprising plural speakers, and a microphone array 16, comprising plural microphones. To achieve the desired steerable beam pattern results for both speakers and microphones, the speakers and microphones (collectively “transducers”) are physically arranged in a predetermined configuration pattern—a property that bears upon the amount of delay introduced by the signal processing circuit 18.
Several different spaced-apart transducer configuration patterns are illustrated in FIGS. 2A through 2F. Other configuration patterns are also possible. These configuration patterns may be used singly or in combination. The transducers 26 (speakers or microphones, as noted above) are each arranged according to a predefined configuration pattern, such as but not limited to, the examples shown in FIGS. 2A-2F. Note that it is not necessary for both speakers and microphones to be arranged according to the same configuration pattern. Thus, for example, the speakers could be arranged in a spiral configuration pattern, while the microphones are arranged in a square configuration pattern.
FIG. 2A illustrates a linear array, where the transducers are centered along a common straight line. The transducers are spaced apart according to a predefined spacing pattern (e.g., equally spaced, logarithmically spaced, or with other spacing) and each is preferably fed by a dedicated audio signal transmission line coupled to the signal processing circuit. In this way, the signal processing circuit is able to send or receive a precisely timed audio signal to each transducer. The effect of such precisely timed audio signals is to produce an array beam pattern.
In the case of the linear array shown in FIG. 2A, the beam pattern can be controlled by the signal processing circuit to provide one-dimensional beam steering. Thus, the linear array is well suited for deployment in flight deck locations where the linear array lies generally in a plane that includes the pilot's and copilot's heads.
FIG. 2B illustrates a curved or curvilinear array, where the transducers are centered along a common curved line. As with the linear array of FIG. 2A, the transducers may be equally spaced apart, or spaced apart with bilateral symmetry. The curvilinear array is well suited for deployment in portions of the cockpit having naturally curved surfaces, as dictated by the shape of the fuselage. As with the linear array, the transducers of the curvilinear array may be fed by a dedicated audio signal transmission line coupled to the signal processing circuit.
FIG. 2C illustrates a circular array. The transducers are preferably equally spaced, each fed by a dedicated audio signal transmission line coupled to the signal processing circuit. The circular array is well suited for placement on the ceiling of the cockpit, such as above and forward of the pilots' heads. The circular array can provide two-dimensional beam steering.
FIG. 2D illustrates a concentric circular array. The transducers are preferably equally spaced around each concentric circle, each fed by a dedicated audio signal transmission line coupled to the signal processing circuit. The concentric circular array is useful where the audio signals being handled by the signal processing circuit are subdivided into different frequency bands. In this regard, lower frequencies have longer wavelengths; conversely, higher frequencies have shorter wavelengths. Thus, the concentric circles are sized so that the transducers are spaced farther apart on the larger circles (better adapted to collectively reproduce or capture lower frequencies), and closer together on the smaller circles (better adapted to collectively reproduce or capture higher frequencies).
FIG. 2E illustrates a spiral array where the transducers 26 are spaced according to a logarithmic pattern. The transducers are each fed by a dedicated audio signal transmission line coupled to the signal processing circuit. Like the concentric circular array, the spiral array provides different transducer spacings. This is a natural consequence of following a spiral pattern; in addition, the individual transducers are spaced apart based on a logarithmic relationship. The spiral array is thus well adapted to provide a range of different transducer spacings to correspond with the different frequency bands mediated by the signal processing circuit. In this regard, the spiral array can provide more broadband frequency response, because the signal processing circuit has more transducer spacing options to work with when sending or receiving audio to selected transducers. Like the circular arrays, the spiral array is able to provide two-dimensional beam steering.
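The logarithmic spacing idea can be illustrated with a short sketch that generates spiral transducer positions. The spiral parameters `a`, `b` and the angular span below are hypothetical values chosen for illustration, not taken from the patent.

```python
import math

def log_spiral_positions(n, a=0.01, b=0.25, turns=2.0):
    """(x, y) positions of n transducers on a logarithmic spiral r = a*exp(b*theta).

    Because r grows exponentially with the angle, outer elements end up
    progressively farther apart than inner ones, matching the spiral
    array's mix of large spacings (low frequencies) and small spacings
    (high frequencies).
    """
    pts = []
    for i in range(n):
        theta = turns * 2.0 * math.pi * i / (n - 1)
        r = a * math.exp(b * theta)
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

# Eight transducer positions over two turns of the spiral.
pts = log_spiral_positions(8)
```

Each successive transducer sits at a strictly larger radius, so neighboring-element spacing grows monotonically along the spiral.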
FIG. 2F illustrates a square or rectilinear array. The transducers are each fed by a dedicated audio signal transmission line coupled to the signal processing circuit. The square array is similar to the circular array, providing two-dimensional beam steering. The square array is suited to applications where packaging constraints dictate.
While the transducer spacings shown in FIGS. 2A-2D are uniform, in practice the spacing between the transducers could be constant, linear, logarithmic or based on other factors.
Active Transducer Option
In all of the illustrated transducer configuration pattern embodiments, if desired, the transducers may be implemented as addressable active devices, each having a unique address. In such an embodiment, all transducers may be coupled to the signal processing circuit via a control bus that supplies a data signal carrying the audio information and an address signal designating which transducer shall act upon the supplied data signal.
To better understand how beam steering is accomplished, refer to FIG. 3, which illustrates two transducers (e.g., speakers) arranged in a linear array. When both speakers are fed coherently with the same sinusoidal audio signal, the sound waves emanating from each speaker are in phase and the sound will appear to come straight on from the plane of the speakers, i.e., from a direction perpendicular to the horizontal axis (as seen in FIG. 3).
However, when one of the speakers is fed by a signal that is delayed by a time increment dt, constructive and destructive interference between the respective wavefronts of the two speakers will produce the loudest collective sound in an angled direction, no longer perpendicular but at an angle θ to the horizontal axis, as shown in FIG. 3. The angled direction can be computed trigonometrically, knowing the wavelength of the audio frequency. Frequency (f) and wavelength (λ) are related through the speed of sound (c), according to the following equation:
f=c/λ
To steer the beam in the direction (angle θ) illustrated in FIG. 3, the signal from the speaker on the left (in FIG. 3) is delayed by a time dt computed to account for the fact that the signal from the speaker on the left must traverse the additional distance d in order for its wavefront to be in phase with the wavefront from the speaker on the right (in FIG. 3). This delay dt can be computed for a given angle θ using the following trigonometric relationship:
Delay dt=s sin(θ)/c
where s is the speaker separation and c is the speed of sound at the ambient temperature.
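The relationship dt = s sin(θ)/c can be checked with a minimal numeric sketch; the 0.1 m spacing, 30-degree steer angle and 343 m/s speed of sound are illustrative values, not taken from the disclosure.

```python
import math

def beam_delay(spacing_m, angle_deg, c=343.0):
    """Delay dt = s*sin(theta)/c needed to steer a two-element linear array
    to angle theta (measured from broadside), given element spacing s."""
    return spacing_m * math.sin(math.radians(angle_deg)) / c

# 0.1 m spacing, 30-degree steer angle: dt = 0.1 * sin(30 deg) / 343 s,
# roughly 146 microseconds.
dt = beam_delay(0.1, 30.0)
```

Note the delay depends only on geometry and the speed of sound, not on frequency; frequency enters when converting the delay to a phase shift at a particular wavelength.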
Comment about Transducer Spacing for Optimum Performance
When designing the spacing between transducers, it is recommended to choose a spacing that avoids formation of strong grating lobes or side lobes. Grating lobes are a consequence of having large and uniform distances between the individual transducer elements in relation to the acoustic wavelength. Therefore, preferably small spacing (relative to acoustic wavelength) should be chosen, so that grating lobes are minimized.
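The conventional rule of thumb for a uniform array is to keep the element spacing at or below half the wavelength of the highest frequency of interest. The λ/2 criterion and the 4 kHz example below are standard array-design conventions assumed for illustration; the disclosure itself states the requirement only qualitatively.

```python
def max_spacing(f_hz, c=343.0):
    """Largest uniform element spacing (m) that avoids grating lobes,
    using the conventional half-wavelength criterion: lambda/2 = c/(2f)."""
    return c / (2.0 * f_hz)

# For speech content up to about 4 kHz, spacing should stay under ~4.3 cm.
s = max_spacing(4000.0)
```

This is why closely packaged cockpit arrays favor small transducers: the spacing budget at speech frequencies is only a few centimeters.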
In addition, when designing a phased array communication system for the pilot communication system, it can be beneficial to implement the spaced-apart speakers and the spaced-apart microphones in a common package, so that the speakers and microphones are closely spaced to one another in relation to the acoustic wavelength. This can help minimize acoustic echo, a form of circular feedback where sounds from the speakers are picked up by the microphones and rebroadcast by the speakers. Active electronic acoustic echo cancellation processing can also be included in the signal processing circuit to reduce acoustic echo. Acoustic echo is also reduced because the speakers and microphones operate with steerable beam patterns (beam forming), so that sounds broadcast by the speaker array and sounds picked up by the microphone array can each be focused on different regions of space, thus eliminating conditions for a feedback loop.
Focused, Electronically Steerable Beam Pattern
The phased array technique produces a beam pattern that can concentrate the acoustic signal into a much smaller region of space than is normally produced by a single transducer. Moreover, a phased array of transducers can focus the acoustic signal on a particular region of space by electronically adjusting the phase shift applied to each transducer.
For comparison, FIGS. 4A and 4B show an exemplary sound pressure pattern for an individual transducer operated at a frequency in the nominal range of human voiced speech, e.g., 500 Hz. Note the pattern is essentially the same in azimuth and elevation and has a wide dispersal pattern. Because this pattern is generated by a single transducer, there is no phased-array beam pattern. The pattern is largely hemispherical or cardioid in shape, with little gain in any direction. At high frequencies, when the transducer dimension is larger than the acoustic wavelength, a single acoustic transducer does exhibit a non-uniform directivity pattern. However, the directivity is not as focused and adjustable as that of a phased array system.
FIGS. 5A and 5B show the comparable sound pressure pattern for an array of transducers operated at the same frequency as FIGS. 4A and 4B. Note the pattern in both azimuth and elevation is significantly focused. The same amount of energy that produced the pattern of FIGS. 4A and 4B is confined to a much narrower beam pattern in FIGS. 5A and 5B, thus producing a much higher sound pressure level along the beam's axis (higher gain) and lower sound energy in other directions. By using a narrower beam pattern that is steerable, the disclosed communication system is able to minimize the sound energy transmitted into the passenger cabin.
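The focusing shown in FIGS. 5A and 5B follows from the array factor of N coherently driven elements: on the beam axis the element contributions add in phase, giving a pressure gain of N, while off axis they partially cancel. A sketch for an unsteered uniform linear array, with element count, spacing and frequency chosen purely for illustration:

```python
import cmath
import math

def array_factor(n, spacing_m, freq_hz, theta_deg, c=343.0):
    """Magnitude of the summed unit phasors for an unsteered uniform
    linear array of n elements, with theta measured from broadside."""
    k = 2.0 * math.pi * freq_hz / c                      # acoustic wavenumber
    psi = k * spacing_m * math.sin(math.radians(theta_deg))
    return abs(sum(cmath.exp(1j * m * psi) for m in range(n)))

# Eight elements, 5 cm apart, at 500 Hz: on axis all phasors align,
# so the array factor equals the element count.
on_axis = array_factor(8, 0.05, 500.0, 0.0)
off_axis = array_factor(8, 0.05, 500.0, 45.0)
```

The on-axis value of 8 versus a smaller off-axis value is the narrow-beam, high-gain behavior the figures depict.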
FIG. 6 illustrates an exemplary signal processing circuit 18 that may be used to process the audio content broadcast by the speaker transducers. This embodiment treats the audio content as a single band of frequencies. It is thus well suited for speaker arrays arranged as a linear array (FIG. 2A) and may also be effectively used with the curvilinear array (FIG. 2B) and the circular array (FIG. 2C). The audio source 30 can supply audio from a number of sources, including the avionics communication system 11 (FIG. 1), which typically includes a communication radio (e.g., for air traffic control), a sound mixing and routing system used for inter-cabin and cockpit communication (e.g., between pilot and copilot, or with flight attendants and passengers), and audio from the alerting system.
For purposes of illustration, it is assumed that the audio source 30 carries analog audio signals, which are then digitized by the analog to digital (ADC) circuit 32. For illustration purposes an ADC having 16-bit resolution is depicted. Other resolution bit depths are also possible. The ADC supplies the digitized audio signals to a field programmable gate array (FPGA) 34, which is configured by programming to define n-signal paths, to accommodate n-copies of the digital audio stream, where n is the number of speakers implemented in the array. Thus, if an eight-speaker array is implemented, the FPGA will have eight digital audio streams (channels). To each audio stream the FPGA applies a calculated delay time (which could be a null or zero-delay time), collectively designed to steer the collective beam emanating from the speaker array in a particular direction. Details of these delay calculations are discussed below. As noted above, the functions performed by the disclosed FPGA can also be performed using one or more microprocessors, microcontrollers, digital signal processors (DSP), application specific integrated circuits, and/or combinations thereof.
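The per-channel delay step can be sketched in software as follows. This is a simplified model of the FPGA's function, not the patent's implementation: the sample rate and delays are illustrative, and each delay is rounded to a whole number of samples.

```python
def fan_out_with_delays(samples, delays_s, fs_hz):
    """Replicate one digital audio stream once per speaker channel and
    prepend each channel's delay as zero-valued samples.

    Delays are quantized to the sample grid (round to nearest sample);
    tails are zero-padded so every channel has equal length for the DAC.
    """
    channels = []
    for d in delays_s:
        pad = round(d * fs_hz)                 # delay in whole samples
        channels.append([0.0] * pad + list(samples))
    longest = max(len(ch) for ch in channels)
    return [ch + [0.0] * (longest - len(ch)) for ch in channels]

# Three channels at fs = 48 kHz: no delay, a 2-sample delay, a 1-sample delay.
out = fan_out_with_delays([1.0, -1.0], [0.0, 2 / 48000, 1 / 48000], 48000)
```

A hardware implementation would use fractional-delay filters for sub-sample accuracy; integer-sample delay keeps the sketch simple.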
Once a delay time (which could be a zero-delay time) has been applied to each audio stream individually, the FPGA outputs the digital audio streams to a multi-channel digital to analog convertor (DAC) 36, which converts each digital audio stream into an analog signal. In the illustrated embodiment the multi-channel DAC provides 16-bit resolution to match the resolution of the ADC. Other resolution bit depths are also possible. The multi-channel DAC provides a number of independent channels sufficient to individually process each of the digital audio streams.
Once converted to analog signals, the audio streams are processed by a bank of low pass filters 38. The low pass filter bank 38 includes one low pass filter dedicated to each of the analog audio streams (i.e., one for each speaker in the array). In the illustrated embodiment each filter provides a 3 dB roll-off at 100 kHz. The filter allows the audio signals within the human hearing range to pass without attenuation, but blocks frequencies well above that range, to prevent digital clock noise and other spurious signals from being delivered to the amplifier stage 40. Other filter roll-off frequencies and filter slopes are also possible. The amplifier stage 40 provides one channel of amplification for each of the audio signals. Each amplifier provides low distortion signal gain and impedance matching to the speaker 14, so that suitable listening levels within the cockpit can be achieved. If desired, portions or all of the multi-channel components downstream of the FPGA 34 can be bundled or packaged with each speaker, thus allowing digital audio to be distributed to the speaker array. In such an embodiment a sync signal is used to load all of the DACs at the exact same time. Such a sync signal is in addition to the digital audio signals provided by existing digital audio standards.
Because the system calculates steering to an actual point in space, rather than merely steering the array to an angle, the time delays calculated by the signal processing circuit can be based solely on the distance from each speaker (or microphone) to the defined steer point and the speed of sound. In this regard, frequency subdivision helps the off-axis behavior. Having uniformly spaced speakers/microphones transmitting the same signal will produce different constructive/destructive interference patterns as a function of frequency. The higher the frequency, the narrower the primary node becomes, at the expense of significantly more adverse anti-nodes. This means that small changes in position generate large amplitude variation, which can be a major annoyance and distraction for the flight crews. The frequency subdivision technique uses the difference in speaker/microphone position, coupled with the relative filtering, to maintain the same beam width at the steering point across the frequency range. This will, to some extent, reduce the SNR/gain of the overall array in order to preserve off-axis behavior without increasing sound bleed back to the cabin. Thus, if the pilots move their heads away from the steer point, they will not perceive as drastic an amplitude variance.
FIG. 7 illustrates another exemplary signal processing circuit 18 embodiment that may be used to process the audio content that is subdivided into frequency bands, such as high, mid and low frequency bands. This multi-band embodiment is well suited for speaker arrays that have been optimized in clusters, to favor different frequency ranges, such as the concentric array (FIG. 2D) and spiral array (FIG. 2E). In this embodiment the audio source 30 is split on the basis of frequency by crossover network 50 into plural frequency bands. For illustration, three frequency bands or channels have been illustrated (high channel 52, mid channel 54, low channel 56). A greater or fewer number of bands may be implemented as required by the application.
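A crossover of the kind performed by network 50 can be approximated digitally. The Python sketch below uses complementary first-order filters so that the three bands sum back to the original signal; the corner frequencies, filter order, and function names are illustrative assumptions of this sketch, not values from the patent.

```python
import math

def one_pole_lowpass(x, fc, fs=48_000):
    """First-order IIR low-pass: a minimal stand-in for one leg of a
    crossover network (illustrative coefficients only)."""
    a = math.exp(-2.0 * math.pi * fc / fs)
    y, prev = [], 0.0
    for s in x:
        prev = (1.0 - a) * s + a * prev
        y.append(prev)
    return y

def crossover_3way(x, f_lo=300.0, f_hi=3_000.0, fs=48_000):
    """Split one stream into low / mid / high bands by complementary
    filtering, so that low + mid + high reconstructs the input."""
    low = one_pole_lowpass(x, f_lo, fs)
    low_plus_mid = one_pole_lowpass(x, f_hi, fs)
    mid = [lm - l for lm, l in zip(low_plus_mid, low)]
    high = [s - lm for s, lm in zip(x, low_plus_mid)]
    return low, mid, high
```

Because the mid and high bands are formed by subtraction, the three bands sum back to the input exactly, which keeps the bands phase-coherent when they are later recombined acoustically.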
Subdividing the audio spectrum into different frequency ranges or bands provides better control over how sounds at different frequencies may be delivered at a particular position in space with pinpoint accuracy. The reason for this is that the time delays needed to steer an acoustic beam to a particular point in space are frequency dependent (wavelength dependent). In the calculations discussed below, one finds that the wavelength of the acoustic wave plays a key role in the time delay calculation, and wavelength is inversely related to frequency as a matter of fundamental physics.
Thus, by subdividing the range of usable frequencies into different bands, it becomes possible for the FPGA 34 to calculate appropriate time delays for each band or range of frequencies. By producing greater precision in focusing the acoustic energy, the multi-band embodiment is better able to deliver the full spectrum of broadcast sound directly to the pilot's ears. This accuracy also helps improve intelligibility because all frequency content required for good speech intelligibility is delivered without phase error. In this regard, the vowel sounds in human speech tend to lie lower in the speech frequency range, while the consonant sounds tend to lie higher. Speech signals are more intelligible if these respective vowel and consonant sounds arrive properly synchronized at the human ear. Highly accurate speech reproduction can be quite important in the aircraft cockpit to overcome the masking effect caused by the high ambient noise levels during flight.
The high, mid and low channels 52-56 are converted simultaneously into digital audio streams by the simultaneous multi-channel ADC 58. It is important that these channels are simultaneously digitized so that the digitization process does not introduce any time discrepancies among the multiple channels. The reason for this was discussed above—to ensure phase coherence among the frequency bands so that beam focus is accurate and speech intelligibility is not degraded.
The multiple bands (in this case high, mid, low) of digital audio are then processed by FPGA 34, essentially as discussed above, to produce individual time delayed audio streams for each of the speakers 14 in the array. Thus, the post processing stages following the FPGA 34 include the DAC 36, the low pass filter bank 38, and the amplifier stage 40, which function essentially as discussed above.
When the concentric circular array (FIG. 2D) or spiral array (FIG. 2E) is employed, additional control over the sound delivery becomes possible. While all of the speakers in the concentric circular or spiral arrays can be driven in the same manner as the linear array (without regard to frequency band), the concentric circular and spiral arrays are capable of better performance when fed with frequency in mind. For example, the speakers in the outermost orbit of the concentric circular array may be fed the lower frequencies while the speakers closer to the center of the array are fed the higher frequencies. In this way the array can exercise greater control over not only sound placement (where in space the signal is heard), but also sound fidelity (the richness of tonality heard at the placement location). Such frequency equalization (EQ) can also apply to the linear array. For example, the outer speakers in the linear array can be low-pass filtered, while the center ones can be high-pass filtered.
Individual speakers work by moving a mass of air through the pumping action of an electromagnetically driven piston or other movement producing device that is coupled to a speaker cone having a predefined surface area. Moving low frequencies (long wavelengths) requires movement of more air mass than is required for higher frequencies. This is why conventional bass frequency speakers usually have larger speaker cones than high frequency speakers.
A practical embodiment of a pilot communication system for an aircraft cockpit usually dictates use of smaller-sized speaker cones, because space is limited. While it is possible to implement a system using different sizes of speaker cones, doing so may not be practical or necessary to achieve good fidelity. At low frequencies, the principal advantage the system gains from spatial separation is tied to the wavelength: longer wavelengths need more separation between speakers to achieve directionality, which is important. However, as frequency increases, that larger spatial separation between speakers causes increased peak/lull sideband behavior. This is where the crossover filtering comes into play and is why the inner speakers handle the higher frequencies. Thus, the relative distance between a grouping of speakers defines the wavelengths (and conversely the frequencies) at which they achieve acceptable directionality and sound field behavior.
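The spacing-versus-frequency trade described above follows the standard half-wavelength rule of array theory. The rule and the numbers in the sketch below are textbook background offered for context, not figures from the patent: elements spaced d apart steer cleanly only up to roughly f = c / (2d), which is consistent with assigning the higher frequencies to the closely spaced inner speakers.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def max_unaliased_frequency(spacing_m):
    """Half-wavelength limit: above c / (2 * d) a uniformly spaced array
    produces grating lobes (the peak/lull sideband behavior noted above)."""
    return SPEED_OF_SOUND / (2.0 * spacing_m)
```

For example, 34.3 mm element spacing supports clean steering up to about 5 kHz, while ten times that spacing supports only about 500 Hz.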
Referring to FIG. 8, the processing of signals from the microphone array 16 is handled in a similar fashion. The individual microphones 62 are each fed to dedicated channels of the microphone amplifier stage 64 and then to the simultaneous multi-channel ADC 66, which converts the microphone signals to digital signals. These digital signals are then input to the FPGA 34.
The FPGA controls the sample time of each microphone input relative to other microphone inputs, thereby electronically controlling the directivity of the overall sound received by the array of microphones. The received audio signal from the groups of microphones can then be digitally filtered, processed, and combined to create a customized highly directional received signal for each pilot while minimizing the noise from other sources or directions.
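The receive-side combining described above is conventionally known as delay-and-sum beamforming. A minimal Python sketch follows, with invented names and whole-sample delays assumed for simplicity (the patent's FPGA would operate on its own sample timing):

```python
def delay_and_sum(channels, delays_samples):
    """Shift each microphone channel by its steering delay (in samples)
    and average: sound arriving from the steer point adds in phase,
    while off-axis sound is attenuated by incoherent summation."""
    n = max(len(c) + d for c, d in zip(channels, delays_samples))
    out = [0.0] * n
    for c, d in zip(channels, delays_samples):
        for i, s in enumerate(c):
            out[i + d] += s
    return [s / len(channels) for s in out]
```

With the correct delays, an impulse that reaches the second microphone one sample later than the first is realigned and reinforced in the combined output.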
Calculation of Time Delays
The FPGA 34 calculates the applicable time delays using the same approach for the linear, circular and spiral arrays, except that linear array delays are computed in only two (x,y) dimensions. The FPGA is supplied with the (x,y,z) coordinates of each speaker and microphone in space. Then, within the same (x,y,z) coordinate reference frame, the FPGA is supplied with, or calculates, the steer point for both the microphone and speaker arrays. In a typical embodiment, the steer points for the microphones and speakers would likely be in the direction of the pilot, but they could differ depending on the particular application. The FPGA then determines the distance from each speaker/microphone to the steer points and divides that distance by the speed of sound. This gives the time the sound waves will take to traverse the distance:
d_t = √(x_dist² + y_dist² + z_dist²) / (speed of sound)
The differences between the various travel times then amount to the time delays (or time advances) applied to each signal by the FPGA. In one embodiment, fixed location(s) may be used as the steer point (e.g., the nominal fixed locations of the pilots' heads). In another embodiment, the steer point is dynamically computed using image tracking. For example, image tracking may be performed using optical or LiDAR sensing and recognition of the pilots' faces, heads, mouths, ears or the like. Other sensing technologies may also be used.
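The travel-time formula above, together with the conversion of travel-time differences into per-element delays, can be sketched as follows. This is a Python illustration only: the function names and the convention of time-aligning to the farthest element are assumptions of this sketch, and 343 m/s is a nominal speed of sound.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, nominal

def steering_delays(transducer_xyz, steer_point):
    """Travel time from each element to the steer point, converted to
    relative delays: the farthest element gets zero delay, and nearer
    elements wait so that all wavefronts coincide at the steer point."""
    travel = [math.dist(p, steer_point) / SPEED_OF_SOUND
              for p in transducer_xyz]
    t_max = max(travel)
    return [t_max - t for t in travel]
```

In a tracking embodiment, the steer point argument would simply be updated from the head-position sensor and the delays recomputed.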
While at least one exemplary embodiment has been presented in the foregoing detailed description, it should be appreciated that a vast number of variations exist. It should also be appreciated that the exemplary embodiment or exemplary embodiments are only examples, and are not intended to limit the scope, applicability, or configuration of the invention in any way. Rather, the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing an exemplary embodiment as contemplated herein. It should be understood that various changes may be made in the function and arrangement of elements described in an exemplary embodiment without departing from the scope of the invention as set forth in the appended claims.

Claims (20)

What is claimed is:
1. In an aircraft cockpit that defines an acoustic space with at least one pilot seating location disposed therein and that includes an avionics communication system, a pilot communication system comprising:
a transducer array comprising a plurality of individual acoustic transducers disposed in a spaced relation to one another and combined for deployment within the cockpit;
each of the plurality of acoustic transducers converting sound information between sound information expressed as an electrical signal and sound information expressed as an acoustic wave;
a signal processing circuit having an input port that receives sound information and an output port that supplies sound information after being processed by the signal processing circuit;
the input port being configured for coupling to one of: the transducer array and the avionics communication system;
the output port being configured to couple to the other of: the transducer array and the avionics communication system;
the signal processing circuit being coupled to the transducer array to electrically interface with each of the plurality of transducers individually;
the signal processing circuit selectively inserting a set of time delays associated with at least some of the plurality of individual transducers to form a coverage beam within the acoustic space of the cockpit;
the signal processing circuit selectively controlling the time delays associated with the at least some of the plurality of individual transducers to steer the coverage beam in the direction of the pilot seating location.
2. The pilot communication system of claim 1 wherein the transducer array comprises a plurality of individual speakers.
3. The pilot communication system of claim 1 wherein the transducer array comprises a plurality of individual microphones.
4. The pilot communication system of claim 1 wherein the input port receives sound information in real time.
5. The pilot communication system of claim 1 wherein the input port and output port of the signal processing circuit both communicate electrical signals carrying sound information.
6. The pilot communication system of claim 1 wherein the port configured to couple to the transducer array communicates electrical signals in parallel to each of the plurality of individual acoustic transducers.
7. The pilot communication system of claim 1 wherein the signal processing system subdivides the sound information into a plurality of different frequency bands that are each separately and differently processed in selectively controlling the time delays associated with the at least some of the plurality of individual transducers.
8. The pilot communication system of claim 1 wherein the signal processing circuit focuses and steers the directivity of the coverage beam.
9. The pilot communication system of claim 1 wherein the signal processing circuit dynamically focuses and steers the directivity of the coverage beam by tracking the position of a pilot.
10. The pilot communication system of claim 1 wherein the signal processing circuit focuses the directivity of the coverage beam to reduce the audibility of flight deck alerts and warnings reaching a passenger cabin area within the aircraft.
11. The pilot communication system of claim 1 wherein the transducers are disposed in a substantially linear array.
12. The pilot communication system of claim 1 wherein the transducers are disposed in a substantially circular array.
13. The pilot communication system of claim 1 wherein the transducers are disposed in a spiral array.
14. The pilot communication system of claim 1 wherein the transducers are disposed in a locus of substantially concentric circles.
15. In an aircraft cockpit that defines an acoustic space with at least one pilot seating location disposed therein and that includes an avionics communication system, a pilot communication system comprising:
a microphone array comprising a plurality of individual microphones disposed in a spaced relation to one another and combined for deployment within the cockpit;
each of the plurality of microphones converting sound information expressed as an acoustic wave into sound information expressed as an electrical signal;
a speaker array comprising a plurality of individual speakers disposed in a spaced relation to one another and combined for deployment within the cockpit;
each of the plurality of speakers converting sound information expressed as an electrical signal into sound information expressed as an acoustic wave;
a signal processing system having a first input port that receives sound information from the avionics communication system and a second input port that receives sound information from the microphone array, the signal processing circuit being coupled to the microphone array to electrically interface with each of the plurality of microphones individually, the signal processing circuit selectively inserting a first set of time delays associated with at least some of the plurality of individual microphones to form a first coverage beam within the acoustic space of the cockpit;
the signal processing circuit having a first output port that supplies sound information to the avionics communication system and a second output port that supplies sound information to the speaker array, the signal processing circuit being coupled to the speaker array to electrically interface with each of the plurality of speakers individually, the signal processing circuit selectively inserting a second set of time delays associated with at least some of the plurality of individual speakers to form a second coverage beam within the acoustic space of the cockpit.
16. The pilot communication system of claim 15 further comprising:
the signal processing circuit selectively controlling the first and second time delays associated with the at least some of the plurality of individual transducers to steer the first and second coverage beams towards different locations within the acoustic space of the cockpit.
17. The pilot communication system of claim 15 further comprising:
the signal processing circuit selectively controlling the first and second time delays associated with the at least some of the plurality of individual transducers to steer the first coverage beam in the direction of a first pilot seating location and to steer the second coverage beam in the direction of a second pilot seating location.
18. The pilot communication system of claim 15 wherein the microphones are disposed in a spaced apart array of spatial configuration selected from the group consisting of: linear, curvilinear, circular, concentric circular, square and spiral and combinations thereof.
19. The pilot communication system of claim 15 wherein the speakers are disposed in a spaced apart array of spatial configuration selected from the group consisting of: linear, curvilinear, circular, concentric circular, square and spiral and combinations thereof.
20. The pilot communication system of claim 15 wherein the plurality of microphones and the plurality of speakers are disposed in spatial configurations that differ from one another.
US15/929,383 2020-04-29 2020-04-29 Phased array speaker and microphone system for cockpit communication Active 2040-05-20 US11170752B1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US15/929,383 US11170752B1 (en) 2020-04-29 2020-04-29 Phased array speaker and microphone system for cockpit communication
EP21170824.3A EP3905715A1 (en) 2020-04-29 2021-04-28 Phased array speaker and microphone system for cockpit communication
CN202110470482.7A CN113573210B (en) 2020-04-29 2021-04-29 Phased array speaker and microphone system for cockpit communications

Publications (2)

Publication Number Publication Date
US20210343267A1 US20210343267A1 (en) 2021-11-04
US11170752B1 true US11170752B1 (en) 2021-11-09

Family

ID=75728642

Country Status (3)

Country Link
US (1) US11170752B1 (en)
EP (1) EP3905715A1 (en)
CN (1) CN113573210B (en)

Citations (79)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4819269A (en) * 1987-07-21 1989-04-04 Hughes Aircraft Company Extended imaging split mode loudspeaker system
US4866776A (en) * 1983-11-16 1989-09-12 Nissan Motor Company Limited Audio speaker system for automotive vehicle
US5838284A (en) 1996-05-17 1998-11-17 The Boeing Company Spiral-shaped array for broadband imaging
US20020090093A1 (en) * 2001-01-09 2002-07-11 Michael Fabry Vehicle electroacoustical transducing
US20030063756A1 (en) * 2001-09-28 2003-04-03 Johnson Controls Technology Company Vehicle communication system
US20030142835A1 (en) * 2002-01-31 2003-07-31 Takeshi Enya Sound output apparatus for an automotive vehicle
US20040066940A1 (en) * 2002-10-03 2004-04-08 Silentium Ltd. Method and system for inhibiting noise produced by one or more sources of undesired sound from pickup by a speech recognition unit
US20040240676A1 (en) * 2003-05-26 2004-12-02 Hiroyuki Hashimoto Sound field measurement device
US20050213786A1 (en) * 2004-01-13 2005-09-29 Cabasse Acoustic system for vehicle and corresponding device
US7099483B2 (en) * 2003-02-24 2006-08-29 Alps Electric Co., Ltd. Sound control system, sound control device, electronic device, and method for controlling sound
US20060262943A1 (en) * 2005-04-29 2006-11-23 Oxford William V Forming beams with nulls directed at noise sources
US20060269074A1 (en) * 2004-10-15 2006-11-30 Oxford William V Updating modeling information based on offline calibration experiments
US20070025562A1 (en) * 2003-08-27 2007-02-01 Sony Computer Entertainment Inc. Methods and apparatus for targeted sound detection
US20070127736A1 (en) 2003-06-30 2007-06-07 Markus Christoph Handsfree system for use in a vehicle
US20070135061A1 (en) * 2005-07-28 2007-06-14 Markus Buck Vehicle communication system
US20070211574A1 (en) * 2003-10-08 2007-09-13 Croft James J Iii Parametric Loudspeaker System And Method For Enabling Isolated Listening To Audio Material
US20080187156A1 (en) * 2006-09-22 2008-08-07 Sony Corporation Sound reproducing system and sound reproducing method
US20080212788A1 (en) * 2005-05-26 2008-09-04 Bang & Olufsen A/S Recording, Synthesis And Reproduction Of Sound Fields In An Enclosure
US20100104110A1 (en) * 2007-12-14 2010-04-29 Panasonic Corporation Noise reduction device and noise reduction system
US20100128880A1 (en) * 2008-11-20 2010-05-27 Leander Scholz Audio system
US7760889B2 (en) * 2004-08-10 2010-07-20 Volkswagen Ag Speech support system for a vehicle
US20100226507A1 (en) * 2009-03-03 2010-09-09 Funai Electric Co., Ltd. Microphone Unit
US20120093338A1 (en) * 2010-10-18 2012-04-19 Avaya Inc. System and method for spatial noise suppression based on phase information
US8190438B1 (en) * 2009-10-14 2012-05-29 Google Inc. Targeted audio in multi-dimensional space
US20120288124A1 (en) * 2011-05-09 2012-11-15 Dts, Inc. Room characterization and correction for multi-channel audio
US20130039506A1 (en) * 2011-08-11 2013-02-14 Sony Corporation Headphone device
US8422693B1 (en) * 2003-09-29 2013-04-16 Hrl Laboratories, Llc Geo-coded spatialized audio in vehicles
US20130142353A1 (en) * 2010-07-30 2013-06-06 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Vehicle with Sound Wave Reflector
US8615392B1 (en) * 2009-12-02 2013-12-24 Audience, Inc. Systems and methods for producing an acoustic field having a target spatial pattern
US20140056431A1 (en) * 2011-12-27 2014-02-27 Panasonic Corporation Sound field control apparatus and sound field control method
US20140112496A1 (en) * 2012-10-19 2014-04-24 Carlo Murgia Microphone placement for noise cancellation in vehicles
US20140136981A1 (en) * 2012-11-14 2014-05-15 Qualcomm Incorporated Methods and apparatuses for providing tangible control of sound
US20140294210A1 (en) * 2011-12-29 2014-10-02 Jennifer Healey Systems, methods, and apparatus for directing sound in a vehicle
US20140294198A1 (en) * 2013-03-27 2014-10-02 Pinnacle Peak Holding Corporation d/b/a Setcom Corporation Feedback cancellation for vehicle communications system
US20150110285A1 (en) * 2013-10-21 2015-04-23 Harman International Industries, Inc. Modifying an audio panorama to indicate the presence of danger or other events of interest
US20150127351A1 (en) * 2012-06-10 2015-05-07 Nuance Communications, Inc. Noise Dependent Signal Processing For In-Car Communication Systems With Multiple Acoustic Zones
US20150124988A1 (en) * 2013-11-07 2015-05-07 Continental Automotive Systems,Inc. Cotalker nulling based on multi super directional beamformer
US20150127338A1 (en) * 2013-11-07 2015-05-07 Continental Automotive Systems, Inc. Co-talker nulling for automatic speech recognition systems
US20150210214A1 (en) * 2012-08-30 2015-07-30 Volvo Truck Corporation Presentation of an audible message in a vehicle
US20160027428A1 (en) * 2014-07-15 2016-01-28 Hassan Faqir Gul Noise cancellation system
US20160029124A1 (en) * 2014-07-25 2016-01-28 2236008 Ontario Inc. System and method for mitigating audio feedback
US9352701B2 (en) * 2014-03-06 2016-05-31 Bose Corporation Managing telephony and entertainment audio in a vehicle audio platform
US20160174010A1 (en) * 2014-12-12 2016-06-16 Qualcomm Incorporated Enhanced auditory experience in shared acoustic space
US20160183025A1 (en) * 2014-12-22 2016-06-23 2236008 Ontario Inc. System and method for speech reinforcement
US20160219368A1 (en) * 2013-09-26 2016-07-28 Bang & Olufsen A/S A loudspeaker transducer arrangement
US20160323668A1 (en) 2015-04-30 2016-11-03 Shure Acquisition Holdings, Inc. Array microphone system and method of assembling the same
US20160379618A1 (en) * 2015-06-25 2016-12-29 Bose Corporation Arraying speakers for a uniform driver field
US20170011753A1 (en) * 2014-02-27 2017-01-12 Nuance Communications, Inc. Methods And Apparatus For Adaptive Gain Control In A Communication System
US20170142507A1 (en) * 2015-11-17 2017-05-18 Chung Yuan Christian University Electronic helmet and method thereof for cancelling noises
US20170150254A1 (en) * 2015-11-19 2017-05-25 Vocalzoom Systems Ltd. System, device, and method of sound isolation and signal enhancement
US20170213541A1 (en) * 2016-01-25 2017-07-27 Ford Global Technologies, Llc System and method for personalized sound isolation in vehicle audio zones
US20170267138A1 (en) * 2016-03-17 2017-09-21 Bose Corporation Acoustic Output Through Headrest Wings
US20180025718A1 (en) * 2015-02-13 2018-01-25 Harman Becker Automotive Systems Gmbh Active noise and awareness control for a helmet
US9953641B2 (en) * 2015-10-27 2018-04-24 Panasonic Intellectual Property Management Co., Ltd. Speech collector in car cabin
US9966059B1 (en) * 2017-09-06 2018-05-08 Amazon Technologies, Inc. Reconfigurale fixed beam former using given microphone array
US20180167725A1 (en) 2016-12-08 2018-06-14 Infobank Corp. Apparatus and method for providing phone call in a vehicle
US10002478B2 (en) * 2014-12-12 2018-06-19 Qualcomm Incorporated Identification and authentication in a shared acoustic space
US10049686B1 (en) * 2017-02-13 2018-08-14 Bose Corporation Audio systems and method for perturbing signal compensation
US20180242081A1 (en) * 2017-02-17 2018-08-23 2236008 Ontario Inc. System and method for feedback control for in-car communications
US20180277089A1 (en) * 2017-03-21 2018-09-27 Ruag Schweiz Ag Active noise control system in an aircraft and method to reduce the noise in the aircraft
US20180374469A1 (en) * 2017-06-26 2018-12-27 Invictus Medical, Inc. Active Noise Control Microphone Array
US20190098408A1 (en) * 2017-09-26 2019-03-28 Bose Corporation Audio hub
US20190104360A1 (en) * 2017-10-03 2019-04-04 Bose Corporation Spatial double-talk detector
CN109862472A (en) 2019-02-21 2019-06-07 中科上声(苏州)电子有限公司 A kind of car privacy call method and system
US20190364359A1 (en) * 2018-05-24 2019-11-28 Nureva, Inc. Method, apparatus and computer-readable media to manage semi-constant (persistent) sound sources in microphone pickup/focus zones
US20190359127A1 (en) * 2018-05-22 2019-11-28 Blackberry Limited Vehicle communication systems and methods of operating vehicle communication systems
US20190369615A1 (en) * 2018-06-05 2019-12-05 Facultad Politecnica-Universidad Nacional del Este System and method for verifying the presence and vital signs of pilots in a cockpit of an airplane
US20200074978A1 (en) * 2017-03-07 2020-03-05 Sony Corporation Signal processing device and method, and program
US10652663B1 (en) * 2019-04-30 2020-05-12 Cisco Technology, Inc. Endpoint device using the precedence effect to improve echo cancellation performance
US20200194023A1 (en) * 2018-12-18 2020-06-18 Gm Cruise Holdings Llc Systems and methods for active noise cancellation for interior of autonomous vehicle
US20200204464A1 (en) * 2018-12-24 2020-06-25 Panasonic Avionics Corporation Secure wireless vehicle parameter streaming
US20200219493A1 (en) * 2019-01-07 2020-07-09 2236008 Ontario Inc. Voice control in a multi-talker and multimedia environment
US20200312344A1 (en) * 2019-03-28 2020-10-01 Bose Corporation Cancellation of vehicle active sound management signals for handsfree systems
US20200342846A1 (en) * 2017-12-20 2020-10-29 Harman International Industries, Incorporated Virtual test environment for active noise management systems
US20200404443A1 (en) * 2018-03-08 2020-12-24 Sony Corporation Electronic device, method and computer program
US20210067872A1 (en) * 2019-08-27 2021-03-04 Fujitsu Client Computing Limited Information Processing Apparatus And Computer-Readable Recording Medium
US20210067873A1 (en) * 2017-12-29 2021-03-04 Harman International Industries, Incorporated Acoustical in-cabin noise cancellation system for far-end telecommunications
US20210120338A1 (en) * 2019-10-18 2021-04-22 Faurecia Clarion Electronics Europe Method for processing a signal from an acoustic emission system of a vehicle and vehicle comprising this acoustic emission system
US20210204059A1 (en) * 2019-12-30 2021-07-01 Harman International Industries, Incorporated Voice ducking with spatial speech separation for vehicle audio system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106031195B (en) * 2014-02-06 2020-04-17 邦&奥夫森公司 Sound converter system for directivity control, speaker and method of using the same
US9809163B2 (en) * 2015-04-14 2017-11-07 Harman International Industries, Incorporation Techniques for transmitting an alert towards a target area
US10349199B2 (en) * 2017-04-28 2019-07-09 Bose Corporation Acoustic array systems
CN209627688U (en) * 2019-05-28 2019-11-12 安徽奥飞声学科技有限公司 A kind of earpiece and communication device with MEMS loudspeaker array

Patent Citations (79)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4866776A (en) * 1983-11-16 1989-09-12 Nissan Motor Company Limited Audio speaker system for automotive vehicle
US4819269A (en) * 1987-07-21 1989-04-04 Hughes Aircraft Company Extended imaging split mode loudspeaker system
US5838284A (en) 1996-05-17 1998-11-17 The Boeing Company Spiral-shaped array for broadband imaging
US20020090093A1 (en) * 2001-01-09 2002-07-11 Michael Fabry Vehicle electroacoustical transducing
US20030063756A1 (en) * 2001-09-28 2003-04-03 Johnson Controls Technology Company Vehicle communication system
US20030142835A1 (en) * 2002-01-31 2003-07-31 Takeshi Enya Sound output apparatus for an automotive vehicle
US20040066940A1 (en) * 2002-10-03 2004-04-08 Silentium Ltd. Method and system for inhibiting noise produced by one or more sources of undesired sound from pickup by a speech recognition unit
US7099483B2 (en) * 2003-02-24 2006-08-29 Alps Electric Co., Ltd. Sound control system, sound control device, electronic device, and method for controlling sound
US20040240676A1 (en) * 2003-05-26 2004-12-02 Hiroyuki Hashimoto Sound field measurement device
US20070127736A1 (en) 2003-06-30 2007-06-07 Markus Christoph Handsfree system for use in a vehicle
US20070025562A1 (en) * 2003-08-27 2007-02-01 Sony Computer Entertainment Inc. Methods and apparatus for targeted sound detection
US8422693B1 (en) * 2003-09-29 2013-04-16 Hrl Laboratories, Llc Geo-coded spatialized audio in vehicles
US20070211574A1 (en) * 2003-10-08 2007-09-13 Croft James J Iii Parametric Loudspeaker System And Method For Enabling Isolated Listening To Audio Material
US20050213786A1 (en) * 2004-01-13 2005-09-29 Cabasse Acoustic system for vehicle and corresponding device
US7760889B2 (en) * 2004-08-10 2010-07-20 Volkswagen Ag Speech support system for a vehicle
US20060269074A1 (en) * 2004-10-15 2006-11-30 Oxford William V Updating modeling information based on offline calibration experiments
US20060262943A1 (en) * 2005-04-29 2006-11-23 Oxford William V Forming beams with nulls directed at noise sources
US20080212788A1 (en) * 2005-05-26 2008-09-04 Bang & Olufsen A/S Recording, Synthesis And Reproduction Of Sound Fields In An Enclosure
US20070135061A1 (en) * 2005-07-28 2007-06-14 Markus Buck Vehicle communication system
US20080187156A1 (en) * 2006-09-22 2008-08-07 Sony Corporation Sound reproducing system and sound reproducing method
US20100104110A1 (en) * 2007-12-14 2010-04-29 Panasonic Corporation Noise reduction device and noise reduction system
US20100128880A1 (en) * 2008-11-20 2010-05-27 Leander Scholz Audio system
US20100226507A1 (en) * 2009-03-03 2010-09-09 Funai Electric Co., Ltd. Microphone Unit
US8190438B1 (en) * 2009-10-14 2012-05-29 Google Inc. Targeted audio in multi-dimensional space
US8615392B1 (en) * 2009-12-02 2013-12-24 Audience, Inc. Systems and methods for producing an acoustic field having a target spatial pattern
US20130142353A1 (en) * 2010-07-30 2013-06-06 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Vehicle with Sound Wave Reflector
US20120093338A1 (en) * 2010-10-18 2012-04-19 Avaya Inc. System and method for spatial noise suppression based on phase information
US20120288124A1 (en) * 2011-05-09 2012-11-15 Dts, Inc. Room characterization and correction for multi-channel audio
US20130039506A1 (en) * 2011-08-11 2013-02-14 Sony Corporation Headphone device
US20140056431A1 (en) * 2011-12-27 2014-02-27 Panasonic Corporation Sound field control apparatus and sound field control method
US20140294210A1 (en) * 2011-12-29 2014-10-02 Jennifer Healey Systems, methods, and apparatus for directing sound in a vehicle
US20150127351A1 (en) * 2012-06-10 2015-05-07 Nuance Communications, Inc. Noise Dependent Signal Processing For In-Car Communication Systems With Multiple Acoustic Zones
US20150210214A1 (en) * 2012-08-30 2015-07-30 Volvo Truck Corporation Presentation of an audible message in a vehicle
US20140112496A1 (en) * 2012-10-19 2014-04-24 Carlo Murgia Microphone placement for noise cancellation in vehicles
US20140136981A1 (en) * 2012-11-14 2014-05-15 Qualcomm Incorporated Methods and apparatuses for providing tangible control of sound
US20140294198A1 (en) * 2013-03-27 2014-10-02 Pinnacle Peak Holding Corporation d/b/a Setcom Corporation Feedback cancellation for vehicle communications system
US20160219368A1 (en) * 2013-09-26 2016-07-28 Bang & Olufsen A/S A loudspeaker transducer arrangement
US20150110285A1 (en) * 2013-10-21 2015-04-23 Harman International Industries, Inc. Modifying an audio panorama to indicate the presence of danger or other events of interest
US20150127338A1 (en) * 2013-11-07 2015-05-07 Continental Automotive Systems, Inc. Co-talker nulling for automatic speech recognition systems
US20150124988A1 (en) * 2013-11-07 2015-05-07 Continental Automotive Systems, Inc. Cotalker nulling based on multi super directional beamformer
US20170011753A1 (en) * 2014-02-27 2017-01-12 Nuance Communications, Inc. Methods And Apparatus For Adaptive Gain Control In A Communication System
US9352701B2 (en) * 2014-03-06 2016-05-31 Bose Corporation Managing telephony and entertainment audio in a vehicle audio platform
US20160027428A1 (en) * 2014-07-15 2016-01-28 Hassan Faqir Gul Noise cancellation system
US20160029124A1 (en) * 2014-07-25 2016-01-28 2236008 Ontario Inc. System and method for mitigating audio feedback
US20160174010A1 (en) * 2014-12-12 2016-06-16 Qualcomm Incorporated Enhanced auditory experience in shared acoustic space
US10002478B2 (en) * 2014-12-12 2018-06-19 Qualcomm Incorporated Identification and authentication in a shared acoustic space
US20160183025A1 (en) * 2014-12-22 2016-06-23 2236008 Ontario Inc. System and method for speech reinforcement
US20180025718A1 (en) * 2015-02-13 2018-01-25 Harman Becker Automotive Systems Gmbh Active noise and awareness control for a helmet
US20160323668A1 (en) 2015-04-30 2016-11-03 Shure Acquisition Holdings, Inc. Array microphone system and method of assembling the same
US20160379618A1 (en) * 2015-06-25 2016-12-29 Bose Corporation Arraying speakers for a uniform driver field
US9953641B2 (en) * 2015-10-27 2018-04-24 Panasonic Intellectual Property Management Co., Ltd. Speech collector in car cabin
US20170142507A1 (en) * 2015-11-17 2017-05-18 Chung Yuan Christian University Electronic helmet and method thereof for cancelling noises
US20170150254A1 (en) * 2015-11-19 2017-05-25 Vocalzoom Systems Ltd. System, device, and method of sound isolation and signal enhancement
US20170213541A1 (en) * 2016-01-25 2017-07-27 Ford Global Technologies, Llc System and method for personalized sound isolation in vehicle audio zones
US20170267138A1 (en) * 2016-03-17 2017-09-21 Bose Corporation Acoustic Output Through Headrest Wings
US20180167725A1 (en) 2016-12-08 2018-06-14 Infobank Corp. Apparatus and method for providing phone call in a vehicle
US10049686B1 (en) * 2017-02-13 2018-08-14 Bose Corporation Audio systems and method for perturbing signal compensation
US20180242081A1 (en) * 2017-02-17 2018-08-23 2236008 Ontario Inc. System and method for feedback control for in-car communications
US20200074978A1 (en) * 2017-03-07 2020-03-05 Sony Corporation Signal processing device and method, and program
US20180277089A1 (en) * 2017-03-21 2018-09-27 Ruag Schweiz Ag Active noise control system in an aircraft and method to reduce the noise in the aircraft
US20180374469A1 (en) * 2017-06-26 2018-12-27 Invictus Medical, Inc. Active Noise Control Microphone Array
US9966059B1 (en) * 2017-09-06 2018-05-08 Amazon Technologies, Inc. Reconfigurable fixed beam former using given microphone array
US20190098408A1 (en) * 2017-09-26 2019-03-28 Bose Corporation Audio hub
US20190104360A1 (en) * 2017-10-03 2019-04-04 Bose Corporation Spatial double-talk detector
US20200342846A1 (en) * 2017-12-20 2020-10-29 Harman International Industries, Incorporated Virtual test environment for active noise management systems
US20210067873A1 (en) * 2017-12-29 2021-03-04 Harman International Industries, Incorporated Acoustical in-cabin noise cancellation system for far-end telecommunications
US20200404443A1 (en) * 2018-03-08 2020-12-24 Sony Corporation Electronic device, method and computer program
US20190359127A1 (en) * 2018-05-22 2019-11-28 Blackberry Limited Vehicle communication systems and methods of operating vehicle communication systems
US20190364359A1 (en) * 2018-05-24 2019-11-28 Nureva, Inc. Method, apparatus and computer-readable media to manage semi-constant (persistent) sound sources in microphone pickup/focus zones
US20190369615A1 (en) * 2018-06-05 2019-12-05 Facultad Politecnica-Universidad Nacional del Este System and method for verifying the presence and vital signs of pilots in a cockpit of an airplane
US20200194023A1 (en) * 2018-12-18 2020-06-18 Gm Cruise Holdings Llc Systems and methods for active noise cancellation for interior of autonomous vehicle
US20200204464A1 (en) * 2018-12-24 2020-06-25 Panasonic Avionics Corporation Secure wireless vehicle parameter streaming
US20200219493A1 (en) * 2019-01-07 2020-07-09 2236008 Ontario Inc. Voice control in a multi-talker and multimedia environment
CN109862472A (en) 2019-02-21 2019-06-07 中科上声(苏州)电子有限公司 A kind of car privacy call method and system
US20200312344A1 (en) * 2019-03-28 2020-10-01 Bose Corporation Cancellation of vehicle active sound management signals for handsfree systems
US10652663B1 (en) * 2019-04-30 2020-05-12 Cisco Technology, Inc. Endpoint device using the precedence effect to improve echo cancellation performance
US20210067872A1 (en) * 2019-08-27 2021-03-04 Fujitsu Client Computing Limited Information Processing Apparatus And Computer-Readable Recording Medium
US20210120338A1 (en) * 2019-10-18 2021-04-22 Faurecia Clarion Electronics Europe Method for processing a signal from an acoustic emission system of a vehicle and vehicle comprising this acoustic emission system
US20210204059A1 (en) * 2019-12-30 2021-07-01 Harman International Industries, Incorporated Voice ducking with spatial speech separation for vehicle audio system

Also Published As

Publication number Publication date
CN113573210B (en) 2022-08-30
CN113573210A (en) 2021-10-29
US20210343267A1 (en) 2021-11-04
EP3905715A1 (en) 2021-11-03

Similar Documents

Publication Publication Date Title
EP0791279B1 (en) Loudspeaker system with controlled directional sensitivity
US8325941B2 (en) Method and apparatus to shape sound
US8170223B2 (en) Constant-beamwidth loudspeaker array
US20070274534A1 (en) Audio recording system
US9049534B2 (en) Directionally radiating sound in a vehicle
US11800280B2 (en) Steerable speaker array, system and method for the same
US20090161880A1 (en) Method and apparatus to create a sound field
US20080273722A1 (en) Directionally radiating sound in a vehicle
US8081775B2 (en) Loudspeaker apparatus for radiating acoustic waves in a hemisphere around the centre axis
Hafizovic et al. Design and implementation of a MEMS microphone array system for real-time speech acquisition
EP3993444A1 (en) Quiet flight deck communication using ultrasonic phased array
EP1718105A2 (en) Speaker array system
US11170752B1 (en) Phased array speaker and microphone system for cockpit communication
CN114598962A (en) Microphone array for determining a position and steering a transducer beam to the position on an aircraft
US11490195B2 (en) Loudspeaker enclosure and modulation method for a loudspeaker enclosure
Shi et al. Design of a constant beamwidth beamformer for the parametric array loudspeaker
US20230122420A1 (en) Directional array intercom for internal communication on aircraft
Guldenschuh et al. Evaluation of a transaural beamformer

Legal Events

Date Code Title Description
AS Assignment

Owner name: GULFSTREAM AEROSPACE CORPORATION, GEORGIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BOHANAN, SCOTT;DECHELLIS, VINCENT;WANG, TONGAN;AND OTHERS;SIGNING DATES FROM 20200415 TO 20200427;REEL/FRAME:052528/0828

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE