WO2008070093A1 - Rear seat audio video systems and methods

Rear seat audio video systems and methods

Info

Publication number
WO2008070093A1
Authority
WO
WIPO (PCT)
Prior art keywords
audio
signal
interface
source
processing
Application number
PCT/US2007/024864
Other languages
French (fr)
Inventor
Douglas C. Campbell
Eric S. Deuel
Jack W. Monsma
Todd Sanders
Brian L. Douthitt
Marc A. Smeyers
Kevin J. Koetsier
Original Assignee
Johnson Controls Technology Company
Application filed by Johnson Controls Technology Company
Publication of WO2008070093A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04S: STEREOPHONIC SYSTEMS
    • H04S 1/00: Two-channel systems
    • H04S 1/007: Two-channel systems in which the audio signals are in digital form
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 11/00: Arrangements for holding or mounting articles, not otherwise provided for
    • B60R 11/02: Arrangements for holding or mounting articles, not otherwise provided for, for radio sets, television sets, telephones, or the like; arrangement of controls thereof
    • B60R 11/0247: Arrangements for microphones or earphones
    • B60R 11/0264: Arrangements for control means
    • B60R 2011/0001: Arrangements characterised by position
    • B60R 2011/0003: Arrangements characterised by position inside the vehicle
    • B60R 2011/0005: Dashboard

Abstract

A system includes a processing system and an interface for receiving a source signal from an audio source. The system further includes a first interface for sending a first audio signal to a first audio output device; and a second interface for sending a second audio signal to a second audio output device. The processing system is configured to conduct a processing task on the source signal to create a processed signal and to send the processed signal to the first interface, the second interface, or to the first and second interfaces prior to the transmission of the first audio signal and the second audio signal from the first and second interfaces.

Description

REAR SEAT AUDIO VIDEO SYSTEMS AND METHODS
CROSS-REFERENCE TO RELATED PATENT APPLICATIONS
[0001] The present application claims the benefit of U.S. Provisional Patent Application No. 60/874,493 filed December 13, 2006, the entire disclosure of which is incorporated by reference herein. The present application also claims the benefit of U.S. Provisional Patent Application No. 60/874,492 filed December 13, 2006, the entire disclosure of which is incorporated by reference herein. The present application also claims the benefit of U.S. Provisional Patent Application No. 60/883,694 filed January 5, 2007, the entire disclosure of which is incorporated by reference herein. The present application also claims the benefit of U.S. Provisional Patent Application No. 60/872,791 filed December 5, 2006, the entire disclosure of which is incorporated by reference herein.
BACKGROUND
[0002] The present application relates generally to the field of in-vehicle entertainment, information, and education. The present application relates more specifically to in-vehicle audio and/or video systems.
[0003] The environment associated with a vehicle provides a number of challenges for providing entertainment systems. The vehicle environment is typically noisy and adjacent occupants might want to watch and/or listen to different media content. Accordingly, portable audio output devices such as wired headphones, wireless headphones, wired portable speakers, and/or wireless portable speakers may be used by vehicle occupants to listen to audio content in the vehicle. As multiple occupants may travel in a vehicle, it would be desirable for the occupants to be able to listen to a single media program in two different languages. Further, due to engine noise, road noise, and other environmental noise, concerns about hearing loss due to headphones, and the like, it is sometimes difficult to hear audio content without exposure to elevated volume levels for an extended period of time. These challenges are often exacerbated when providing high quality and multi-featured electronics in the environment of a vehicle. It would be desirable to provide one or more processing features to a media signal prior to providing the signal to a portable audio output device. Further still, when multiple audio interfaces for portable audio output devices are provided in a vehicle, it would be desirable to provide standard audio output interfaces that are easily switched between different sources.
[0004] There is a need for an in-vehicle entertainment system capable of providing multi-language audio output, simplified component construction, and/or preprocessing of source material. Further still, there is a need for a rear seat entertainment system capable of providing multi-language audio output, simplified component construction, and/or preprocessing of source material.
[0005] What is needed is a system and/or method that satisfies one or more of these needs or provides other advantageous features. Other features and advantages will be made apparent from the present specification. The teachings disclosed extend to those embodiments that fall within the scope of the claims, regardless of whether they accomplish one or more of the aforementioned needs.
SUMMARY
[0006] One embodiment relates to a system for mounting in a vehicle. The system includes a processing system and an interface for receiving a source signal from an audio source. The system further includes a first interface for sending a first audio signal to a first audio output device and a second interface for sending a second audio signal to a second audio output device. The processing system is configured to conduct a processing task on the source signal to create a processed signal and to send the processed signal to the first interface, the second interface, or to the first and second interfaces prior to the transmission of the first audio signal and the second audio signal from the first and second interfaces.
[0007] Another embodiment relates to a system for mounting in a vehicle for providing a first audio program to a first audio interface and a second audio program to a second audio interface. The first audio program and the second audio program are related to a single video program, and the first audio program, the second audio program, and the video program are stored on a medium. The in-vehicle system includes a media source for mounting in the vehicle, and the media source is configured to extract the first audio program, the second audio program, and the video program substantially simultaneously from the medium. The in-vehicle system further includes a control system for mounting in the vehicle that is configured to control whether the media source extracts the first audio program, the second audio program, or the first and second audio programs from the medium. The control system is further configured to control which of the first and second audio programs is provided to the first audio interface. The control system is further configured to control which of the first and second audio programs is provided to the second audio interface.
[0008] Another embodiment relates to a system for mounting in a vehicle for configuring a transmitter. The system includes an interface for simultaneously receiving a first audio source signal and a second audio source signal. The system further includes a transmitter configured to wirelessly transmit signals representing audio information to an audio output device. The transmitter is configured to transmit a first wireless signal representing the first audio source signal on a first channel or a second wireless signal representing the second audio source signal on a second channel. The interface for receiving the source select signal may be coupled to a channel select line. The channel select line may be configured for selective coupling to ground or not coupled to ground. Coupling the channel select line to ground may correspond to a selection of the second audio source signal for transmission. The transmitter may be an infrared (IR) transmitter including an IR modulator. The IR modulator may include an interface for receiving a source select signal. The IR modulator may be configured to determine, based on the source select signal, which of the first audio source signal and the second audio source signal to use during operation.
[0009] Alternative exemplary embodiments relate to other features and combinations of features as may be generally recited in the claims.
BRIEF DESCRIPTION OF THE FIGURES
[0010] The application will become more fully understood from the following detailed description, taken in conjunction with the accompanying figures, wherein like reference numerals refer to like elements, in which:
[0011] FIG. 1 is a perspective view of a vehicle, according to an exemplary embodiment;
[0012] FIG. 2 is a front view of a control system human-machine interface in a vehicle, according to an exemplary embodiment;
[0013] FIG. 3 is a block diagram of a control system for mounting in a vehicle, according to an exemplary embodiment;
[0014] FIG. 4 is a more detailed block diagram of a control system for mounting in a vehicle, according to an exemplary embodiment;
[0015] FIG. 5 is an illustration and block diagram of a vehicle interior from the top down, according to an exemplary embodiment;
[0016] FIGS. 6A-6D are block diagrams of exemplary configurations of rear seat entertainment systems;
[0017] FIG. 7 is a flow chart of a process for providing audio streams of different languages to two different audio interfaces, according to an exemplary embodiment;
[0018] FIG. 8A is a block diagram of a system for switching sources and/or configuring a common interface based on a channel select line or signal, according to an exemplary embodiment;
[0019] FIG. 8B is a flow chart of a process for switching sources and/or configuring a common interface based on a channel select line or signal, according to an exemplary embodiment;
[0020] FIGS. 9A and 9B are block diagrams of a system for switching sources and/or configuring a common interface based on a channel select line or signal, according to an exemplary embodiment where the system is an infrared system;
[0021] FIG. 10 is a block diagram of a system for providing a preprocessing task to a signal to be provided to an audio output interface in a vehicle, according to an exemplary embodiment;
[0022] FIG. 11A is a flow chart of a process for providing a processing task to a signal to be provided to an audio output interface in a vehicle, according to an exemplary embodiment; and
[0023] FIG. 11B is a flow chart of a process for providing a processing task to a signal to be provided to an audio output interface in a vehicle, according to another exemplary embodiment.
DETAILED DESCRIPTION OF THE EXEMPLARY EMBODIMENTS
[0024] Before turning to the figures which illustrate the exemplary embodiments in detail, it should be understood that the application is not limited to the details or methodology set forth in the following description or illustrated in the figures. It should also be understood that the phraseology and terminology employed herein is for the purpose of description only and should not be regarded as limiting.
[0025] Referring generally to the FIGS., an audio and/or video system such as a rear seat entertainment (RSE) system for mounting in a vehicle is provided. The RSE is configured to receive input from a number of media sources and may include transmitters for wirelessly transmitting audio information to one or more wireless headphones. The RSE may be configured to operate with any number of vehicle subsystems to provide multi-language output options, preprocessing features, and/or systems and methods for switching sources and/or channels. Although rear seat systems are described below, the present application is not limited to rear seat applications unless literally stated in the claims. The systems or methods described herein can be implemented in any audio and/or video system for mounting in a vehicle.
[0026] Referring to FIG. 1, a vehicle 100 includes a number of subsystems for user convenience. Vehicle 100 generally includes a heating, ventilation, and air-conditioning (HVAC) system, a sound system, and an in-vehicle control system 106. The HVAC system and sound system may be coupled to in-vehicle control system 106, which is capable of controlling and monitoring both systems, automatically or by a manual user command. It is noted that in various exemplary embodiments vehicle 100, the HVAC system, and the sound system may be of any past, present, or future design that is capable of housing (in the case of vehicle 100) and interacting with in-vehicle control system 106.
[0027] Referring to FIG. 2, an exemplary embodiment of in-vehicle control system 106 is shown. Control system 106 generally includes an output display 108, one or more knobs 110, one or more pushbuttons 112, and one or more tactile user inputs or pushbuttons 114, which facilitate controlling various vehicle functions. Output display 108 may be configured to display data related to the control of the vehicle functions. In one exemplary embodiment, output display 108 may be a touch-screen display, while in other exemplary embodiments it may be any other non-touch-sensitive display. In still other exemplary embodiments, output display 108 may be of any technology (e.g., LCD, DLP, plasma, CRT), configuration (e.g., portrait or landscape), or shape (e.g., polygonal, curved, curvilinear). Knobs 110 and pushbuttons 112 and 114 may be configured to control functions of the HVAC system such as fan speed, cabin temperature, or routing of air flow; to control playback of media files over the sound system; to control retrieval of phonebook entries; or to control any other desired vehicle function.
[0028] Pushbuttons 114 typically allow for the selection and display of various functions of in-vehicle control system 106 including HVAC system control, sound system control, hands-free phone use, contact or address/phone book management, calendar viewing/modification, and vehicle data logging. The operation of pushbutton 114 for media playback may display a media playback menu screen or execute commands that allow the user to view, select, sort, search for, and/or play audio or video files by tactile or oral command. The operation of pushbutton 114 for hands-free phone operation may display a menu screen or execute commands that allow the user to connect control system 106 to a mobile phone so that speaking into the vehicle console of control system 106 operates the mobile phone. The operation of pushbutton 114 for HVAC control may display a menu screen or execute commands that allow the user to control cabin temperature and air flow by tactile or oral command. The operation of pushbutton 114 for contact management may display a menu screen or execute commands that allow the user to view, list, select, sort, search for, edit, and/or dial one or more entries containing personal contact information by use of a tactile or oral command. The operation of pushbutton 114 for calendar management may display a menu screen or execute commands that allow the user to view, list, select, sort, search for, edit, and/or create one or more entries containing personal schedule information by tactile or oral command. The operation of pushbutton 114 for vehicle log management may display a menu screen or execute commands that allow the user to input, view, select, and/or reset information related to vehicle operation (e.g., fuel economy, engine temperature, distance to empty, etc.) by tactile or oral command.
[0029] Pushbutton 114 may also be used to navigate through and/or select rear seat or audio processing menu items and graphical user interface elements. For example, shown on display 108 are the options "select audio language options" and "rear seat processing."
[0030] Referring to FIG. 3, in-vehicle control system 106 is capable of accessing data files from a remote source 116 over a communication link 118. For example, in-vehicle control system 106 may access media data files, phonebook data files, calendar data, or any other accessible data of use by in-vehicle control system 106. Media files may include audio and/or video files having multiple audio programs (e.g., having multiple language programs).
[0031] In-vehicle control system 106 generally includes a communication device 120, a data processing system 122, a display driver 124, a user interface 126, an audio input device 128, an audio output device 130, and a memory device 132.
[0032] Communication device 120 is generally configured to establish communication link 118 with remote source 116. In one exemplary embodiment, in-vehicle control system 106 may establish a wireless communication link such as with Bluetooth communications protocol, a WiFi protocol, an IEEE 802.11 protocol, an IEEE 802.16 protocol, a cellular signal, a Shared Wireless Access Protocol-Cord Access (SWAP-CA) protocol, or any other suitable wireless technology. In another exemplary embodiment, in-vehicle control system 106 may establish a wired communication link such as with USB technology, Firewire technology, optical technology, other serial or parallel port technology, or any other suitable wired link. Communications device 120 may receive one or more data files from remote source 116. In various exemplary embodiments, the data files may include text, numeric data, audio data, video data, audio and video data, graphical data, compressed data, encoded data, modulated data, or any combination thereof.
[0033] Data processing system 122 is coupled to communications device 120 and is generally configured to control each function of in-vehicle control system 106. Data processing system 122 preferably facilitates speech recognition capabilities of in-vehicle control system 106 for the convenience of the user. Data processing system 122 may include digital or analog processing components or be of any past, present, or future design that facilitates control of in-vehicle control system 106.
[0034] Display driver 124 is coupled to output display 108 and is typically configured to provide an electronic signal to the output display. In one exemplary embodiment, the electronic signal may include the text and/or numeric data of the data files, while in other exemplary embodiments, any other desired data may be included with the text and/or numeric data or by itself in the electronic signal to the output display. In another exemplary embodiment, display driver 124 may be configured to control output display 108 with touch-screen capabilities, while in other exemplary embodiments, display driver 124 may be configured to control display 108 without making use of touch-screen capabilities. In still other exemplary embodiments, display driver 124 may be of any past, present, or future design that allows for the control of output display 108.
[0035] User interface 126 is typically configured to facilitate tactile user interaction with in-vehicle control system 106. In various exemplary embodiments, user interface 126 may include pushbuttons or rotatable knobs as in the exemplary embodiment of FIG. 2 in any similar or dissimilar configuration, or may include other tactile user contact points. User interface 126 may allow the user to interact with any of the rear seat entertainment systems, processing systems, switching systems, and multi-language selection systems herein described.
[0036] Audio input device 128, for example a microphone, is configured to receive the utterance of a user for transmission to data processing system 122 for speech recognition so that the functions of in-vehicle control system 106 may be operated by voice command. Audio output device 130, for example a built-in speaker, is configured to provide the user with an audio prompt of various functions, such as user selection confirmation.
[0037] Memory device 132 is configured to store data accessed by in-vehicle control system 106. For example, memory device 132 may store data input by remote source 116, data created by data processing system 122 that may be used later, intermediate data of use in a current calculation, or any other data of use by in-vehicle control system 106.
[0038] Control system 106 is shown to be coupled to first media source 510 and second media source 511. Control system 106 is further shown coupled to first rear seat entertainment system 402 and second rear seat entertainment system 403. Control system 106 may generally be configured to serve as a receiver or processing system for rear seat entertainment systems 402 and/or 403. For example, control system 106 may controllably route audio signals from media source 510 and/or 511 to rear seat entertainment system 402 and/or 403. According to other exemplary embodiments, audio signals received from audio input device 128, communications device 120, and/or audio system 104 may be routed to one or more rear seat entertainment systems 402, 403. Processing system 122 may be configured to decode and play audio files stored on memory device 132, routing the resulting audio signal to one or more rear seat entertainment systems 402, 403.
[0039] Referring to FIG. 4, in-vehicle control system 106 and remote source 116 are illustrated in greater detail. Data processing system 122 generally includes a text-to-grammar device 134, a speech recognition device 136, and a text-to-speech device 138.
[0040] Text-to-grammar device 134 is preferably coupled to communications device 120 and is generally configured to generate a phonemic representation of the text and/or numeric data of each of the data files received by communications device 120 from remote source 116. The phonemic representation of the text and/or numeric data of each data file may be configured to facilitate speech recognition of each data file. After conversion of a data file to a phonemic representation, the data file may be accessed via an oral input command received by speech recognition device 136 via audio input device 128.
[0041] Speech recognition device 136 is typically configured to receive an oral input command from a user via audio input device 128. Speech recognition device 136 compares the received oral input command to a set of predetermined input commands, which may have been configured by text-to-grammar device 134. In various exemplary embodiments, the input commands may be related to the playback of a media file, the dialing or input of a phone book entry, the entry or listing of calendar or contact data, the control of the HVAC system, or any other desired function to be performed on data. Speech recognition device 136 may determine an appropriate response to the oral input command received from the user, for example, whether the oral input command is a valid or invalid instruction, what command to execute, or any other appropriate response.
[0042] Text-to-speech device 138 is generally configured to convert the text and/or numeric data of each data file received from remote source 116 into an audible speech representation. This functionality may allow in-vehicle control system 106 to audibly give data to the user via audio output device 130 or the sound system. For example, in-vehicle control system 106 may repeat a user-selected function back to the user, announce media file information, provide phonebook or contact information, or provide other information related to data stored in memory 132 or remote source 116.
[0043] Memory device 132 includes both a volatile memory 140 and a non-volatile memory 142. Volatile memory 140 may be configured so that the contents stored therein may be erased during each power cycle. Non-volatile memory 142 may be configured so that the contents stored therein may be retained across power cycles, such that upon system power-up, data from previous system use remains available for the user.
[0044] Note that remote source 116 may be any suitable remote source that includes a transceiver and is able to interface with in-vehicle control system 106 over communications link 118, in either a wireless or wired embodiment. In various exemplary embodiments, remote source 116 may be one or more of a mobile phone 144, a personal digital assistant (PDA) 146, a media player 148, a personal navigation device (PND) 150, a remote server 154 that is coupled to the internet, or various other remote data sources.
Multiple Languages from a Media Program Simultaneously to Passengers in a Vehicle
[0045] Referring generally to the FIGS., and FIGS. 5-7 in particular, a system is provided for providing multiple languages from a media program to passengers in a vehicle. The system utilizes a first media source 510 or a second media source 511 (e.g., a DVD-ROM drive, a hard disk, flash memory, etc.) to extract a video signal and multiple audio signals from media (e.g., a DVD-ROM disk, a group of files associated with a media program, etc.). Control system 106 is provided to perform decoding of more than one audio signal (e.g., audio stream, audio track, etc.) simultaneously (or near simultaneously). Control system 106 is further configured to provide the video signal and a first decoded audio signal to a first display 504 and a first audio interface 502. Control system 106 provides the video signal and a second decoded audio signal to a second display 506 and a second audio interface 503. As a result, it is possible for one passenger to watch a video and to hear associated audio in one language while another passenger watches the same video and hears associated audio in another language. According to various exemplary embodiments, control system 106 is configured to receive media from a remote source via a wired or wireless connection, extract the media, and provide the media to displays and/or audio interfaces.
[0046] FIG. 5 shows an example of a vehicle having a rear seat entertainment (RSE) system. The RSE is shown to include (or is operatively coupled to) control system 106. The RSE includes a first audio interface 502, a second audio interface 503, a first display 504, a second display 506, and an overhead display 508.
[0047] First display 504 and second display 506 are shown as seatback displays. It should be understood, however, that the displays (or additional or alternative displays) may be provided at any location in the vehicle.
[0048] Control system 106 may be coupled to one or more media sources 510, 511. According to an exemplary embodiment, media source 510 is a DVD player. According to a preferred embodiment, media source 510 is a DVD-ROM drive configured with a processing system and a read mechanism suitable for reading and decoding multiple audio tracks and a video track simultaneously (i.e., near simultaneously).
[0049] Additional or alternative media sources may be provided and may include, for example, AM/FM radio devices, satellite radio devices, CD players, MP3 players, VCR players, a game console, television tuners, GPS receivers, etc. Control system 106 can distribute video information from media source 510 to displays 504, 506, and/or 508. Control system 106 can also distribute audio information from media source 510 to vehicle audio system 104, first audio interface 502, and/or second audio interface 503. Control system 106 may be configured to conduct any number of switching, modulating, encoding, and/or decoding tasks to distribute the audio and/or video. Control system 106 may be configured to distribute the audio and/or video content in accordance with selection signals received from a passenger entered via a user interface element. The user interface element can be a touch panel on a display (e.g., display 108, display 504, 506, and/or 508, etc.), buttons, knobs, switches, dials, etc. Examples of such interface elements are shown in FIG. 2 (e.g., elements 112, 114). The user interface can also be implemented as a remote control. The remote control may be connected to control system 106 via a wired connection or wirelessly. In response to the selection signal, the media control unit sends the media content of the selected media source to a selected display and/or audio interface. For example, a rear seat passenger can use a touch panel on the seatback display to select a movie from a DVD player to be played on his or her seatback display.
[0050] According to an exemplary embodiment, an audio interface and/or the control system may be integrated with a rear seat display 504, 506. For example, circuitry for audio interface 502 may be coupled to the circuitry for display 504; the circuitry may be coupled to the same housing, the audio interface and the display may be coupled to the same housing, etc. An audio interface 502 or 503 may include any number of jacks or terminals for connecting a personal audio output device (e.g., a portable speaker, headphones, etc.). An audio interface may also or alternatively include one or more speakers. Audio interface 502 or 503 may also (or alternatively) include or be coupled to a wireless transmitter for sending audio output signals to a personal wireless audio output device (e.g., wireless portable speaker, wireless headphones, etc.).
[0051] Referring to FIGS. 6A-6D, block diagrams of various exemplary embodiments of a rear seat entertainment system are shown. Control system 106 is shown in operative communication with one or more media sources 510. Media source 510 is configured to send multi-channel audio and/or video to control system 106. Referring to FIGS. 6A and 6B, control system 106 is shown coupled to display 504 and display 506. According to an exemplary embodiment shown in FIG. 6A, control system 106 is directly coupled to audio interfaces 502 and 503. According to an exemplary embodiment shown in FIG. 6B, intermediate device 620 is provided between control system 106 and audio interfaces 502, 503. Intermediate device 620 may provide any number of audio decoding, audio processing, distributing, and/or audio switching features to the audio signals output from control system 106. According to an exemplary embodiment shown in FIG. 6C, audio interfaces 502 and 503 are closely coupled to or provided on and/or within an integrated display and audio unit 632, 634. According to an exemplary embodiment shown in FIG. 6D, control system 106 provides audio and/or video output to rear seat module 642. Rear seat module 642 may be an integrated system having a display system 508, a first audio interface 502, a second audio interface 503, and an extractor 644. Extractor 644 is generally configured to receive a multi-channel audio and video signal from control system 106, to extract a first audio signal from the received signal, and to provide the first audio signal to first audio interface 502. Extractor 644 may also be configured to extract a second audio signal from the received signal, to provide the second audio signal to second audio interface 503, and/or to extract a video signal and to provide the video signal to display system 508.
[0052] It should be noted that various other embodiments are possible, including, for example, embodiments where control system 106 is not included and one or more media sources are coupled directly to one or more audio interfaces, one or more display systems, one or more rear seat modules, one or more intermediate devices, and/or one or more extractors. Further, it should be noted that any number of intermediate devices not shown might be provided between the various components of FIGS. 6A-6D, and different connection topologies may be provided (e.g., audio interface and/or display connections may be provided in series rather than in parallel).
[0053] Referring now to FIG. 7, process 700 is shown for providing multiple decoded audio signals to a plurality of audio interfaces, according to an exemplary embodiment. One or more user interface elements and/or control systems in a vehicle may be configured to receive a selection signal or signals from a user (step 702). The selection signals may relate to language selection, audio track selection (e.g., surround sound, stereo, commentary track, etc.), a parental control track, etc. One or more media sources may then be commanded or requested to read media according to the received selection signals (step 704). The reading activity may include reading multiple layers of a disk, utilizing more than one reading element (e.g., multiple magnetic heads, multiple lasers), etc. The reading activity may further include reading multiple audio streams and one or more video streams. As a first audio stream is read from the media in step 704, a control system, intermediate device, extractor, or otherwise may extract a first audio signal (step 706) from the first audio stream in parallel with (or nearly in parallel with) the extraction of a second audio signal from a second audio stream (step 708). The same or a different system or device may also extract a video signal from a video stream (step 707). Once the streams have been extracted (or semi-extracted and/or buffered), a system or device may then simultaneously (i.e., near simultaneously) decode the audio streams (steps 716, 718) and the video stream (step 717). Decoding may include running one or more algorithms according to a standard or proprietary codec. According to an exemplary embodiment, the system or device responsible for decoding is configured to decode formats such as Dolby Digital, DTS, PCM, MP3, and/or another similar or suitable audio codec. Once the signals are decoded, a first decoded audio signal may be provided to a first audio interface (step 726), a second decoded audio signal provided to a second audio interface (step 728), and/or a decoded video signal provided to a display system (step 727). The audio interfaces may then transmit the audio signals to any number of suitable devices via a wired and/or wireless technology.
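The overall flow of process 700 can be pictured with a short sketch. The Python fragment below is purely illustrative: the function names, the thread-pool parallelism, and the placeholder decode step are assumptions standing in for the codec hardware or software (e.g., Dolby Digital, DTS, PCM, MP3) an actual system would use.
```python
# Illustrative sketch of process 700 (FIG. 7): two audio streams and one video
# stream are decoded near simultaneously and each decoded audio signal is
# routed to its own audio interface. All names and the placeholder decode step
# are assumptions, not the disclosed implementation.
from concurrent.futures import ThreadPoolExecutor

def decode(stream: bytes, codec: str) -> bytes:
    # Placeholder for a real decoder; it merely tags the payload.
    return b"[" + codec.encode() + b"]" + stream

def serve_program(selection_a: str, selection_b: str, media: dict) -> dict:
    # Steps 702-704: read the streams selected by the users (e.g., two languages).
    audio_a, audio_b = media["audio"][selection_a], media["audio"][selection_b]
    video = media["video"]

    # Steps 706-718: extract and decode the streams in parallel.
    with ThreadPoolExecutor(max_workers=3) as pool:
        fut_a = pool.submit(decode, audio_a, "AC3")
        fut_b = pool.submit(decode, audio_b, "AC3")
        fut_v = pool.submit(decode, video, "MPEG2")

        # Steps 726-728: route each decoded signal to its interface or display.
        return {"audio_interface_502": fut_a.result(),
                "audio_interface_503": fut_b.result(),
                "display": fut_v.result()}

media = {"audio": {"en": b"english-track", "fr": b"french-track"},
         "video": b"video-track"}
print(serve_program("en", "fr", media))
```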
Selecting Between Two Transmission Channels
[0054] Referring to FIG. 8A, a system 800 for providing audio information from first and second audio sources to one or more audio output devices is shown, according to an exemplary embodiment. An audio interface 806 may be or may include a wireless transmitter 808 (or transceiver) and associated electronics. Wireless transmitter 808 may be an infrared transmitter, a microwave transmitter, a radio frequency (RF) transmitter, or otherwise. Audio interface 806 may be configured to receive multi-channel data from the media source, control system, or another upstream device. According to the embodiment shown in FIG. 8A, interface 806 is configured to receive stereo input from a first audio source 802 and stereo input from a second audio source 804. According to an exemplary embodiment, a system is provided for allowing wireless transmitter 808 and associated electronics to switch between audio sources 802, 804 and transmission channels based on a user or system selection.
[0055] Referring further to FIG. 8A, audio output device 814 is a wireless headphone device having a receiver 816, according to an exemplary embodiment. Audio output device 814 could also be wired headphones, a wireless portable speaker, a personal display device, or otherwise. Channel select interface 810 is shown operatively coupled to interface 806. Channel select interface 810 receives input from a channel select line 812. Channel select line 812 may be a single-conductor wire, a circuit, a multi-conductor wire, or otherwise. Interface 806 is generally configured to switch between using input from first audio source 802 and second audio source 804 based on the state of channel select line 812. Further, interface 806 is configured to change a modulation scheme, coding scheme, or channel output scheme based on the state of channel select line 812.
[0056] Referring to FIG. 8B, a process 850 is shown for operating system 800, according to an exemplary embodiment. Interface 806 may generally receive a channel select signal (step 852). Based on the state (and/or the contents of the signal), interface 806 is configured to switch audio input (step 854). The state or contents of the channel select signal may also be used by the interface to configure transmitter 808 (step 856). The audio input may be modulated according to the received channel select signal and/or the changed transmitter configuration (step 858). The wireless transmitter may then transmit on channels according to the received channel select signal and/or transmitter configuration (step 860).
[0057] Referring to FIG. 9A, a system 900 for selecting and/or switching between multiple provided channels is shown, according to an exemplary embodiment where the transmitter is an infrared (IR) transmitter. For the exemplary infrared embodiment shown in FIG. 9A, the system includes an IR modulator 902, an IR LED driver 904, and an IR LED 906. The system further includes an IR module connector 910.
[0058] IR modulator 902 is configured to transmit over a first channel and a second channel. Each of the first and second channels may further include multiple sub-channels (e.g., a right channel for stereo audio and a left channel for stereo audio). According to an exemplary embodiment, IR modulator 902 is configured to transmit the left channel of the first channel at 2.3 MHz and the right channel of the first channel at 2.8 MHz. Further, IR modulator 902 is configured to transmit the left channel of the second channel at 3.2 MHz and the right channel of the second channel at 3.8 MHz. IR modulator 902 is configured to send one of a transmission according to the first channel and a transmission according to the second channel to IR LED driver 904. IR LED driver 904 is configured to drive IR LED (or LEDs) 906 to transmit a single stereo channel to a receiver (e.g., a wireless headphone).
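A minimal sketch of this subcarrier plan follows, using the exemplary frequencies given above (2.3 MHz and 2.8 MHz for the first channel, 3.2 MHz and 3.8 MHz for the second). The Python names and data structure are illustrative assumptions, not part of the disclosed hardware.
```python
# Illustrative sketch of the exemplary subcarrier plan: each transmission
# channel carries stereo audio on a pair of modulation frequencies.
SUBCARRIERS_HZ = {
    "channel_1": {"left": 2_300_000, "right": 2_800_000},
    "channel_2": {"left": 3_200_000, "right": 3_800_000},
}

def modulate(channel: str, left_audio, right_audio) -> dict:
    """Pair each stereo sub-channel with its subcarrier before the LED driver stage."""
    plan = SUBCARRIERS_HZ[channel]
    return {"left": (left_audio, plan["left"]),
            "right": (right_audio, plan["right"])}

print(modulate("channel_1", "left-samples", "right-samples"))
```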
[0059] IR module connector 910 is configured to couple IR modulator 902 to an upstream device such as a media source or a control system. IR modulator 902 is coupled to IR module connector 910 by a multi-channel audio line 908, a left audio line, a right audio line, a common (COM) audio line and a channel select line. It is important to note that any number of audio lines may be used to transmit audio information to IR modulator 902. For example, a differential or balanced connection may be provided between IR modulator 902 and module connector 910 whereby a single common return wire is replaced by an inverted signal wire for each signal (e.g., left+, left-, right+, right-).
[0060] According to an exemplary embodiment, IR modulator 902 includes an interface 911 for receiving a channel select line 912. Based on information received at interface 911 and/or 914, IR modulator 902 is configured to determine whether the transmitter should transmit a signal over the first channel or the second channel. According to one exemplary embodiment, to set the channel select line, one of the pins or lines in or upstream of IR module connector 910 corresponding to the channel select line can be set either to open or to ground. For example, if pin 950 is not connected (i.e., left open), then IR modulator 902 may be configured to use first stereo inputs corresponding to the first channel and to modulate the audio on the first stereo inputs according to frequencies corresponding to the first channel (e.g., 2.3 MHz, 2.8 MHz, etc.). If pin 950 is connected to ground, then IR modulator 902 may be configured to use second stereo inputs corresponding to the second channel and to modulate the audio on the second inputs according to frequencies corresponding to the second channel (e.g., 3.2 MHz, 3.8 MHz, etc.).
[0061] According to other exemplary embodiments, an IR modulator is configured to use other mechanisms or interfaces to make a channel determination. For example, IR modulator 902 may further include an interface for receiving a channel select signal from control system 106. Based on the channel select signal, IR modulator 902 determines over which channel to modulate and transmit the audio signal. It is possible for the control signal to be received by only one of the IR modulators such that the IR modulator receiving the control signal modulates over the first stereo channel and the IR modulator not receiving the control signal modulates over the second stereo channel.
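The open-or-grounded determination of [0060] reduces to a simple rule, sketched below under the assumption that the pin state is already available as a boolean; in real hardware the modulator would sense a pulled-up input rather than call a function.
```python
# Illustrative sketch of the channel determination in [0060]: an open select
# pin implies the first channel, a grounded pin implies the second.
def select_channel(pin_grounded: bool) -> str:
    return "channel_2" if pin_grounded else "channel_1"

# Two otherwise identical modulators whose select pins are wired differently
# therefore transmit on different channels (compare FIGS. 9A and 9B).
print(select_channel(False), select_channel(True))  # channel_1 channel_2
```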
[0062] Referring to FIG. 9B, a block diagram of a second system 951 is shown, according to an exemplary embodiment. Second system 951 may be provided in the same vehicle as system 900. According to an exemplary embodiment, components 958 through 964 are duplicates (or near duplicates) of the corresponding components shown in FIG. 9A. For example, IR modulator 952 and 902 generally include the same hardware circuitry and/or logic. System 951 differs from system 900 in that pin 950 is not provided to system 951. Accordingly, interface 964 is not coupled to ground (i.e., open). IR modulator 952 will select an input channel that is not the same channel selected by IR modulator 902 and may modulate and/or transmit signals provided to LED driver 904 differently.
A System for Preprocessing Vehicle Audio
[0063] Referring now to FIG. 10, a block diagram of a system 1100 for mounting in a vehicle is shown, according to an exemplary embodiment. The system includes source interface system 1102, a processing system 1104, a first output interface 1106, and a second output interface 1108.
[0064] Source interface system 1102 is generally for coupling to various audio and/or video sources such as first audio source 1116 and second audio source 1118.
[0065] First output interface 1106 and second output interface 1108 are generally for coupling to audio output devices such as first audio output device 1126 and second audio output device 1128. First output interface 1106 and second output interface 1108 are generally configured to send or transmit audio signals to output devices 1126 and 1128.
[0066] Processing system 1104 generally receives one or more source signals from audio sources 1116 and/or 1118 via source interface system 1102. Processing system 1104 is generally configured to conduct a processing task on the one or more source signals to create a processed signal and to send the processed signal to the first output interface 1106, the second output interface 1108, or to the first and second interfaces 1106 and 1108.
[0067] According to an exemplary embodiment, processing system 1104 can be implemented in whole or in part, for example, as a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), and/or another type of processing device capable of processing audio signals, such as a general purpose microprocessor configurable via computer code stored in memory 1134. Processing system 1104 can be configured to perform one or more audio processing tasks on the audio signals received from audio sources 1116, 1118 and/or source interface system 1102. The audio processing tasks performed by processing system 1104 can be fixed, i.e., the same functions are always performed, or can be selectable. In addition, processing system 1104 can be configured so that audio signals from multiple sources can be received and processed simultaneously (or near simultaneously).
[0068] Referring further to FIG. 10, source interface system 1102 may include a first receiving element (e.g., interface) 1130 and a second receiving element (e.g., interface) 1132. Receiving elements 1130 and/or 1132 may include any combination of hardware and/or software configured to receive signals from audio sources. For example, receiving elements 1130 and/or 1132 may include any number of jacks, terminals, hardware filters, software filters, communication stacks, software drivers, or the like. Receiving elements 1130 and/or 1132 may be wired elements for physically coupling to one or more audio sources. According to other exemplary embodiments, receiving elements 1130 and/or 1132 may include a radio frequency (RF) receiver configured to receive wireless communications (e.g., near-field transmissions, WiFi, Bluetooth, WiMax, cellular, etc.) from an RF transmitter associated with one or more sources. According to yet other exemplary embodiments, receiving elements 1130 and/or 1132 may be configured to receive audio signals via any wired or wireless technology of the past, present, or future. Further, receiving elements 1130 and/or 1132 may also be configured to receive source signals from any number of audio sources simultaneously (or near simultaneously). Sources 1116, 1118 may be any type of device configured to communicate an analog and/or digital audio (and/or video) signal to another device. For example, a source 1116, 1118 could be a DVD player, a CD player, an AM/FM radio, a satellite radio, a digital media player, a portable media player, a mobile phone having audio output capabilities, a PDA having audio output capabilities, etc. Downstream of receiving elements 1130 and/or 1132, source interface system 1102 may include any number of switches for handling multiple sources, filter networks, amplifiers, stepping devices, decoders, drivers, buffers, energy storage devices, noise filters, signal de-coupling elements, communication stacks, etc. It should be noted that source interface system 1102 may be multiple interfaces, circuits, or components and may be integrated with processing system 1104 or another component of system 1100.
[0069] Referring still to FIG. 10, processing system 1104 is shown to include memory 1134, according to an exemplary embodiment. Processing system 1104 may be configured to store user settings, headphone settings, setup information, and/or any other configuration information in memory 1134. Memory 1134 may be volatile and/or non-volatile. According to a preferred embodiment, memory 1134 includes at least a portion of non-volatile memory for storing information even if processing system 1104 loses power or is turned off. According to an exemplary embodiment, memory 1134 may be used for storing computer code or instructions relating to one or more processing tasks of processing system 1104.
[0070] Referring further to FIG. 10, processing system 1104 is shown to include a volume processing module 1136, a noise cancellation module 1138, a sound level protection module 1140, an equalization module 1142, and a surround sound module 1148. It is important to note that modules 1136, 1138, 1140, 1142, and 1148 may all coexist in the same processing system or some modules may not be included in the processing system. Modules 1136, 1138, 1140, 1142, and 1148 may generally be hardware and/or software elements configured to facilitate and/or conduct the one or more processing tasks of the processing system. For example, modules 1136, 1138, 1140, 1142, and 1148 may include a hardware circuit, a special purpose microcontroller, a general purpose microcontroller that has been configured for specific processing tasks, a decoding element, a converting element, or any combination thereof. According to various other exemplary embodiments, each (or some) of modules 1136, 1138, 1140, 1142, and 1148 may be self-contained components which can be interchanged with other modules. According to yet other embodiments, each (or some) of modules 1136, 1138, 1140, 1142, and 1148 are closely integrated with processing system 1104 and/or with other modules and are not easily interchanged, removed, or added to a processing system.
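One way to picture the modular arrangement described above is as a chain of selectable processing functions applied to a signal before it reaches an output interface. The sketch below is a loose software analogy, not the patented implementation; the module stubs and sample values are invented for illustration.
```python
# Loose software analogy of chaining selectable processing modules before an
# output interface. Module internals are stubs; a real system might run these
# tasks in DSP hardware or firmware rather than Python.
from typing import Callable, List

Signal = List[float]
Module = Callable[[Signal], Signal]

def volume_processing(signal: Signal) -> Signal:
    return [0.8 * s for s in signal]                  # stub: simple attenuation

def sound_level_protection(signal: Signal) -> Signal:
    return [max(-1.0, min(1.0, s)) for s in signal]   # stub: hard limiter

def process(signal: Signal, enabled_modules: List[Module]) -> Signal:
    # The chain may be fixed or user-selectable, per the description above.
    for module in enabled_modules:
        signal = module(signal)
    return signal

print(process([0.2, 1.5, -2.0], [volume_processing, sound_level_protection]))
# -> approximately [0.16, 1.0, -1.0]
```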
[0071] Referring further to FIG. 10, volume processing module 1136 includes logic for adjusting the volume of the audio signals provided to the output interfaces and/or the output devices. For example, volume processing module 1136 may be configured to reduce the level of a source signal provided to processing system 1104, to reduce the level of a signal provided to first output interface 1106, and/or to reduce the level of a signal provided to first audio output device 1126. Volume processing module 1136 may be configured for speed-sensitive volume processing. Speed-sensitive volume processing adjusts the volume of the audio signal in accordance with the speed of the vehicle (or some related metric). Volume processing module 1136 may be configured to step the level of the entire signal up or down, or to affect only certain frequency ranges. For example, volume processing module 1136 may be configured to boost the bass region (e.g., 20 Hz to 100 Hz, etc.) of an audio signal when the vehicle is traveling at higher speeds. Volume processing module 1136 may include any number of low-pass, high-pass, and/or band-pass filters configured to assist in boosting or cutting one portion of the signal relative to the rest of the signal.
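As a rough illustration of speed-sensitive volume processing, the sketch below maps vehicle speed to a gain and boosts only a bass band; the gain curve, speed units, and limits are assumptions for illustration and are not taken from the disclosure.
```python
# Illustrative sketch only: the gain curve, speed units, and cap are invented.
def speed_sensitive_gain(speed_kph: float) -> float:
    """Return a gain factor that grows with vehicle speed, up to a cap."""
    base, per_kph, max_gain = 1.0, 0.005, 1.6
    return min(max_gain, base + per_kph * max(0.0, speed_kph))

def boost_bass_at_speed(bass_band, other_bands, speed_kph: float):
    """Boost only the low band (e.g., 20 Hz to 100 Hz) as speed increases."""
    gain = speed_sensitive_gain(speed_kph)
    return [s * gain for s in bass_band], list(other_bands)

print(speed_sensitive_gain(0), speed_sensitive_gain(120))  # 1.0 1.6
```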
[0072] Referring further to FIG. 10, processing system 1104 is shown to include a data bus interface 1146 for operatively coupling processing system 1104 to data bus 1154, according to an exemplary embodiment. Processing system 1104 (and/or a module thereof) may use information or signals available at data bus interface 1146 for one or more processing tasks. For example, volume processing module 1136 may use information available at data bus interface 1146 to determine how to adjust the volume level. Data bus 1154 may be coupled to a vehicle control system 106, an engine electronic control unit (ECU) 1160, a wheel speed sensor 1162, and/or any number of additional vehicle devices or vehicle subsystems (e.g., a vehicle navigation system). According to an exemplary embodiment, volume processing module 1136 may receive an indication of engine speed and determine whether to adjust volume based on the engine speed. According to other exemplary embodiments, volume processing module 1136 does not make a determination regarding whether or not to adjust volume based on the engine speed, but rather determines whether to adjust volume based on user input. If user input indicates that volume processing should be enabled, volume processing module 1136 may constantly be processing source audio and adjusting a level based on engine speed.
[0073] Referring further to FIG. 10, processing system 1104 is shown to include a noise cancellation module 1138, according to an exemplary embodiment. Noise cancellation module 1138 is generally configured to reduce ambient background noise audible to a listener of an audio output device (e.g., audio output device 1126, etc.). Cancelling noise can be accomplished via electronic circuitry and/or via software. Processing system 1104 is shown to include a microphone interface 1144 for receiving microphone signals from a microphone 1152. Microphone 1152 may generally be for coupling to the vehicle interior in one or more places so that the microphone signal detected by the microphone is representative of the audible frequencies a passenger listening to an audio output device might hear. For example, the microphone may be coupled to the vehicle and placed in a rear headrest location proximate the ears of a passenger of average height. Microphone 1152 may be configured to detect the noise, change the detected noise to a microphone signal, and provide the signal to microphone interface 1144. Noise cancellation module 1138 may then process the microphone signal and/or use the microphone signal to process another signal. For example, noise cancellation module 1138 may add a signal out of phase with the microphone signal to the source signal so that the signal provided to the audio output device includes a component that cancels the noise detected by the microphone. Noise cancellation module 1138 can be used to eliminate noise in the audio signal as output from an audio source as well. Ambient sounds, such as those from the engine or from road noise (e.g., from the tires), may also be cancelled.
[0074] Referring still to FIG. 10, noise cancellation module 1138 may also be configured to use a signal from a second audio source (e.g., source 1118) during the noise cancellation activity. For example, vehicle occupants in the front seat may be listening to the radio or a CD through vehicle speakers while passengers in the rear seat are watching a video and listening to it with wireless headphones. Although the radio or CD may be playing only on speakers in the front of the vehicle, the sound from the speakers may permeate to the rear of the vehicle and can interfere with the ability of the rear seat passengers to hear the video clearly through the wireless headphones. Noise cancellation module 1138 may be configured to eliminate or limit this interference by using the signal from the source playing over the front speakers. In particular, noise cancellation module 1138 can use the signal from the radio or CD playing over the front speakers to create a signal exactly out of phase (or nearly out of phase) with the signal from the radio or CD, and thus cancel it out. It is important to note that this sort of "source cancellation" may be used in conjunction with road noise cancellation activities relating to microphone interface 1144. Further, noise cancellation module 1138 may implement or utilize delay logic or electronics to account for the time delay in sound traveling from the front speaker to the rear listener's location. For example, rather than immediately providing an inverted "front source" signal to the audio provided to a rear output device, the noise cancellation module may be configured to delay the inverted signal for some amount of time (e.g., likely on the order of milliseconds).
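The "source cancellation" idea can be sketched as mixing an inverted, delayed copy of the front-speaker signal into the rear listener's audio. The delay length and mixing gain below are illustrative assumptions; a real system would derive them from speaker placement and cabin acoustics.
```python
# Illustrative sketch: delay_samples and gain are assumptions, not disclosed values.
def cancel_front_source(rear_signal, front_source, delay_samples=40, gain=0.3):
    """Mix an inverted, delayed copy of the front-speaker audio into the rear signal."""
    out = []
    for n, sample in enumerate(rear_signal):
        k = n - delay_samples   # delayed index to match acoustic travel time
        anti = -gain * front_source[k] if 0 <= k < len(front_source) else 0.0
        out.append(sample + anti)
    return out

rear = [0.1] * 100    # audio destined for the rear headphones
front = [0.5] * 100   # audio playing over the front speakers
print(cancel_front_source(rear, front)[50])  # roughly -0.05 (0.1 - 0.3 * 0.5)
```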
[0075] Referring further to FIG. 10, processing system 1104 is shown to include sound level protection module 1140, according to an exemplary embodiment. Sound level protection module 1140 may be configured to limit the volume of the audio signal that can be output to a user's audio output device (e.g., a user's personal headphones). This limiting of the volume of the audio signal helps to protect against damage to a person's hearing that may occur if an excessively loud signal is played back through the headphones. Sound level protection module 1140 may include hardware circuitry and/or software logic to determine that a signal is (or will be) excessively loud to a user. The logic of sound level protection module 1140 may consider, for example, a headphone profile stored in memory 1134 relating to a particular set of headphones. The profile may contain information such as headphone sensitivity, headphone type, the impedance of the headphones, etc. For example, an in-ear monitor type headphone with high sensitivity and/or low impedance might require sound protection earlier than a pair of supra-aural headphones having low sensitivity and/or high impedance. Memory 1134 may be configured to store a database of headphone profile information. The database may be pre-populated with profiles for many popular headphone models or types. The database may be updated via a communication interface with an external system (e.g., control system 106), via a user interface element (e.g., element 1156), or otherwise.
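A simple way to picture profile-based sound level protection is a limiter whose ceiling comes from the stored headphone profile. The profile entries and ceiling values below are invented for illustration; a real database would be pre-populated and updatable as described above.
```python
# Illustrative sketch: profile names and ceiling values are assumptions.
HEADPHONE_PROFILES = {
    "in_ear_high_sensitivity": {"max_level": 0.5},
    "supra_aural_low_sensitivity": {"max_level": 0.9},
}

def limit_output(signal, profile_name: str):
    """Clamp the output signal to the ceiling stored for the selected headphones."""
    ceiling = HEADPHONE_PROFILES[profile_name]["max_level"]
    return [max(-ceiling, min(ceiling, s)) for s in signal]

print(limit_output([0.2, 0.8, -1.2], "in_ear_high_sensitivity"))  # [0.2, 0.5, -0.5]
```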
[0076] Referring still to FIG. 10, processing system 1104 is shown to include equalization module 1142, according to an exemplary embodiment. Equalization module 1142 may generally be an equalization filter (hardware and/or software) configured to adjust the source signal before it is output to an output interface. Equalization module 1142 may include any number of parametric equalizers, notch filters, peaking filters, shelving filters, bandpass filters, high-pass filters, low-pass filters, linear filters, and/or any other suitable filter type for affecting audio signals. Equalization module 1142 may be used independently of modules 1134-1140 or may be used by or in conjunction with modules 1134-1140. For example, rather than active noise cancellation (e.g., whereby noise cancellation module 1138 uses a microphone signal to cancel noise), noise cancellation module 1138 may send one or more parameters to equalization module 1142 based on known characteristics of the vehicle's acoustic environment. Because an auto manufacturer is able to measure a car model in a test environment, the auto manufacturer may store information about which frequencies to boost or cut in certain situations. For example, if a particular car model absorbs ambient noise frequencies centered at two hundred hertz more efficiently than other frequencies, equalization module 1142 may be configured to "cut" the source signal around two hundred hertz and/or to "boost" the other frequencies so that source material centered at two hundred hertz does not appear to be too loud in comparison to other frequencies. Similarly, if volume processing module 1136 determines that the engine is operating at a relatively high speed, volume processing module 1136 might use equalization module 1142 to boost only those frequencies that might otherwise be dominated by engine noise at the high engine speed. Further, equalization module 1142 might use headphone profile information stored in memory 1134 to provide a unique equalization curve specifically for the headphone type, model, sensitivity, and/or brand being used.
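By way of example only, the sketch below implements a single peaking (boost/cut) filter of the kind equalization module 1142 might apply around two hundred hertz. The coefficient formulas follow the widely used "Audio EQ Cookbook" peaking-filter equations; the center frequency, gain, and Q values are illustrative assumptions.

```python
# Simplified sketch of one boost/cut band of an equalizer.
# A negative gain_db "cuts" around f0_hz; a positive gain_db "boosts" it.

import math

def peaking_biquad(fs_hz, f0_hz=200.0, gain_db=-6.0, q=1.0):
    """Return normalized (b, a) coefficients for a peaking EQ biquad."""
    a_gain = 10 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0_hz / fs_hz
    alpha = math.sin(w0) / (2.0 * q)
    b = [1 + alpha * a_gain, -2 * math.cos(w0), 1 - alpha * a_gain]
    a = [1 + alpha / a_gain, -2 * math.cos(w0), 1 - alpha / a_gain]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def filter_block(samples, b, a):
    """Direct-form I filtering of one block (a[0] normalized to 1)."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# Example: cut 6 dB around 200 Hz at a 48 kHz sample rate.
b, a = peaking_biquad(48000.0)
shaped = filter_block([0.0, 0.5, -0.5, 0.25], b, a)
```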
[0077] Referring still to FIG. 10, processing system 1104 is shown to include a surround sound module 1148. Surround sound module 1148 may be configured to decode an encoded signal to create a surround sound effect, may process a stereo signal to create a three-dimensional audio effect or to widen the stereo image, and/or otherwise. For example, if the signal provided by first audio source 1116 is a Dolby Digital encoded signal, surround sound module 1148 may be configured to decode the signal into its surround sound parts and then to create a three-dimensional audio effect based on the decoded information.

[0078] Referring yet further to FIG. 10, processing system 1104 is shown to include user interface (UI) element interface 1150, according to an exemplary embodiment. UI element interface 1150 may be configured to receive input from and/or send output to one or more UI elements 1156. A UI element 1156 may include, for example, a touch screen, a switch, a knob, a button, an indicating light, a display screen, a voice recognition device, etc. Processing system 1104 is generally configured to modify its behavior (or the behavior of one or more modules) based on UI signals received at UI element interface 1150. By way of example, a user may use one or more switches to toggle the processing system "on" or "off." Further, a user may use one or more UI elements to turn noise cancellation "off" but to leave sound level protection "on." If UI elements are dials, sliders, touch screen elements, or elements that may be displayed on a detailed screen, for example, processing system 1104 may be configured to controllably vary the amount of sound protection, volume processing, or noise cancellation provided to the source signal. For example, a UI element may be configured to "turn up" or increase the noise cancellation activity. UI element 1156 may also be used to enter information regarding the specific audio output device to be used or to select an audio output device or a characteristic thereof from a menu provided to a display. For example, a user may be able to select "Apple iPod Ear Buds" from a menu. Processing system 1104 may then adjust its behavior based upon known characteristics of iPod ear buds stored in memory 1134. Alternatively, iPod ear buds may only be associated with a headphone "type" or other single descriptor, and processing system 1104 may be configured to operate according to that descriptor. For example, a database may associate iPod ear buds with an "ear bud" type. The "ear bud" type may be associated with a certain equalization curve common to many ear buds, a highly sensitive sound protection mode, no noise cancelling, a reduced level of volume processing, and/or no surround sound simulation. Every type may be associated with the same or different setting combinations. UI element 1156 may be used to change the different setting combinations for audio output device types, headphone brand, headphone model, headphone sensitivity, headphone impedance, user preferences, or otherwise. Settings are generally stored in memory 1134 and recalled later by processing system 1104 after user feedback, device recognition, or otherwise.
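As a non-limiting illustration of the type-and-settings lookup described above, the sketch below maps a headphone model selected through a UI element to a headphone type, and the type to a combination of processing settings. All table entries and keys are placeholders invented for the example; they do not represent the contents of memory 1134.

```python
# Simplified sketch of resolving a UI selection to recalled settings.
# The model names, type names, and setting fields are assumptions.

MODEL_TO_TYPE = {
    "Apple iPod Ear Buds": "ear bud",
    "Generic Supra-Aural": "supra-aural",
}

TYPE_SETTINGS = {
    "ear bud": {
        "eq_curve": "ear_bud_default",
        "sound_protection": "high_sensitivity",
        "noise_cancellation": False,
        "volume_processing": "reduced",
        "surround_simulation": False,
    },
    "supra-aural": {
        "eq_curve": "flat",
        "sound_protection": "standard",
        "noise_cancellation": True,
        "volume_processing": "normal",
        "surround_simulation": True,
    },
}

def settings_for_selection(model_name: str) -> dict:
    """Resolve a menu selection to a processing-settings combination."""
    hp_type = MODEL_TO_TYPE.get(model_name, "ear bud")  # assumed default type
    return dict(TYPE_SETTINGS[hp_type])

settings = settings_for_selection("Apple iPod Ear Buds")
```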
[0079] Referring still to FIG. 10, output interfaces 1106 and 1108 are shown, according to an exemplary embodiment. Output interfaces 1106 and 1108 are shown to include a wired terminal 1166, 1168 and a wireless transceiver 1167, 1169, respectively. Wired terminal 1166, 1168 may be any suitable type of headphone or audio output jack or terminal. For example, wired terminal 1166 may include a stereo three-point-five millimeter female jack. Output interfaces 1106 and 1108 may further include an amplifier, a power supply, and/or other elements necessary to drive a wired audio output device. Wireless transceivers 1167, 1169 may be wireless transceivers of any technology of the past, present, and future suitable for transmitting audio information to an audio output device. For example, wireless transceivers 1167, 1169 may be Bluetooth transceivers, near field communication transceivers, RF transceivers of another type, infrared transceivers, optical transceivers of another type, or otherwise. Wireless transceivers 1167, 1169 and/or output interfaces 1106, 1108 may include amplifiers, power supplies, filters, modulators, and/or any additional or alternative hardware and/or software that may be used to transmit audio information from transceivers 1167, 1169 to a receiver included on audio output devices 1126, 1128.

[0080] Alternative arrangements of the audio processing system of FIG. 10 may be provided. For example, a single audio system may include plural processing systems and wireless transmitters. According to other exemplary embodiments, there may be multiple displays for mounting in a vehicle and each display can include a separate processing system and wireless transmitter. In this arrangement, each passenger can watch a different video while listening to the appropriate accompanying audio signal with wireless headphones. By having a separate processing system for each display unit, the configuration of each processing system can be made simpler. In addition, there can be a dedicated transmitter for each processing system. If implemented in the display, the processing system can be implemented in the same circuit (e.g., an FPGA) that is used to drive the display. The processing system may also, or alternatively, be integrated with an audio interface and/or a wireless transmitting system. For example, a processing system and a wireless transmitter may be integrated into a single unit (e.g., housing, device, etc.). Some of the functions of the processing system and the wireless transmitter can be implemented in the same circuit. For example, the modulation of the audio signal for the wireless transmitter, in addition to the audio processing functions of the processing system, can be implemented in the same circuit.
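For illustration only, the following sketch routes a processed audio block to either the wired terminal or the wireless transceiver of an output interface. The class and method names are placeholders; the amplification, modulation, and pairing details of an actual output interface are omitted.

```python
# Simplified sketch of an output interface exposing wired and wireless paths.

class WiredTerminal:
    def send(self, samples):
        print(f"driving the headphone jack with {len(samples)} samples")

class WirelessTransceiver:
    def __init__(self, technology="Bluetooth"):
        self.technology = technology
    def send(self, samples):
        print(f"transmitting {len(samples)} samples over {self.technology}")

class OutputInterface:
    def __init__(self, wired=None, wireless=None):
        self.wired, self.wireless = wired, wireless
    def transmit(self, samples, prefer_wireless=True):
        transport = self.wireless if (prefer_wireless and self.wireless) else self.wired
        transport.send(samples)

# Example: one interface offering both a 3.5 mm jack and a Bluetooth link.
interface = OutputInterface(WiredTerminal(), WirelessTransceiver("Bluetooth"))
interface.transmit([0.0] * 1024)
```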
[0081] In each of the configurations of the processing system described above (or otherwise), the processing system can be implemented in a centralized location, such as in the dashboard; within a display, such as an overhead or seatback display; or adjacent to a display. Similarly, the wireless transmitter can be implemented at or mounted to a variety of vehicle locations, within a display, adjacent to a display, or otherwise.

[0082] In accordance with this arrangement and the audio processing capabilities of the processing system, it is possible for passengers to listen to high quality audio signals while using simple or standard headphones. Because the processing system performs the audio processing functions, those functions need not be performed by the headphones, and thus simpler and less expensive headphones can be used without sacrificing sound quality (and even enhancing sound quality, in some cases).
[0083] In systems consistent with the present specification, an audio system in a vehicle includes one or more audio sources, each audio source producing an audio signal. The audio system also includes one or more processing (or preprocessing) systems that receive an audio signal from an audio source and perform one or more processing functions on the audio signal including, for example, active noise cancellation, speed sensitive volume control, and sound level protection. A wireless transmitter receives the audio signal processed by a preprocessing unit and transmits the received audio signal wirelessly to one or more wireless headphones.
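The chain summarized above may be pictured, in greatly simplified form, as follows. Every function in this sketch is a placeholder standing in for a corresponding module or device described with reference to FIG. 10, and the numeric values are arbitrary assumptions.

```python
# Simplified sketch: source block -> preprocessing -> wireless transmitter.
# Preprocessing stands in for speed-sensitive volume, active noise
# cancellation, and sound level protection; none of it is production logic.

def preprocess(block, mic_block, engine_rpm, ceiling=0.8):
    gain = 1.0 + min(max(engine_rpm - 800.0, 0.0), 4200.0) / 4200.0  # speed-sensitive volume
    out = []
    for s, m in zip(block, mic_block):
        v = s * gain - m                              # naive microphone-based cancellation
        out.append(max(-ceiling, min(ceiling, v)))    # crude sound level ceiling
    return out

class FakeWirelessTransmitter:
    def send(self, block):
        print(f"transmitting {len(block)} processed samples to wireless headphones")

source_block = [0.10, -0.20, 0.30, 0.50]
mic_block = [0.01, 0.02, -0.01, 0.00]
FakeWirelessTransmitter().send(preprocess(source_block, mic_block, engine_rpm=3000.0))
```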
[0084] Referring now to FIG. 11A, a process 1200 is shown for providing a processing task to a rear entertainment system, according to an exemplary embodiment. Process 1200 includes receiving a source signal from an audio source (step 1202). The audio source signal may be received in any form (e.g., analog, digital, decoded, not decoded, etc.). Once received by the processing system, the system may conduct a processing task on the source signal (step 1204). According to various exemplary embodiments, the source signal may be filtered, changed, inverted, converted, decoded, or otherwise and still be considered the source signal. Process 1200 further includes sending the processed signal to one or more audio interfaces (step 1206). The audio interfaces may conduct any number of volume reducing, volume increasing, switching, transforming, equalizing, or other tasks. Process 1200 yet further includes transmitting an audio signal (via a wired connection or wirelessly) from the one or more audio interfaces to an audio output device (step 1208).

[0085] Referring now to FIG. 11B, a process 1250 is shown for providing a processing task to a rear entertainment system, according to another exemplary embodiment. Process 1250 may define one or more processing tasks discussed above relating to process 1200. Process 1250 is shown to include receiving a user selection signal (step 1252). The user selection signal may include selecting a headphone attribute, selecting processing tasks to be conducted, selecting a user (for recalling user preferences), or otherwise. Process 1250 further includes recalling processing task setting information based on the received user selection signal (step 1254). Recalling may include accessing and/or reading memory, one or more processing tasks, one or more user feedback tasks, or otherwise. Process 1250 is further shown to include adjusting the level of a source signal based on the received user selection signal and/or recalled settings (step 1256). Adjusting the level of a source may include speed sensitive volume control, volume protection, adjusting the default signal strength to match headphone sensitivity, or otherwise. Process 1250 further includes actively cancelling noise based on the received user selection signal and/or recalled settings (step 1258). The user selection signal may toggle noise cancelling on or off, may adjust a level of noise cancelling sensitivity, may adjust the phase of a microphone signal mixed with the source signal, or otherwise. Process 1250 is further shown to include equalizing the source signal based on the received user selection signal and/or recalled settings (step 1260). The system may recall and/or apply an equalization curve according to the vehicle, user preference, headphone attribute, headphone selection, feedback from the microphone, or otherwise. Equalizing may include boosting certain frequencies, cutting certain frequencies, adjusting the phase of a signal, delaying certain frequencies or signal components, or otherwise. Process 1250 further includes receiving a signal from a vehicle subsystem (step 1262). The signal received from the vehicle subsystem may give an indication of vehicle speed, engine speed, environmental noise, or otherwise. Based on the received signal, the process is shown to boost a level of the source signal (step 1264). Boosting may include increasing the level of (amplifying) the entire signal, increasing the level of one or more components of the signal, cutting certain frequencies but otherwise amplifying the signal, or otherwise.

[0086] It should be understood that in any of the above-mentioned embodiments a plurality of displays and/or audio interfaces may be provided. According to various exemplary embodiments, more than two displays and/or interfaces are provided to a rear seat entertainment system.
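As a non-limiting illustration of the ordering of steps in process 1250, the sketch below receives a user selection, recalls settings, and then applies level adjustment, noise cancellation, an equalization placeholder, and a boost driven by a vehicle subsystem signal. The step functions, settings, and values are assumptions; only the sequence mirrors the description above.

```python
# Simplified sketch of the sequence of process 1250 (steps 1252-1264).

def recall_settings(selection):
    # Step 1254 placeholder: look up stored settings for the selection.
    return {"gain": 0.9, "noise_cancel": True, "eq_curve": "ear_bud_default"}

def run_process_1250(source_block, selection, mic_block, vehicle_speed_kph):
    settings = recall_settings(selection)                    # step 1254
    block = [s * settings["gain"] for s in source_block]     # step 1256: adjust level
    if settings["noise_cancel"]:                             # step 1258: cancel noise
        block = [s - m for s, m in zip(block, mic_block)]
    # Step 1260 placeholder: apply settings["eq_curve"] (identity here).
    boost = 1.0 + min(vehicle_speed_kph, 120.0) / 240.0      # steps 1262/1264: bus-driven boost
    return [s * boost for s in block]

# Step 1252: the user selection arrives as the second argument.
processed = run_process_1250([0.1, 0.2, -0.1], "Apple iPod Ear Buds",
                             [0.0, 0.01, 0.0], vehicle_speed_kph=90.0)
```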
[0087] While the exemplary embodiments illustrated in the figures and described herein are presently preferred, it should be understood that these embodiments are offered by way of example only. Accordingly, the present application is not limited to a particular embodiment, but extends to various modifications that nevertheless fall within the scope of the appended claims. The order or sequence of any processes or method steps may be varied or re-sequenced according to alternative embodiments.
[0088] The present application contemplates methods, systems and program products on any machine-readable media for accomplishing its operations. The embodiments of the present application may be implemented using existing computer processors, by a special purpose computer processor for an appropriate system incorporated for this or another purpose, or by a hardwired system.
[0089] It is important to note that the construction and arrangement of the rear seat audio systems and methods as shown in the various exemplary embodiments is illustrative only. Although only a few embodiments have been described in detail in this disclosure, those skilled in the art who review this disclosure will readily appreciate that many modifications are possible (e.g., variations in sizes, dimensions, structures, shapes and proportions of the various elements, values of parameters, mounting arrangements, use of materials, colors, orientations, etc.) without materially departing from the novel teachings and advantages of the subject matter recited in the claims. For example, elements shown as integrally formed may be constructed of multiple parts or elements, the position of elements may be reversed or otherwise varied, and the nature or number of discrete elements or positions may be altered or varied. Accordingly, all such modifications are intended to be included within the scope of the present application. The order or sequence of any process or method steps may be varied or re-sequenced according to alternative embodiments. In the claims, any means-plus-function clause is intended to cover the structures described herein as performing the recited function and not only structural equivalents but also equivalent structures. Other substitutions, modifications, changes and omissions may be made in the design, operating conditions and arrangement of the exemplary embodiments without departing from the scope of the present application.
[0090] As noted above, embodiments within the scope of the present application include program products comprising machine-readable media for carrying or having machine-executable instructions or data structures stored thereon. Such machine-readable media can be any available media which can be accessed by a general purpose or special purpose computer or other machine with a processor. By way of example, such machine-readable media can comprise RAM, ROM, EPROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code in the form of machine-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer or other machine with a processor. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a machine, the machine properly views the connection as a machine-readable medium. Thus, any such connection is properly termed a machine-readable medium. Combinations of the above are also included within the scope of machine-readable media. Machine-executable instructions comprise, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing machines to perform a certain function or group of functions.

[0091] It should be noted that although the figures herein may show a specific order of method steps, it is understood that the order of these steps may differ from what is depicted. Also, two or more steps may be performed concurrently or with partial concurrence. Such variation will depend on the software and hardware systems chosen and on designer choice. It is understood that all such variations are within the scope of the application. Likewise, software implementations could be accomplished with standard programming techniques with rule based logic and other logic to accomplish the various connection steps, processing steps, comparison steps and decision steps.

Claims

WHAT IS CLAIMED IS:
1. A system for mounting in a vehicle, comprising: a processing system; an interface for receiving a source signal from a media source; and a first interface for sending a first audio signal to a first portable audio output device; wherein the processing system is configured to conduct a processing task on the source signal to create a processed signal and to send the processed signal to the first interface for sending prior to the transmission of the first audio signal from the first interface.
2. The system of Claim 1, wherein the first interface includes a wireless transmitter.
3. The system of Claim 1, further comprising: a second interface for sending a second audio signal to a second audio output device, wherein the processing system is further configured to conduct a processing task on the source signal for sending to the second interface.
4. The system of Claim 1, wherein the first interface includes a wireless transmitter, the second interface includes a wireless transmitter, wherein the first and second interfaces are configured to communicate with wireless headphones.
5. The system of Claim 1, wherein the processing system includes a volume processing module configured to adjust a level of the first audio signal, the second audio signal, or the first and second audio signal, wherein the volume processing module is configured to increase the level as a signal representing speed of the vehicle increases.
6. The system of Claim 1, wherein the processing system includes a speed sensitive volume processing module, a sound level protection module, and a noise cancellation module.
7. The system of Claim 1, further comprising: an interface for receiving a microphone signal from a microphone, the microphone for mounting in the vehicle, wherein the processing system includes a noise cancellation module configured to use the microphone signal to filter noise from the first audio signal, the second audio signal, or the first and the second audio signals.
8. The system of Claim 1, further comprising: an interface for receiving a second source signal from a second audio source, the second audio source for providing audio to a vehicle audio system; wherein the processing system includes a noise cancellation module configured to conduct noise cancellation activities based in part on the second source signal.
9. The system of Claim 1, further comprising: an interface for receiving a vehicle-related signal from a vehicle data bus; wherein the processing system is configured to utilize the vehicle-related signal in order to complete the processing task.
10. The system of Claim 1, wherein the processing task includes boosting the bass of the source signal.
11. The system of Claim 1, wherein the processing system includes a surround sound module configured to simulate surround sound using the source signal.
12. The system of Claim 1, further comprising: a third interface for receiving a signal from a user interface element; wherein the processing system is configured to adjust the processing task based on the signal received by the third interface.
13. The system of Claim 1, further comprising: memory configured to store information regarding the first audio output device, the second audio output device, or the first and the second audio output device; wherein the processing system is configured to adjust the processing task based on the information.
14. The system of Claim 1, wherein the processing system is further configured to encode the source signal to an encoded signal according to a predetermined format and to encrypt the encoded signal with an encryption algorithm based on a unique identifier.
15. The system of Claim 1, wherein the interface for receiving a source signal is configured to receive a second source signal from the audio source, the source signal and the second source signal comprising different audio streams relating to the same audio and/or video program; and wherein the processing system is configured to decode the source signal and the second source signal and to provide the decoded source signal to the first audio interface and the decoded second source signal to the second audio interface.
16. The system of Claim 1, wherein the interface for receiving a source signal is configured to receive a second source signal; wherein the system further comprises a channel select interface, wherein the processing system is configured to select one of the source signal and the second source signal based on a state detected at the channel select interface; and wherein the processing system is configured to cause the first interface for sending and the second interface for sending to transmit on different communication channels based on the state detected at the channel select interface.
PCT/US2007/024864 2006-12-05 2007-12-05 Rear seat audio video systems and methods WO2008070093A1 (en)

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US87279106P 2006-12-05 2006-12-05
US60/872,791 2006-12-05
US87449206P 2006-12-13 2006-12-13
US87449306P 2006-12-13 2006-12-13
US60/874,492 2006-12-13
US60/874,493 2006-12-13
US88369407P 2007-01-05 2007-01-05
US60/883,694 2007-01-05

Publications (1)

Publication Number Publication Date
WO2008070093A1 true WO2008070093A1 (en) 2008-06-12

Family

ID=39319610

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2007/024864 WO2008070093A1 (en) 2006-12-05 2007-12-05 Rear seat audio video systems and methods

Country Status (1)

Country Link
WO (1) WO2008070093A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19744603A1 (en) * 1996-10-19 1998-04-23 Volkswagen Ag Head rest for vehicle esp. with integral audio components
EP1072474A2 (en) * 1999-07-28 2001-01-31 DaimlerChrysler AG Audio system for a vehicle
EP1415859A1 * 2002-10-28 2004-05-06 Weiyang Electric Wire & Cable Co., Ltd. Hands-free handset with noise reduction capability for a vehicular cellular phone
US20060068706A1 (en) * 2004-09-30 2006-03-30 Global Targe Enterprise Inc. Connector attached bluetooth wireless earphone

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102013219177A1 (en) * 2012-09-28 2014-04-03 Honda Motor Co., Ltd. Vehicle audio processing unit
FR2996320A1 (en) * 2012-09-28 2014-04-04 Honda Motor Co Ltd VEHICLE AUDIO PROCESSING UNIT
DE102013219177B4 (en) * 2012-09-28 2015-12-03 Honda Motor Co., Ltd. Vehicle audio processing unit
US9306521B2 (en) 2012-09-28 2016-04-05 Honda Motor Co., Ltd. Vehicular audio processing unit and communication system including same
EP3123613B1 (en) * 2014-03-26 2019-12-04 Bose Corporation Collaboratively processing audio between headset and source to mask distracting noise
WO2016142170A1 (en) * 2015-03-10 2016-09-15 Bayerische Motoren Werke Aktiengesellschaft Audio control in vehicles
US11042348B2 (en) 2015-03-10 2021-06-22 Bayerische Motoren Werke Aktiengesellschaft Audio control in vehicles
WO2018035873A1 (en) * 2016-08-26 2018-03-01 华为技术有限公司 Audio data processing method, terminal device, and storage medium
US11477591B2 (en) 2016-08-26 2022-10-18 Honor Device Co., Ltd. Audio data processing method, terminal device, and storage medium

Similar Documents

Publication Publication Date Title
JP6209684B2 (en) Voice tuning according to the situation
CN1701520B (en) Audio system with balance setting and method for controlling balance setting
US9813813B2 (en) Customization of a vehicle audio system
JP5053432B2 (en) Vehicle infotainment system with personalized content
US20080165988A1 (en) Audio blending
US20110116642A1 (en) Audio System with Portable Audio Enhancement Device
JP2003158789A (en) Vehicle-mounted sound system
JP2010064742A (en) Vehicle infortainment system having virtual personalization setting
CN102884797A (en) Electronic adapter unit for selectively modifying audio or video data for use with an output device
US9628894B2 (en) Audio entertainment system for a vehicle
JP2015513876A (en) Audio system
US7742610B1 (en) Automobile audiovisual system
WO2008070093A1 (en) Rear seat audio video systems and methods
US20060282861A1 (en) Audio/video expansion device and vehicular audio/video system using the same
US20050282600A1 (en) Car stereo for communicating with portable music player using wired connection
US20120163621A1 Original equipment manufacturer ("oem") integration amplifier
KR100758133B1 (en) Device and Method of Multimedia Regeneration Considering Properties and Location Information of User
KR100923964B1 (en) Audio System For Vehicle And Audio Signal Automatic Control Method Thereof
KR20190061581A (en) System and method of controlling audio for vehicle based bluetooth communication
US20220210593A1 (en) Combining prerecorded and live performances in a vehicle
EP3007356A1 (en) Vehicle audio system with configurable maximum volume output power
WO2005090126A1 (en) Audio device interface system
JP2000021144A (en) On-board audio apparatus
JP4417179B2 (en) Car audio system and headphones
CN103747404A (en) Surround-sound power-amplifier system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07853246

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07853246

Country of ref document: EP

Kind code of ref document: A1