EP1411517B1 - Control unit and method for transmitting audio signals over an optical network


Info

Publication number
EP1411517B1
Authority
EP
European Patent Office
Prior art keywords
audio data
data stream
optical network
audio
channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
EP03103849A
Other languages
English (en)
French (fr)
Other versions
EP1411517A1 (de)
Inventor
Dennis Stolyarov
Yuri Stotski
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Motorola Solutions Inc
Original Assignee
Motorola Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Motorola Inc
Publication of EP1411517A1
Application granted
Publication of EP1411517B1
Anticipated expiration
Legal status: Expired - Lifetime

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R16/00Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
    • B60R16/02Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
    • B60R16/03Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements for supply of electrical power to vehicle subsystems or for
    • B60R16/0315Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements for supply of electrical power to vehicle subsystems or for using multiplexing techniques
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04JMULTIPLEX COMMUNICATION
    • H04J3/00Time-division multiplex systems
    • H04J3/02Details
    • H04J3/06Synchronising arrangements
    • H04J3/0635Clock or time synchronisation in a network
    • H04J3/0685Clock or time synchronisation in a node; Intranode synchronisation
    • H04J3/0691Synchronisation in a TDM node
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/28Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/2803Home automation networks

Definitions

  • This invention in general relates to the transmission of audio signals through a vehicle and, more particularly, to a control unit and method for receiving audio signals from the cabin of the vehicle and transmitting audio signals over an optical network to other audio processing units.
  • the automotive industry has seen a significant increase in the number of in-vehicle intelligent systems and services. These systems and services are intended to facilitate and improve communication between the user and the vehicle as well as between the user and the outside world. For safety reasons, the industry is focused on providing hands-free features to people who use wireless communications in their vehicles. One area that needs improvement is the quality of voice communication in the vehicle. Efforts to improve the quality of voice communication have centered on introducing new audio signal processing algorithms, new distributed microphones, and new microphone arrays. To help shield transmitted audio signals from external interference, the industry has introduced an optical network according to a communication protocol known as the Media Oriented Systems Transport or MOST®. Further information about the MOST® optical network protocol may be found on the Internet at www.oasis.com.
  • the MOST® optical network communication protocol has a limit of four independent audio streams (channels) that can be assigned to a control unit that transmits over the optical network. This is primarily due to limitations of current hardware interfaces. Current hardware interfaces sample at the same frequency rates supported by the MOST® optical network communication protocol, which are 38 kHz, 44.1 kHz, and 48 kHz. Even though the original design of the MOST® optical network communication protocol supports up to 15 synchronous 4-byte wide audio channels, the interface configuration to the optical network restricts the number of synchronous audio channels that can be assigned to an in-vehicle module. To improve the overall user experience and support better quality voice communications, a need exists for additional microphones and microphone arrays. Simply adding additional transducers in known systems, however, will result in a significant increase in cost and system complexity.
  • EP-A-1068997 describes audio communication using an optical network in a vehicle.
  • control unit for transmitting and distributing multiplexed audio data over an optical network.
  • the control unit comprises an audio sampler, a microprocessor, and an optical network interface.
  • the audio sampler samples a plurality of electrical signals from transducers and generates a plurality of raw audio data streams from the electrical signals.
  • the audio sampler is capable of sampling the electrical signals at a fraction of a frame synchronization rate (F s ) of the optical network.
  • the microprocessor has an audio processor function and a multiplexer function.
  • the audio processor function is capable of processing the raw audio data streams to generate a single processed audio data stream at or below the frame synchronization rate (F s ) of the optical network.
  • the multiplexer function is capable of generating a multiplexed audio data stream having a plurality of frames. Each frame has a plurality of time division multiplexed channels wherein a first channel within each frame is assigned to transmit the plurality of raw audio data streams and a second channel within each frame is assigned to transmit the processed audio data stream.
  • the microprocessor is capable of multiplexing the raw audio data streams and the single processed audio data stream.
  • the optical network interface receives the multiplexed audio data stream from the microprocessor and generates an optical multiplexed audio data stream based on the multiplexed audio data stream from the microprocessor.
  • the frame synchronization rate (F s ) is provided by the optical network interface.
  • the fraction of the frame synchronization rate sampled by the audio sampler may include a variety of rates, including one-sixth, one-fourth, one-third and one-half (F s /6, F s /4, F s /3, F s /2).
  • the control unit may include a wireless device interface for connecting to a wireless communication device.
  • the microprocessor is capable of receiving audio data from the wireless device interface and generating a downlink audio data stream at or below the frame synchronization rate (F s ) of the optical network.
  • the multiplexer function of the microprocessor would then be further capable of generating a multiplexed audio data stream having the plurality of frames wherein a third channel within each frame is assigned to transmit the downlink audio data stream.
  • the present invention may further include a procedure for inserting at least two bits within the data sample of the first channel to identify a time slot within the first channel that corresponds to the specific raw audio data stream.
  • the present invention may utilize a separate control channel that would include information to inform secondary audio processing units about the characteristics of the first channel that is transmitting the raw audio data streams.
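The slot-identification idea above can be sketched in code. This is a hypothetical packing (the patent does not fix a bit layout): the two least significant bits of each 16-bit word on the shared channel carry a slot ID, leaving 14 bits of audio.

```python
# Hypothetical sketch: embed a 2-bit time-slot ID in each 16-bit sample so a
# secondary unit can tell which transducer a sample on the shared channel
# came from. The exact bit positions are an assumption, not from the patent.

def tag_sample(sample_14bit: int, slot: int) -> int:
    """Pack a 2-bit slot ID (0-3) into the two LSBs of a 16-bit word."""
    assert 0 <= slot <= 3
    return ((sample_14bit & 0x3FFF) << 2) | slot

def untag_sample(word: int) -> tuple:
    """Recover (sample, slot) from a tagged 16-bit word."""
    return word >> 2, word & 0x3

word = tag_sample(0x1234, 2)
assert untag_sample(word) == (0x1234, 2)
```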
  • the control unit for transmitting and distributing multiplexed audio data over an optical network from a first transducer and a second transducer.
  • the control unit comprises an audio sampler, a microprocessor and an optical network interface.
  • the audio sampler samples a first electrical signal from the first transducer and a second electrical signal from the second transducer.
  • the audio sampler is capable of sampling the first and second electrical signals to generate a first raw audio data stream and a second raw audio data stream.
  • the microprocessor has an audio processor function and a multiplexer function.
  • the audio processor function is capable of processing the first and second raw audio data streams to generate a single processed audio data stream.
  • the multiplexer function is capable of generating a multiplexed audio data stream having a first and second frame.
  • Each frame having a plurality of time division multiplexed channels wherein: a first sample of the first raw audio data stream is transmitted in a first channel during the first frame; a first sample of the processed audio data stream is transmitted in a second channel during the first frame; a first sample of the second raw audio data stream is transmitted in the first channel during the second frame, and a second sample of the processed audio data stream is transmitted in the second channel during the second frame.
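The frame-by-frame assignment above can be illustrated with a short sketch; the stream names and sample values are invented for illustration, and real MOST® frames carry many more channels:

```python
# Sketch of the two-transducer assignment: the two raw streams alternate on
# the first channel (one sample every other frame), while the processed
# stream occupies the second channel in every frame.

raw = {"T1": [10, 11], "T2": [20, 21]}   # raw samples, one per Fs/2 period
processed = [90, 91]                      # processed samples, one per frame

frames = []
raw_order = ["T1", "T2"]                  # round-robin on the shared channel
for i in range(2):
    src = raw_order[i % len(raw_order)]
    frames.append({
        "channel_1": (src, raw[src][i // len(raw_order)]),
        "channel_2": ("processed", processed[i]),
    })

# Frame 1: T1 sample 1 on channel 1, processed sample 1 on channel 2
# Frame 2: T2 sample 1 on channel 1, processed sample 2 on channel 2
assert frames[0]["channel_1"] == ("T1", 10)
assert frames[1]["channel_1"] == ("T2", 20)
```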
  • the optical network interface receives the multiplexed audio data stream from the microprocessor and generates an optical multiplexed audio data stream based on the multiplexed audio stream from the microprocessor.
  • the system comprises a plurality of transducers, a control unit, and a secondary audio processing unit.
  • the plurality of transducers convert sound within a cabin of the vehicle to electrical signals.
  • the control unit has an audio sampler, a microprocessor and an optical network interface.
  • the audio sampler samples the electrical signals from the plurality of transducers and generates a plurality of raw audio data streams from the electrical signals.
  • the microprocessor has an audio processor function and a multiplexer function.
  • the audio processor function is capable of processing the raw audio data streams to generate a single processed audio data stream.
  • the multiplexer function is capable of generating a multiplexed audio data stream having a plurality of frames, each frame having a plurality of time division multiplexed channels wherein a first channel within each frame is assigned to transmit the plurality of raw audio data streams and a second channel within each frame is assigned to transmit the processed audio data stream.
  • the optical network interface receives the multiplexed audio data stream from the microprocessor and generates an optical multiplexed audio data stream based on the multiplexed audio data stream from the microprocessor.
  • the secondary audio processing unit is connected to the optical network to receive and process the optical multiplexed audio data stream.
  • a method for transmitting or distributing multiplexed audio data over an optical network comprises the steps of: sampling a plurality of electrical signals from transducers at a fraction of a frame synchronization rate (F s ) of the optical network and generating a plurality of raw audio data streams; processing the plurality of raw audio data streams into a single processed audio data stream at the frame synchronization rate (F s ) of the optical network; multiplexing the plurality of raw audio data streams with the single processed microprocessor audio data stream and generating a multiplexed audio data stream having a plurality of frames, each frame having a plurality of time division multiplexed channels wherein a first channel within each frame is assigned to transmit the plurality of raw audio data streams and a second channel within each frame is assigned to transmit the processed audio data stream; and converting the multiplexed audio data stream into an optical multiplexed audio data stream for transmission over the optical network.
  • the method may further comprise the steps of: receiving a downlink audio data stream from a wireless communication device; and multiplexing the plurality of raw audio data streams, the single processed microprocessor audio data stream, and the downlink audio data stream, and generating the multiplexed audio data stream having the plurality of frames wherein a third channel within each frame is assigned to transmit the downlink audio data stream.
  • the method may also comprise the steps of: generating a control data stream; and multiplexing the plurality of raw audio data streams, the single processed microprocessor audio data stream, and the control data stream, and generating the multiplexed audio data stream having the plurality of frames wherein a fourth channel within each frame is assigned to transmit the control data stream.
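The steps of the method can be sketched as follows; the channel numbering, helper name, and sample values are illustrative, not from the patent:

```python
# Sketch of the full multiplexing method: k raw streams (sampled at Fs/k)
# share the first channel round-robin, while the processed, downlink, and
# control streams each occupy a dedicated channel in every frame.

def multiplex(raw_streams, processed, downlink, control, n_frames):
    frames = []
    k = len(raw_streams)                   # raw streams sampled at Fs/k
    for f in range(n_frames):
        frames.append({
            1: raw_streams[f % k][f // k], # shared raw channel, round-robin
            2: processed[f],               # processed stream at Fs
            3: downlink[f],                # downlink audio at Fs
            4: control[f],                 # control data
        })
    return frames

raw = [[100], [200], [300], [400]]         # one Fs/4 sample per transducer
frames = multiplex(raw, [1, 2, 3, 4], [5, 6, 7, 8], ["c"] * 4, 4)
assert [fr[1] for fr in frames] == [100, 200, 300, 400]
assert [fr[2] for fr in frames] == [1, 2, 3, 4]
```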
  • a vehicle 20 may have a system comprising a plurality of transducers 22A-D within transducer array 24, a control unit 26, and at least one secondary audio processing unit 28A-D.
  • the transducer arrays 24 are connected to the control unit 26 through wired connections 30.
  • the control unit 26 and the secondary audio processing units 28A-D are connected through an optical network 32.
  • the optical network 32 shown in FIG. 1 is configured in a ring topology. However, other topologies could be used such as a point-to-point topology and a star topology.
  • the optical network may operate according to known optical communication protocols such as the MOST® optical network communication protocol.
  • Each transducer array 24 may include a plurality of transducers 22A-D.
  • each transducer array 24 is shown to have four transducers 22A-D although a number of other configurations could be used.
  • the transducers 22A-D convert sounds in the cabin of the vehicle 20 to electrical signals.
  • the electrical signals may then be sent to the control unit 26 over wired connections 30.
  • the electrical signals over the wired connections 30 from the transducers 22A-D are analog signals that are sent to the control unit 26 for processing and routing to the secondary audio processing units 28A-D.
  • the transducers 22A-D may be co-located within the transducer array 24. Alternatively, a plurality of individual transducers could be distributed throughout the cabin of the vehicle 20.
  • the control unit 26 receives and processes the analog signals from the transducers 22A-D in the transducer arrays 24.
  • the received audio signals may be processed for transmission over a wireless communication link for hands-free voice communications during a voice call.
  • the control unit 26 may provide audio signals, over the optical network, to the secondary audio processing units 28A-D for further processing.
  • the present invention allows the secondary audio processing units 28A-D the flexibility to choose between receiving raw digital audio data from each transducer 22A-D or processed audio data generated from all the transducers 22A-D.
  • the secondary audio processing units 28A-D are connected to the optical network 32 and may represent known processing units that perform functions that require audio data from the cabin of the vehicle 20.
  • one type of secondary audio processing unit may be a unit that handles voice recognition commands.
  • a voice recognition unit identifies voice commands in the digital audio data and processes the voice commands, such as "place call, 888-555-1234" or "call office."
  • Another type of secondary audio processing unit may be a speech-to-text unit that converts voice from an occupant in the vehicle 20 to text messages for purposes of generating notes and other memoranda.
  • a further type of secondary audio processing unit may be an in-vehicle wireless transceiver unit that receives digital audio data from the cabin of the vehicle 20 and processes the data for transmission over a wireless communication link in a hands-free environment.
  • Yet another type of secondary audio processing unit may be a vehicle audio system that receives downlink audio from the wireless communication device for broadcast over the vehicle speakers.
  • a control unit 26 may include a microprocessor 34, an audio sampler 36, a wireless device interface 38, and an optical network interface 40.
  • the control unit 26 has three inputs: audio signals 42A-42D received from the transducers 22A-22D; downlink audio signals or data 44 received from a wireless communication device 46; and optical data 48 from the optical network 32.
  • the transducers 22A-D may be centrally located within a transducer array 24 or individually distributed through the main cabin of the vehicle 20.
  • the wireless communication device 46 may be connected to the control unit 26 through a wired connection or through a short-range wireless connection enabled by techniques such as Bluetooth™.
  • the optical network 32 may provide optical data to the control unit 26 for a variety of purposes. For instance, the optical network 32 may provide the control unit 26 with data from the secondary processing units 28A-D for the purposes of broadcasting or transmitting audio to other devices.
  • the control unit 26 may also have the following outputs: optical audio data 50 that is time multiplexed for distribution over the optical network to the secondary audio processing units 28A-D; and uplink audio signals or data 52 for transmission over a wireless communication link by the wireless communication device 46.
  • the output may need to conform to a particular communication protocol such as the MOST® optical network communication protocol. The formation of data for this output is described in more detail below.
  • the control unit 26 may be connected to the wireless communication device 46 through a wired connection or through a short-range wireless connection enabled by techniques such as Bluetooth™.
  • the audio sampler 36 receives the electrical audio signals 42A-D from the transducers 22A-D.
  • the transducers 22A-D will also be referred to as a first transducer 22A, a second transducer 22B, a third transducer 22C, and a fourth transducer 22D.
  • the audio sampler 36 may reside in the control unit 26 or, alternatively, may be a separate unit that provides a series of inputs to the control unit 26.
  • the audio sampler 36 takes samples of the electrical signals 42A-D and converts the electrical signals 42A-D to a format acceptable for further processing in the control unit 26.
  • the control unit 26 contains a microprocessor 34 with a digital signal processor controller.
  • the electrical signals 42A-D are converted to raw digital audio signals 54A-D.
  • the audio sampler 36 may include components such as amplifiers and analog to digital (A/D) converters.
  • the sampling rate of the audio sampler 36 depends on a frame synchronization rate (F s ) of the optical network 32.
  • the microprocessor 34 may receive the F s from the optical network interface 40. The microprocessor 34 may then provide a sampling rate, based on the F s , to the audio sampler 36. In one embodiment, the sampling process may be at a fraction of the F s accepted by the optical network 32. As will be explained in more detail below, depending on the optical network communication protocol, varying the sampling rate in this way provides the advantage of efficiently transmitting audio data from several transducers 22A-D in the cabin of the vehicle 20.
  • the frame synchronization rate (F s ) may be 38 kHz, 44.1 kHz or 48 kHz at a 32-bit resolution.
  • the sampling rate may be set in the microprocessor 34 to 11.025 kHz for each A/D converter. This can be done by a timing control 60 in the microprocessor 34.
  • the standard acceptable bit resolution for pulse code modulation (PCM) audio is typically a 16-bit resolution. Accordingly, in a preferred embodiment, the sampling would be done with a 16-bit resolution for each sample within each A/D converter. This would result in a 16-bit linear PCM data signal.
  • the audio sampler outputs four streams of raw digital audio data 54A-D.
  • Each stream of raw digital audio data 54A-D is representative of one of the four analog signals provided by the transducers 22A-D.
  • each of the electrical signals 42A-D generated by the transducers 22A-D is a composite of sound components in the cabin of the vehicle 20.
  • the streams of raw digital audio data 54A-D contain this composite of sound components and are provided to the microprocessor 34 for further processing.
  • the microprocessor 34 in the control unit 26 has the capability of processing the streams of raw digital audio data 54A-D from the audio sampler 36.
  • a suitable microprocessor 34 may be a signal processor controller such as a Motorola MGT 5100.
  • the microprocessor 34 of the present invention preferably includes a number of functional blocks.
  • the microprocessor 34 has at least the following functional blocks: an audio processor 56, a multiplexer 58, a timing control 60, and a sample rate converter 61. These functional blocks may be microcoded signal processing steps that are programmed as operating instructions in the microprocessor 34.
  • the audio processor 56 may be used to generate a single stream of processed audio data 62 from the streams of raw digital audio data 54A-D.
  • the audio processor uses known algorithms and techniques of adaptive beam forming and adaptive noise reduction. These techniques are known to dynamically adapt the various streams of raw digital audio data 54A-D from the transducers 22A-D in the transducer array 24 so that the transducers' pickup pattern can be better directed to the speaker in the vehicle 20.
  • the audio processor 56 After processing the various streams of raw digital audio data 54A-D, the audio processor 56 generates a single stream of processed digital audio data 62.
  • the single stream of processed digital audio data 62 may be processed by the multiplexer 58 for transmission over the optical network 32 for use by the secondary audio processing units 28A-D or the wireless device interface 38 for transmission of uplink audio over a wireless communication link for use by the wireless communication device 46.
  • the present invention permits the secondary audio processing units 28A-D the choice of using the single stream of processed audio data 62 generated by the audio processor 56 in the microprocessor 34 or the individual streams of raw digital audio data 54A-D from each of the transducers 22A-D. As described in more detail below, this benefit is realized through the use of a specific method for multiplexing various streams of audio data over the optical network 32 through the multiplexer 58.
  • the sample rate converter 61 may be used by the microprocessor 34 to convert external audio samples at one sampling rate to audio samples at another sampling rate.
  • the raw downlink audio from a wireless device interface 38 may be at a sampling rate that is different from the frame synchronization rate (F s ) of the optical network 32.
  • a typical sampling rate is about 8 kHz.
  • the sample rate converter 61 converts the incoming audio data from the wireless device interface 38 to a sampling rate that is based on the frame synchronization rate (F s ) of the optical network 32. This produces a stream of downlink audio data 64 that is used by the multiplexer 58 for transmission to the optical network interface 40 and then to the optical network 32.
  • the sample rate converter 61 may also convert the outgoing single stream of processed audio data 62 from a sampling rate that is based on the frame synchronization rate (F s ) of the optical network 32 to a sampling rate acceptable for the wireless communication device 46.
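The converter's job can be illustrated with a minimal linear-interpolation resampler. A real implementation would use a proper polyphase or windowed-sinc filter; the function below is only a sketch, and the rates are examples:

```python
# Minimal sketch of sample rate conversion (e.g. 8 kHz wireless audio to a
# 44.1 kHz Fs-based rate) using linear interpolation between input samples.

def resample(samples, rate_in, rate_out):
    """Resample a mono PCM sequence from rate_in to rate_out."""
    n_out = int(len(samples) * rate_out / rate_in)
    out = []
    for i in range(n_out):
        pos = i * rate_in / rate_out           # position in the input stream
        j = int(pos)
        frac = pos - j
        a = samples[j]
        b = samples[min(j + 1, len(samples) - 1)]
        out.append(a + (b - a) * frac)         # linear interpolation
    return out

up = resample([0.0, 1.0, 0.0, -1.0] * 10, 8_000, 44_100)
assert len(up) == int(40 * 44_100 / 8_000)     # upsampled length
```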
  • the multiplexer 58 receives several sources of audio data that need to be processed for transmission over the optical network 32. For instance, in one embodiment, the multiplexer 58 receives at least six audio data streams: the four streams of raw audio digital data 54A-D from the transducers 22A-D; the single stream of processed audio data 62 from the audio processor 56; and the downlink audio data 64 from the wireless communication device 46. As mentioned above, the four streams of raw audio digital data 54A-D from the transducers 22A-D are preferably in a 16-bit linear PCM data signal that has a sampling rate of 11.025 kHz.
  • the four streams of raw audio digital data 54A-D may be needed by some secondary audio processing units 28A-D that prefer to use their own audio processing algorithms of adaptive beam forming and/or adaptive noise reduction.
  • One type of secondary audio processing unit 28A-D that is known to use its own audio processing algorithms is a voice recognition unit.
  • the single stream of processed audio data 62 may also be a 16-bit linear PCM data signal. However, the sampling rate in one embodiment is set at 44.1 kHz.
  • the single stream of processed audio data 62 may be needed by some secondary audio processing units 28A-D that do not have their own audio processing algorithms such as a speech-to-text unit.
  • the downlink audio data 64 from the rate converter 61 may further be a 16-bit linear PCM data signal having a sampling rate of 44.1 kHz.
  • the downlink audio data 64 may be needed by some secondary audio processing units 28A-D such as the vehicle audio system for broadcasting voice calls over the vehicle speakers.
  • the multiplexer 58 is configured to combine the four streams of raw audio data 54A-D, the single stream of processed audio data 62, and the downlink audio data 64.
  • the current hardware limitations only allow data to be multiplexed over four channels.
  • the present invention advantageously allows the control unit 26 to transmit the six audio data sources over the four existing channels.
  • the four streams of raw audio data 54A-D have a sampling rate that is a fraction of the frame synchronization rate (F s ) of the optical network 32.
  • the sampling rate is set at 11.025 kHz for each of the four streams of raw audio data 54A-D.
  • the single stream of processed audio data 62 and the downlink audio data 64 have a sampling rate at the frame synchronization rate (F s ) of the optical network, 44.1 kHz.
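The rate arithmetic above can be checked directly: four streams at F s /4 exactly fill one channel running at F s , which is why six sources fit within the four-channel limit. Sample-rate values follow the 44.1 kHz embodiment described above:

```python
# Channel-budget check: four raw streams at Fs/4 share one Fs-rate channel,
# so six audio sources occupy only three of the four available channels
# (shared raw channel + processed channel + downlink channel).

FS = 44_100                      # frame synchronization rate, Hz
raw_rate = FS / 4                # 11.025 kHz per raw stream
assert 4 * raw_rate == FS        # four raw streams exactly fill one channel

channels_used = 1 + 1 + 1        # shared raw + processed + downlink
assert channels_used <= 4        # within the four-channel hardware limit
```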
  • the multiplexer 58 can be configured in software within the microprocessor 34 to provide an audio interface to the optical network 32 through the optical network interface 40.
  • the optical network interface 40 is a MOST® Network Transceiver, OS8104, that can be obtained from Oasis SiliconSystems AG, Austin, Texas.
  • the optical network interface 40 is capable of receiving and transmitting data between external applications and the MOST® network simultaneously.
  • the multiplexer 58 provides the optical network interface 40 with a stream of multiplexed audio data 66.
  • the optical network interface 40 converts the multiplexed audio data 66 from the microprocessor 34 to optical multiplexed audio data 50.
  • the multiplexer 58 performs a specific time division multiplexing operation to interleave the audio data received from the various audio sources and generate multiplexed audio data 66 to the optical network interface 40.
  • the time division multiplexing is done in a sequence as shown in FIGS. 3 and 4.
  • FIGS. 3 and 4 show a series of frames 70. For purposes of illustration, five sequential frames are shown: Frame 1, Frame 2, Frame 3, Frame 4, and Frame 5.
  • the size of each frame 70 is set by the frame synchronization rate (Fs) that is defined by an optical network communication protocol used by the optical network interface 40.
  • the multiplexer 58 configures eight channels within each frame.
  • the size of each channel is based on the frame synchronization rate (F s ).
  • some of the channels may be representative of the left channel (L) in an audio stereo system (channels 1, 3, 5, 7).
  • Other channels may be representative of the right channel (R) in an audio stereo system (channels 2, 4, 6, 8).
  • If the system only needs to transmit audio data in a mono format, only one of the left or right channels may be used.
  • the following description will be based on a channel allocation for a mono system using only the left channels (channel 1, 3, 5 and 7).
  • Each channel may be an audio PCM 16-bit channel that supports PCM data rates according to the frame synchronization rate (F s ) of the optical network and according to a fraction of the frame synchronization rate (such as F s /2 or F s /4), depending on the assignment.
  • a first of four left channels may be assigned to several audio sources to support data at a rate equal to a fraction of the frame synchronization rate (F s ). This will enable the first left channel to transmit data samples from more than one audio source.
  • the size of the fraction used to sample the sources for the first left channel will dictate how many audio sources may be transmitted over that channel.
  • the other three channels (channels 1, 3, and 5) may be assigned to separate sources to support data at a rate equal to the frame synchronization rate (F s ) of the optical network.
  • If a first left channel (channel 7) is configured to transmit audio sampled at one-fourth the frame synchronization rate (F s /4), then up to four audio sources having this sampling rate may be time multiplexed on that channel.
  • the result is a channel assignment that has the four audio sources alternating on a time basis across four frames.
  • the first left channel may be assigned to transmit data for the four streams of raw digital audio data 54A-D from the first transducer 22A, the second transducer 22B, the third transducer 22C, and the fourth transducer 22D.
  • the first sample of the raw digital audio data 54A from the first transducer 22A may be transmitted during Frame 1.
  • the first sample of the raw digital audio data 54B from the second transducer 22B may be transmitted during Frame 2.
  • the first sample of the raw digital audio data 54C from the third transducer 22C (sample 1) may be transmitted during Frame 3.
  • the first sample of the raw digital audio data 54D from the fourth transducer 22D (sample 1) may be transmitted during Frame 4.
  • the second sample of the raw digital audio data 54A from the first transducer 22A may then be transmitted. This process continues in a time-multiplexed manner.
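A receiving unit can reverse this round-robin by frame index. The sketch below assumes the receiver already knows which frame carried the first transducer's first sample (e.g. via the control channel or slot bits described above):

```python
# Sketch of demultiplexing the shared channel: with four sources alternating
# across frames, the originating transducer of each sample is given by the
# frame index modulo four.

def demux(shared_channel_samples, n_sources=4):
    """Split the round-robin channel back into per-transducer streams."""
    streams = [[] for _ in range(n_sources)]
    for frame_idx, sample in enumerate(shared_channel_samples):
        streams[frame_idx % n_sources].append(sample)
    return streams

# Frames 1-8 carry T1, T2, T3, T4, T1, T2, T3, T4 in turn:
streams = demux([10, 20, 30, 40, 11, 21, 31, 41])
assert streams[0] == [10, 11]    # transducer 1's samples
assert streams[3] == [40, 41]    # transducer 4's samples
```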
  • the multiplexing on the first left channel differs from that on other channels, which may need to be configured at a higher sampling rate.
  • one of the other left channels (such as channel 5) may be configured to support PCM data rates at the frame synchronization rate (Fs).
  • channel 5 could be assigned to transmit the processed digital audio data 62 from the audio processor 56 in the microprocessor 34.
  • the first sample of the processed digital audio data 62 from microprocessor 34 (sample 1) may be transmitted during Frame 1.
  • the second sample of the processed digital audio data 62 from the microprocessor 34 (sample 2) may be transmitted during Frame 2.
  • the third sample of the processed digital audio data 62 from the microprocessor 34 (sample 3) may be transmitted during Frame 3.
  • the fourth sample of the processed digital audio data 62 from the microprocessor 34 (sample 4) may be transmitted during Frame 4.
  • the fifth sample of the processed digital audio data 62 from the microprocessor 34 (sample 5) may be transmitted during Frame 5. This process continues in a time-multiplexed manner.
  • if the first left channel (channel 7) is configured to transmit audio sampled at one-half the frame synchronization rate (Fs/2), then up to two audio sources having this sampling rate may be time multiplexed on the channel.
  • if the first left channel (channel 7) is assigned to transmit the raw digital audio data 54A-B, then only two of the raw data streams of the transducers 22A-B could be transmitted over the first left channel.
  • the data from the other two transducers 22C-D could be assigned to another channel in a similar manner.
  • the transmission of the raw digital audio data 54A-B from the first transducer 22A and the second transducer 22B may proceed as follows:
  • the first sample of the raw digital audio data 54A from the first transducer 22A (sample 1) may be transmitted during Frame 1.
  • the first sample of the raw digital audio data 54B from the second transducer 22B (sample 1) may be transmitted during Frame 2.
  • the second sample of the raw digital audio data 54A from the first transducer 22A (sample 2) may be transmitted during Frame 3.
  • the second sample of the raw digital audio data 54B from the second transducer 22B (sample 2) may be transmitted during Frame 4.
  • the third sample of the raw digital audio data 54A from the first transducer 22A (sample 3) may then be transmitted. This process continues in a time-multiplexed manner.
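The receiving side of this interleaving can be sketched the same way: every n-th sample of the shared channel belongs to the same source. The function below is a hypothetical illustration (the patent does not give this code) and works for any fraction Fs/n.

```python
# Hypothetical receive-side counterpart of the interleaving above:
# stream i of n is recovered by taking channel samples i, i+n, i+2n, ...

def demultiplex_fraction_rate(channel, n):
    """Split a shared full-rate channel into its n Fs/n source streams."""
    return [channel[i::n] for i in range(n)]
```

For the two-transducer (Fs/2) example above, demultiplex_fraction_rate(channel, 2) returns the samples of transducer 22A followed by those of transducer 22B, each in order.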
  • FIG. 5 shows an embodiment of a buffer 72 that may be used by the multiplexer 58.
  • the buffer 72 stores data according to an assignment scheme that includes the transmission of audio data signals on the left channels in a mono format (channels 1, 3, 5, 7).
  • the data type assigned for each channel includes the following format.
  • One of the channels (e.g., Channel 1) may be assigned such that each sample contains 16-bit resolution at the frame synchronization rate (Fs).
  • Another channel (e.g., Channel 3) may be assigned for the transmission of the downlink audio data 64 from the wireless communication device 46.
  • Each sample may contain 16-bit resolution at the frame synchronization rate (Fs).
  • Another channel (e.g., Channel 5) may be assigned for the transmission of the processed audio data 62 from the audio processor 56 in the microprocessor 34.
  • Each sample may contain 16-bit resolution at the frame synchronization rate (Fs).
  • a further channel (e.g., Channel 7) may be assigned for the transmission of the four streams of raw digital audio data 54A-D from the transducers 22A-D.
  • Each sample may contain 16-bit resolution but at a fraction of the frame synchronization rate (Fs/4).
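Taken together, the channel map above can be modeled as a per-frame assembly step. This sketch assumes the listed assignments (Channel 3 downlink, Channel 5 processed, Channel 7 shared at Fs/4); Channel 1's source is left unassigned here, as in the text, and the function name and dict representation are illustrative only.

```python
# Illustrative assembly of one frame's left channels under the
# assignment scheme above (0-based frame numbering assumed).

def build_frame(frame_no, downlink, processed, raw_streams):
    """Return {channel: sample} for frame number frame_no."""
    return {
        3: downlink[frame_no],                        # downlink audio at Fs
        5: processed[frame_no],                       # processed audio at Fs
        7: raw_streams[frame_no % 4][frame_no // 4],  # raw audio at Fs/4
    }
```

Channels 3 and 5 advance one sample per frame, while Channel 7 advances one sample per source every four frames.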
  • the present invention also includes a synchronization process for the optical multiplexed data 50 transmitted from the control unit 26 to the secondary audio processing units 28A-D.
  • In the process of transmitting audio signals over the optical network as described above, the secondary audio processing units 28A-D need to identify and keep track of the raw digital audio data 54A-D interleaved in the optical multiplexed audio data 50.
  • each 16-bit PCM data sample may contain a time slot identification number that corresponds to the particular transducer whose data occupies the channel at that time slot. This may be accomplished by assigning at least one least significant bit (LSB) in the 16-bit PCM data sample for the time slot identification, as illustrated in FIG. 6.
  • the first bit 74 in the 16-bit PCM data sample is the most significant bit (MSB) representing the sign bit.
  • the next thirteen bits 76 would then represent the PCM data having a maximum dynamic range of 78.36 dB.
  • the last two bits 78 would represent the time slot identification.
  • the identification bits could identify a specific transducer number.
  • the secondary audio processing units 28A-D could then use the sample structure and format to determine the time slot or transducer number when de-multiplexing the optical multiplexed data 50.
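The bit layout described above can be sketched as a pack/unpack pair. The patent fixes only the bit positions (sign in the MSB, 13 data bits, two identification LSBs); the sign-and-magnitude coding of the value and the function names below are assumptions made for illustration.

```python
# Hypothetical pack/unpack of the 16-bit sample: bit 15 = sign,
# bits 14..2 = 13 bits of PCM magnitude, bits 1..0 = time-slot ID.
# Sign-and-magnitude coding of the value is an assumption here.

def pack_sample(value, slot_id):
    """Pack a signed 13-bit PCM value and a 2-bit slot ID into 16 bits."""
    assert abs(value) <= 0x1FFF and 0 <= slot_id <= 3
    sign = 1 if value < 0 else 0
    return (sign << 15) | (abs(value) << 2) | slot_id

def unpack_sample(word):
    """Recover (value, slot_id) from a packed 16-bit sample."""
    magnitude = (word >> 2) & 0x1FFF
    slot_id = word & 0x3
    return (-magnitude if word >> 15 else magnitude, slot_id)
```

A secondary audio processing unit would read the two LSBs of each sample to learn which transducer's data it holds, then use the remaining bits as the audio value.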
  • one of the left channels can be assigned as a control channel (as described above) and a portion of, or the entire width of, the control channel could be used to send information to the secondary audio processing units 28A-D.
  • Information contained in the control channel may include data to inform all secondary audio processing units 28A-D on the optical network 32 of the various characteristics and assignments made within the other audio channels.
  • the information for the control channel may be generated by the microprocessor in a control data stream. If the information within a control data stream cannot fit within a 16-bit sample, the content in the control channel may then be distributed over a number of synchronous audio data frames and reassembled by the secondary audio processing unit 28A-D.
  • the stream of control data 80 may include a start bits field 82, a used channel field 84, a series of channel information fields 86, and a CRC or checksum field 88.
  • the start bits field 82 may include a series of bits such as 12 bits that provide a unique pattern that identifies the start of the control channel.
  • the CRC or checksum field 88 can be used according to known means for verifying that the data received has not been corrupted.
  • the used channel field 84 may contain a series of bits that identifies the number of audio channels being used by the control unit 26. If there are eight channels in a frame, as shown in FIGS. 3 and 4, then the used channel field 84 may be 8 bits wide, where each bit represents one of the channels. The bit for a particular channel indicates whether that channel is in use, such as 0 for a channel not being used and 1 for a channel being used.
  • the channel information fields 86 may each contain a series of bits that identify information about the type of audio being transmitted over the optical network 32 by the control unit 26.
  • the bits could provide information such as: an audio channel identification; an audio type identifier (e.g., raw, processed, stereo, mono, left channel, right channel); a channel transmission rate (e.g., 1/2 rate, 1/3 rate, 1/4 rate, 1/5 rate, 1/6 rate); whether the audio is from a single stream or from a transducer array; a transducer array identification; a number of audio transducer streams per channel; the serial identification of the first transducer in the channel; and a microphone status (e.g., active, inactive).
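Two of the control-stream fields described above behave as sketched below. The bit-to-channel mapping (bit 0 for Channel 1) and the modulo-256 checksum are assumptions chosen for illustration, since the text fixes neither; the function names are likewise hypothetical.

```python
# Illustrative handling of two control-stream fields: an 8-bit
# used-channel bitmask (one bit per channel, 1 = in use) and a
# simple checksum over the control bytes. Exact widths and the
# checksum algorithm are assumptions, not taken from the patent.

def used_channels(mask):
    """List the channel numbers flagged as in use (bit 0 -> channel 1)."""
    return [ch + 1 for ch in range(8) if (mask >> ch) & 1]

def checksum(control_bytes):
    """Modulo-256 byte sum usable as the trailing checksum field."""
    return sum(control_bytes) % 256
```

For example, a mask of 0b01010101 reports the four left channels 1, 3, 5, and 7 as in use.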
  • the control unit and method removes the dependency upon a single sample rate and the limited number of independent audio channels that exist in today's systems. It allows simultaneous transmission of multiple independent microphones or other audio channels to various secondary audio processing units such as a voice recognition unit, a speech-to-text unit, an in-vehicle wireless transceiver, and an audio system for broadcasting audio over the vehicle speakers.
  • the control unit and method permits a single connection point to the optical network for all the distributed microphones, microphone arrays, and other audio-based devices in the vehicle. This allows the audio channels from each unit to be transmitted simultaneously over the optical network.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Optical Communication System (AREA)
  • Time-Division Multiplex Systems (AREA)

Claims (10)

  1. Control unit (26) for distributing multiplexed audio data over an optical network (32), the control unit (26) being characterized by:
    an audio sampler (36) which samples a plurality of electrical signals (42A-D) from transducers (22A-D) and generates a plurality of raw audio data streams (54A-D) from the electrical signals (42A-D), the audio sampler (36) being capable of sampling the electrical signals (42A-D) at a fraction of a frame synchronization rate (Fs) of the optical network (32);
    a microprocessor (34) having an audio processor function (56) and a multiplexer function (58), wherein the audio processor function (56) is capable of processing the raw audio data streams (54A-D) to generate a single processed audio data stream (62) at the frame synchronization rate (Fs) of the optical network (32), the multiplexer function (58) is capable of generating a multiplexed audio data stream (66) having a plurality of frames (70), each frame (70) having a plurality of time-multiplexed channels, wherein a first channel within a first frame is assigned to transmit the plurality of raw audio data streams (54A-D) and a second channel within each frame is assigned to transmit the processed audio data stream (62), the multiplexer function being capable of multiplexing the raw audio data streams (54A-D) and the single processed audio data stream (62); and an optical network interface (40) which receives the multiplexed audio data stream (66) from the microprocessor (34) and generates an optical multiplexed audio data stream (50) based on the multiplexed audio data stream (66) from the microprocessor (34).
  2. Control unit according to claim 1, characterized in that the frame synchronization rate (Fs) is provided by means of the optical network interface (40).
  3. Control unit according to claim 2, characterized in that the fraction of the frame synchronization rate of the optical network (32) at which the audio sampler (36) samples is one quarter (Fs/4).
  4. Control unit according to claim 1, further comprising a wireless device interface (38) for connection to a wireless communication device (46) and a sample rate converter (61), wherein the microprocessor (34) is capable of receiving audio data from the wireless device interface (38), and the sample rate converter (61) converts the sampling rate of the downlink audio data stream to generate a downlink audio data stream at the frame synchronization rate (Fs) of the optical network (32), the multiplexed audio data stream having the plurality of frames, wherein a third channel within each frame is assigned for transmitting the downlink audio data stream.
  5. Control unit according to claim 1, characterized in that the first channel within each frame contains a data sample, the data sample having at least two bits which identify a time slot within the first channel corresponding to the raw audio data streams (54A-D).
  6. Control unit according to claim 1, characterized in that the multiplexer function (58) of the microprocessor (34) is further capable of generating a multiplexed audio data stream (66) having a plurality of frames (70), wherein at least one channel within each frame is assigned for transmitting control data.
  7. Control unit according to claim 6, characterized in that the control data include information for informing a secondary audio processing unit of the properties or characteristics of the first channel within each frame.
  8. Method for distributing multiplexed audio data over an optical network (32), the method being characterized by the steps of:
    sampling (36) a plurality of electrical signals (42A-D) from transducers (22A-D) at a fraction of a frame synchronization rate (Fs) of the optical network (32) and generating a plurality of raw audio data streams (54A-D);
    processing (56) the plurality of raw audio data streams (54A-D) and generating a single processed audio data stream (62) at a frame synchronization rate (Fs) of the optical network;
    multiplexing the plurality of raw audio data streams (54A-D) with the single processed audio data stream (62) and generating a multiplexed audio data stream (66) having a plurality of frames (70), each frame (70) having a plurality of time-multiplexed channels, wherein a first channel within each frame is assigned for transmitting the plurality of raw audio data streams (54A-D) and a second channel within each frame is assigned for transmitting the processed audio data stream (62); and
    converting (40) the multiplexed audio data stream (66) into an optical multiplexed data stream (50) for transmission over the optical network (32).
  9. Method according to claim 8, characterized in that the fraction of the frame synchronization rate (Fs) of the optical network (32) is one quarter (Fs/4).
  10. Method according to claim 8, characterized in that the processing step includes converting the sampling rate of the raw downlink audio data stream, and further comprising the steps of:
    receiving a downlink audio data stream from a wireless communication device (38); and
    multiplexing the plurality of raw audio data streams, the single processed audio data stream, and the downlink audio data stream, and generating the multiplexed audio data stream having the plurality of frames, wherein a third channel within each frame is assigned for transmitting the downlink audio data stream.
EP03103849A 2002-10-18 2003-10-17 Control unit and method for transmitting audio signals over an optical network Expired - Lifetime EP1411517B1 (de)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
1999-05-06
US41936702P 2002-10-18 2002-10-18
US419367P 2002-10-18
US10/677,910 US7519085B2 (en) 2002-10-18 2003-10-02 Control unit for transmitting audio signals over an optical network and methods of doing the same

Publications (2)

Publication Number Publication Date
EP1411517A1 EP1411517A1 (de) 2004-04-21
EP1411517B1 true EP1411517B1 (de) 2006-06-21

Family

ID=32073524

Family Applications (1)

Application Number Title Priority Date Filing Date
EP03103849A Expired - Lifetime EP1411517B1 (de) Control unit and method for transmitting audio signals over an optical network

Country Status (4)

Country Link
US (1) US7519085B2 (de)
EP (1) EP1411517B1 (de)
AT (1) ATE331285T1 (de)
DE (1) DE60306293T2 (de)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005123892A (ja) * 2003-10-16 2005-05-12 Matsushita Electric Ind Co Ltd Data transmission device, data transmission system, and initialization method therefor
US7801283B2 (en) * 2003-12-22 2010-09-21 Lear Corporation Method of operating vehicular, hands-free telephone system
US7567908B2 (en) 2004-01-13 2009-07-28 International Business Machines Corporation Differential dynamic content delivery with text display in dependence upon simultaneous speech
US7539889B2 (en) * 2005-12-30 2009-05-26 Avega Systems Pty Ltd Media data synchronization in a wireless network
US7454658B1 (en) * 2006-02-10 2008-11-18 Xilinx, Inc. In-system signal analysis using a programmable logic device
EP1853005A1 (de) * 2006-05-01 2007-11-07 Anagram Technologies SA Method and device for transmitting information over a network to destination devices
US20080233892A1 (en) * 2007-03-19 2008-09-25 Bojko Marholev Method and system for an integrated vco and local oscillator architecture for an integrated fm transmitter and fm receiver
DE102010034237A1 (de) 2010-08-07 2011-10-06 Daimler Ag Microphone system and method for generating microphone directivity with respect to an acoustic source within a motor vehicle
US9313336B2 (en) 2011-07-21 2016-04-12 Nuance Communications, Inc. Systems and methods for processing audio signals captured using microphones of multiple devices
US9601117B1 (en) * 2011-11-30 2017-03-21 West Corporation Method and apparatus of processing user data of a multi-speaker conference call
US9344811B2 (en) * 2012-10-31 2016-05-17 Vocalzoom Systems Ltd. System and method for detection of speech related acoustic signals by using a laser microphone
KR101757531B1 (ko) * 2015-05-07 2017-07-13 Korea Electronics Technology Institute Vehicle optical-network-based audio system and broadcasting method thereof

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0418396B1 (de) 1989-03-16 1998-06-03 Fujitsu Limited Video/audio multiplex transmission system
US5388124A (en) * 1992-06-12 1995-02-07 University Of Maryland Precoding scheme for transmitting data using optimally-shaped constellations over intersymbol-interference channels
US5319707A (en) * 1992-11-02 1994-06-07 Scientific Atlanta System and method for multiplexing a plurality of digital program services for transmission to remote locations
US6009305A (en) 1993-12-28 1999-12-28 Hitachi Denshi Kabushiki Kaisha Digital video signal multiplex transmission system
US5600365A (en) 1994-01-28 1997-02-04 Sony Corporation Multiple audio and video signal providing apparatus
US6169749B1 (en) 1997-12-18 2001-01-02 Alcatel Usa Sourcing L.P. Method of sequencing time division multiplex (TDM) cells in a synchronous optical network (sonet) frame
AU3279699A (en) 1998-02-25 1999-09-15 Auckland Uniservices Limited System and method for demultiplexing in optical communication systems
JP2000287286A (ja) 1999-03-31 2000-10-13 Kenwood Corp Optical microphone device
EP1068997B1 (de) 1999-07-14 2006-03-15 Ford Motor Company Information, communication and entertainment system for a vehicle
US6356550B1 (en) 1999-07-30 2002-03-12 Mayan Networks Corporation Flexible time division multiplexed bus using sonet formatting
WO2001031972A1 (en) 1999-10-22 2001-05-03 Andrea Electronics Corporation System and method for adaptive interference canceling
JP2001274772A (ja) 2000-03-24 2001-10-05 Kddi Corp TDM optical multiplexer, TDM optical demultiplexer, WDM/TDM converter, and TDM/WDM converter
JP4502500B2 (ja) 2000-12-06 2010-07-14 Fujitsu Limited Optical time-division multiplexed signal processing device and processing method, and optical time-division multiplexed signal receiving device
EP1223696A3 (de) 2001-01-12 2003-12-17 Matsushita Electric Industrial Co., Ltd. System for transmitting digital audio data according to the MOST method
US7076204B2 (en) * 2001-10-30 2006-07-11 Unwired Technology Llc Multiple channel wireless communication system

Also Published As

Publication number Publication date
US20040076435A1 (en) 2004-04-22
ATE331285T1 (de) 2006-07-15
EP1411517A1 (de) 2004-04-21
US7519085B2 (en) 2009-04-14
DE60306293T2 (de) 2007-05-10
DE60306293D1 (de) 2006-08-03

Similar Documents

Publication Publication Date Title
EP1411517B1 (de) Control unit and method for transmitting audio signals over an optical network
US20060235552A1 (en) Method and system for media content data distribution and consumption
JPH0435941B2 (de)
JPH04276930A (ja) Daisy-chain multiplexer
US20060217061A1 (en) RF amplification system and method
EP1646215A1 (de) Mobile stereo terminal and method for making a call with a stereo mobile telephone
US5038342A (en) TDM/FDM communication system supporting both TDM and FDM-only communication units
CA2422086A1 (en) Networked sound masking system with centralized sound masking generation
HK1035820A1 (en) Method and apparatus of supporting an audio protocol in a network.
JPH0752867B2 (ja) Multi-channel PCM music broadcasting system
US11175882B2 (en) Portable system for processing audio signals from multiple sources
CN115348241A (zh) Microphone cascading method
KR100564068B1 (ko) Transmission method and communication system using the method
JP2004236283A (ja) Audio and data multiplex transmission device and wireless audio system
US20020116198A1 (en) Method for transmitting synchronization data in audio and/or video processing systems
CN1290377C (zh) Wireless audio system capable of multiplexed transmission of voice and data
US20080123563A1 (en) Conference Voice Station And Conference System
JPS61239736A (ja) Bit stealing system
JP2023547200A (ja) DECT-based high-density wireless audio system
US10445053B2 (en) Processing audio signals
KR100325637B1 (ko) Service method for heterogeneous mobile communication systems and device therefor
JP2002290339A (ja) Audio signal transmission method and audio signal transmission system
JP2907661B2 (ja) Digital multiplex transmission device
JP2000188610A (ja) Packet transmission system for audio signals
JPH04145786A (ja) Video and audio transmission device

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL LT LV MK

17P Request for examination filed

Effective date: 20041014

17Q First examination report despatched

Effective date: 20041112

AKX Designation fees paid

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT RO SE SI SK TR

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT RO SE SI SK TR

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT;WARNING: LAPSES OF ITALIAN PATENTS WITH EFFECTIVE DATE BEFORE 2007 MAY HAVE OCCURRED AT ANY TIME BEFORE 2007. THE CORRECT EFFECTIVE DATE MAY BE DIFFERENT FROM THE ONE RECORDED.

Effective date: 20060621

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060621

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060621

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060621

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060621

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060621

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060621

Ref country code: CH

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060621

Ref country code: LI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060621

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060621

Ref country code: BE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060621

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REF Corresponds to:

Ref document number: 60306293

Country of ref document: DE

Date of ref document: 20060803

Kind code of ref document: P

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060921

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060921

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20061002

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20061017

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20061031

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: IT

Payment date: 20061031

Year of fee payment: 4

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20061121

NLV1 Nl: lapsed or annulled due to failure to fulfill the requirements of art. 29p and 29m of the patents act
REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

ET Fr: translation filed
PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20070322

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060922

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060921

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060621

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20061222

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060621

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20061017

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060621

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20071017

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20121031

Year of fee payment: 10

Ref country code: DE

Payment date: 20121031

Year of fee payment: 10

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20121019

Year of fee payment: 10

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20131017

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20131017

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 60306293

Country of ref document: DE

Effective date: 20140501

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST

Effective date: 20140630

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20140501

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20131031