EP2842123B1 - Communication system for combined speech recognition, hands-free communication and in-vehicle communication - Google Patents

Communication system for combined speech recognition, hands-free communication and in-vehicle communication

Info

Publication number
EP2842123B1
EP2842123B1 (application EP12723791.5A)
Authority
EP
European Patent Office
Prior art keywords
speech
application
system users
input signals
processing module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP12723791.5A
Other languages
English (en)
French (fr)
Other versions
EP2842123A1 (de)
Inventor
Markus Buck
Tim Haulick
Timo Matheja
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nuance Communications Inc
Original Assignee
Nuance Communications Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nuance Communications Inc
Publication of EP2842123A1
Application granted
Publication of EP2842123B1
Status: Active
Anticipated expiration

Links

Images

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 - Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 - Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0316 - Speech enhancement, e.g. noise reduction or echo cancellation by changing the amplitude
    • G10L21/0364 - Speech enhancement, e.g. noise reduction or echo cancellation by changing the amplitude for improving intelligibility
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 - Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 - Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0272 - Voice signal separating
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M3/00 - Automatic or semi-automatic exchanges
    • H04M3/42 - Systems providing special services or facilities to subscribers
    • H04M3/56 - Arrangements for connecting several subscribers to a common circuit, i.e. affording conference facilities
    • H04M3/568 - Arrangements for connecting several subscribers to a common circuit, i.e. affording conference facilities audio processing specific to telephonic conferencing, e.g. spatial distribution, mixing of participants
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 - Circuits for transducers, loudspeakers or microphones
    • H04R3/005 - Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 - Circuits for transducers, loudspeakers or microphones
    • H04R3/12 - Circuits for transducers, loudspeakers or microphones for distributing signals to two or more loudspeakers
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/28 - Constructional details of speech recognition systems
    • G10L15/32 - Multiple recognisers used in sequence or in parallel; Score combination systems therefor, e.g. voting systems
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 - Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 - Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 - Noise filtering
    • G10L21/0216 - Noise filtering characterised by the method used for estimating noise
    • G10L2021/02161 - Number of inputs available containing the signal or the noise to be suppressed
    • G10L2021/02166 - Microphone arrays; Beamforming
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M1/00 - Substation equipment, e.g. for use by subscribers
    • H04M1/60 - Substation equipment, e.g. for use by subscribers including speech amplifiers
    • H04M1/6033 - Substation equipment, e.g. for use by subscribers including speech amplifiers for providing handsfree use or a loudspeaker mode in telephone sets
    • H04M1/6041 - Portable telephones adapted for handsfree use
    • H04M1/6075 - Portable telephones adapted for handsfree use adapted for handsfree use in a vehicle
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2227/00 - Details of public address [PA] systems covered by H04R27/00 but not provided for in any of its subgroups
    • H04R2227/009 - Signal processing in [PA] systems to enhance the speech intelligibility
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2499/00 - Aspects covered by H04R or H04S not otherwise provided for in their subgroups
    • H04R2499/10 - General applications
    • H04R2499/13 - Acoustic transducers and sound field adaptation in vehicles
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R27/00 - Public address systems

Definitions

  • The invention relates to speech signal processing, particularly in an automobile.
  • Multiple speakers can be supported, for example, by seat-dedicated microphones, microphone arrays and/or multiple loudspeakers within the vehicle.
  • In HF telephone mode, several passengers in the vehicle may take part in a conference call, where enhancement of the speech signals for the different speakers may improve the signal quality.
  • An adaptive mixer may pass only the audio signal of the currently active speaker through to the far-end listener (a minimal frame-wise sketch of such a mixer appears after this list).
  • In VR mode, normally only one speaker (e.g., the driver) is supposed to operate the system by voice, whereas other persons are considered interfering speakers.
  • The speech signal can also be selectively extracted by multi-channel processing.
  • The operating mode for ICC has to make sure that the persons in the car (or room) can understand one another.
  • AUDINO D ET AL: "Wireless Audio Communication Network for In-Vehicle Access of Infotainment Services in Motorcycles", 2006 IEEE 17th International Symposium on Personal, Indoor and Mobile Radio Communications, IEEE, 1 September 2006, pages 1-5, discloses a wireless audio communication system that makes available to the driver and the passenger of a motorcycle a wide range of audio services, such as an intercom, listening to FM radio or navigational messages from a GPS navigator, and placing phone calls by means of a cellular phone.
  • The system uses a multi-point Bluetooth audio network formed and managed by the audio communication unit placed on the vehicle. This unit acts as an audio access point to which the users, equipped with standard Bluetooth headsets, independently connect, register and access the available audio services.
  • EP 1748636 A1 discloses an indoor communication system mounted in a passenger compartment, comprising at least one microphone, at least one loudspeaker and a signal processing means, the signal processing means having an input for receiving an acoustic input signal from the at least one microphone and an output for providing an acoustic output signal based on the acoustic input signal to at least one of the at least one loudspeaker and a mobile audio device comprising a microphone and/or a loudspeaker, wherein the signal processing means comprises an input for receiving an acoustic input signal from the microphone of the mobile audio device and an output for providing an acoustic output signal based on the acoustic input signal to the loudspeaker of the mobile audio device.
  • WO 2008/061205 A2 discloses an integrated vehicle communication system comprising a module for assisting with occupant-to-occupant communications in a vehicle.
  • The module includes a first interface configured to receive a first voice signal from a microphone, and a second interface configured to send a second signal to the vehicle's audio system.
  • The module further includes a processing system configured to receive a third signal representing the voice from the first interface, the processing system being configured to provide a fourth signal representing the voice to the second interface.
  • The processing system is further configured to switch to a communication assistance mode of operation in which the processing system causes the voice to be output from at least one audio output device mounted in the rear half of the vehicle at a significantly higher volume relative to voice output from at least one audio output device mounted in the front half of the vehicle.
  • The present invention provides a multi-mode speech communication system as defined in claim 1.
  • The invention also provides a computer-implemented method as defined in claim 8 and a computer program product as defined in claim 15.
  • The speech applications may include one or more of a hands-free telephone application, an in-car communication system, and an automatic speech recognition (ASR) application.
  • The system may operate multiple different speech applications in parallel. Dynamically controlling the processing of the microphone input signals and the loudspeaker output signals may be performed by a control module in response to control mechanism inputs from the system users.
  • The speech service compartment may be the passenger compartment of an automobile.
  • Embodiments of the present invention use multiple different operating modes for multiple different speech applications - for example, any of VR, HF, and ICC - at the same time using the same microphones and loudspeakers.
  • State-of-the-art multi-microphone methods such as beamforming, cross-talk compensation, or blind source separation allow the speech signals of simultaneously speaking persons to be selectively extracted.
  • For each speaker, a corresponding enhanced input signal can be calculated that contains only the signals of that speaker, with noise removed and the signals of all the other speakers canceled (an illustrative cross-talk cancellation sketch appears after this list).
  • A generic signal mixer can group specific sets of these enhanced input signals to generate application output signals that are further used for one or more of the different speech applications in one or more of the different operating modes (a grouping sketch appears after this list).
  • FIG. 1 shows a vehicle speech communication system 100, which may include hardware and/or software running on one or more computer processor devices.
  • A speech service compartment, such as a passenger compartment 101 in an automobile, holds multiple passengers who are system users 105.
  • The passenger compartment 101 also includes multiple input microphones 102 that develop microphone input signals from the system users 105 to the speech communication system 100.
  • Multiple output loudspeakers 103 develop loudspeaker output signals from the speech communication system 100 to the system users 105.
  • A signal processing module 104 is in communication with multiple speech applications 108 and an in-car-communication (ICC) system 109, and includes an input processing module 106 and an output processing module 107.
  • The input processing module 106 processes the microphone input signals to produce user input signals to the speech applications 108 and the ICC system 109 for each system user 105.
  • The user input signals are enhanced to maximize speech from that system user 105 and to minimize other audio sources, including speech from other system users 105.
  • The output processing module 107 processes application output communications from the speech applications 108 and the ICC system 109 to produce loudspeaker output signals to the output loudspeakers 103 for each system user 105, such that for each different speech processing application 108 the loudspeaker output signals are directed only to system users 105 currently active in that application.
  • A control module 110 (e.g., processor, microprocessor, microcontroller, etc.) in the signal processing module 104 dynamically controls the processing of the microphone input signals and the loudspeaker output signals to respond to changes in the system users 105 currently active in each application.
  • Audio data processed by the speech communication system 100 may be stored in a synchronous or asynchronous memory through one or more bi-directional and/or one or more uni-directional data buses.
  • Such data storage memory may be a Read Only Memory (ROM), a Random Access Memory (RAM), or any other type of volatile and/or nonvolatile storage space.
  • The signal processing module 104 may communicate through one or more input/output interfaces via wired or wireless connections using digital or analog audio data.
  • The speech communication system 100 may use one or more processing and communication protocols, for example, J1850VPW, J1850PWM, ISO, ISO9141-2, ISO14230, CAN, High Speed CAN, MOST, LIN, IDB-1394, IDB-C, D2B, Bluetooth, TTCAN, TTP, and/or FlexRay.
  • The speech communication system 100 may communicate with external elements via a separate transmitter and/or receiver or a transceiver in half-duplex or full-duplex using coded and/or uncoded data representing audio data or control data.
  • The speech communication system 100 may use one or more wireless protocols, such as Bluetooth, 802.11b, 802.11j, 802.11x, Zigbee, Ultra Wide Band, Mobile-Fi, Wireless Local Area Network (WLAN), and/or Infrared Data Transmissions, which may include the Infrared Data Association IrDA 1.0 standard, which may provide high transfer rates, or the Infrared Data Association IrDA 1.1 standard, which may provide higher transfer rates.
  • FIG. 2 shows various steps in operating a speech communication system 100 according to an embodiment of the present invention that uses multiple different operating modes for multiple system users 105 and multiple different speech applications 108 at the same time, using multiple microphones 102 and multiple loudspeakers 103.
  • When multi-user processing is active, step 201, all passengers are participants in the conference call and the ICC system is active for all seats.
  • All input microphones 102 and all output loudspeakers 103 are available for all system users 105.
  • Speech inputs from the input microphones 102 are processed by the input processing module 106 of the signal processing module 104, where the control module 110 directs them to either the current phone call in the speech applications block 108 or the ICC system 109.
  • The control module 110 also directs the operation of the output processing module 107 to direct audio signal outputs to the output loudspeakers 103 of the individual passenger system users 105.
  • The driver presses a push-to-talk button to start a speech dialog for entering a new destination address in the navigation system, step 202.
  • The signal processing module 104 switches the operating mode for the driver so that his/her voice is no longer processed for the conference call and the ICC system, step 203.
  • The input processing module 106 enhances the driver's microphone signal to remove all the other speakers' voices and compartment noise, so that the driver's voice is exclusively extracted for VR in a speech dialog to control the navigation system, step 204.
  • The system continues to process the conference call and the ICC system for the other system users 105.
  • The prompt output is optimized exclusively for the driver, whereas the signals received from HF and ICC are not played back over the driver's loudspeakers.
  • The output processing module 107 controls the output loudspeakers 103 to shape audio outputs to the driver that relate to the speech dialog for controlling the navigation system.
  • Once the speech dialog is finished, the driver is placed back into the conference call and his speech is again amplified by the ICC system for the other passengers, step 206 (a minimal routing sketch of this switch appears after this list).
  • FIG. 3 shows the signal processing module 104 in greater detail for the specific case where a conference call 308 is the sole external speech application that is currently active.
  • The internal ICC system 109 is also available in parallel.
  • The signals from all the input microphones 102 are directly available to the ICC system 109 without significant signal enhancement, and also are provided to a speech enhancement module 301 for signal enhancement for the currently active external speech applications, in this case, the conference call 308.
  • The control module 110 may or may not direct the signal enhancement module 301 to perform noise reduction, acoustic echo cancellation, etc.
  • In the specific case shown in Figure 3, where the conference call 308 is the sole external speech application, there is no need for the signal enhancement module 301 to eliminate channel cross-talk components, since the control module 110 indicates a single mixed output signal based on all the input microphones 102. Thus, in this case, no separation is needed for the signal components of the individual speaker users 105. For other specific speech applications, the specific signal processing by the speech enhancement module 301 will be applied as is most suitable.
  • The enhanced signal outputs of the signal enhancement module 301 undergo adaptive mixing 302, reflecting different background noises and different speech signal levels, exploiting diversity effects, and so on.
  • An output-side speech enhancement module 303 may provide specific output signal enhancement and optimization, including, without limitation, bandwidth extension and noise reduction for the external speech applications, in this case, the conference call 308.
  • The enhanced output of the output signal enhancement module 303 is adaptively mixed by a receiver mixer matrix 304 to produce multiple output signals for the various output loudspeakers 103 for each currently active speech application, in this case, the conference call 308.
  • An ICC up-mix matrix 305 combines the speech application loudspeaker signals from the receiver mixer matrix 304 with speaker signals from the ICC system 109 to produce the individual loudspeaker signals to the individual output loudspeakers 103 (an illustrative loudspeaker routing sketch appears after this list).
  • The individual microphone and loudspeaker signals are mapped and controlled for each different speech application, which may be operating in parallel and/or sequentially. At any one time, the number of microphone channels that are processed does not have to match the number of currently active speech applications or system users. Similarly, at any one time, the number of loudspeaker output channels that are processed and developed does not have to match the number of received signals from the different speech applications or the total number of system users.
  • The bold flow lines in Figure 4 show how the generic vehicle speech communication system 100 shown in Fig. 3 dynamically switches its operation when one of the system users 405 in the conference call 308 uses a push-to-talk button or other control mechanism to initiate a speech dialog 401 with another external speech application, such as for operating a navigation system to enter a destination address.
  • The non-bold flow lines in Fig. 4 show that the operation of the generic vehicle speech communication system 100 continues as before for the other system users 105 in the conference call 308 and using the ICC system 109. But the up-mixed enhanced input signals from the adaptive signal mixer 302 to the conference call 308 are dynamically switched to omit speech from the input microphone 102 of the switching user 405.
  • The speech enhancement module 301 dynamically switches to enhance the microphone signal from the switching user 405 to optimize it for use in the external speech dialog 401.
  • The receive-side speech enhancement module 303 now receives inputs from both currently active external speech applications, the conference call 308 and the speech dialog 401. For each such active external speech application, the receive-side speech enhancement module 303 applies appropriate enhancement processing for that application.
  • The loudspeaker output signals for the speech dialog 401 from the receiver mixer matrix 304 and the ICC up-mix matrix 305 are directed to one or more output loudspeakers 103 for the speech dialog user 405, while the loudspeaker output signals for the conference call 308 are directed to one or more output loudspeakers 103 for the conference call system users 105.
  • In this way, the operation of the generic vehicle speech communication system 100 dynamically switches to add and remove speakers from the different speech applications, which may be currently active in parallel with each other and/or sequentially.
  • The specific signal processing can change dynamically during operation to optimize the signal processing for each different speech application.
  • Embodiments of the invention may be implemented in whole or in part in any conventional computer programming language such as VHDL, SystemC, Verilog, ASM, etc.
  • Alternative embodiments of the invention may be implemented as pre-programmed hardware elements, other related components, or as a combination of hardware and software components.
  • Embodiments can be implemented in whole or in part as a computer program product for use with a computer system.
  • Such implementation may include a series of computer instructions fixed either on a tangible medium, such as a computer readable medium (e.g., a diskette, CD-ROM, ROM, or fixed disk) or transmittable to a computer system, via a modem or other interface device, such as a communications adapter connected to a network over a medium.
  • The medium may be either a tangible medium (e.g., optical or analog communications lines) or a medium implemented with wireless techniques (e.g., microwave, infrared or other transmission techniques).
  • The series of computer instructions embodies all or part of the functionality previously described herein with respect to the system.
  • Such computer instructions can be written in a number of programming languages for use with many computer architectures or operating systems. Furthermore, such instructions may be stored in any memory device, such as semiconductor, magnetic, optical or other memory devices, and may be transmitted using any communications technology, such as optical, infrared, microwave, or other transmission technologies. It is expected that such a computer program product may be distributed as a removable medium with accompanying printed or electronic documentation (e.g., shrink-wrapped software), preloaded with a computer system (e.g., on system ROM or fixed disk), or distributed from a server or electronic bulletin board over the network (e.g., the Internet or World Wide Web).
  • Embodiments of the invention may be implemented as a combination of both software (e.g., a computer program product) and hardware. Still other embodiments of the invention are implemented as entirely hardware, or entirely software (e.g., a computer program product).
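
The sketches that follow are illustrative only and are not part of the granted specification; they restate, under explicitly stated assumptions, the signal-flow ideas described above in plain Python. This first sketch illustrates the per-speaker enhancement idea (the beamforming, cross-talk compensation or blind source separation named in the description) with a deliberately simple stand-in: an NLMS adaptive filter that cancels the leakage of one seat-dedicated microphone into another. The signal names, filter length and step size are assumptions made for the example.

    # Illustrative cross-talk cancellation between two seat microphones (assumption:
    # one seat-dedicated microphone per speaker; NLMS stands in for the beamforming /
    # cross-talk compensation mentioned in the description).
    import numpy as np

    def nlms_crosstalk_cancel(target_mic, reference_mic, taps=64, mu=0.5, eps=1e-6):
        """Subtract the estimated leakage of the reference (interfering) channel."""
        weights = np.zeros(taps)
        enhanced = np.zeros_like(target_mic)
        for i in range(taps, len(target_mic)):
            ref_block = reference_mic[i - taps:i][::-1]   # most recent samples first
            estimate = weights @ ref_block                # predicted cross-talk sample
            error = target_mic[i] - estimate              # enhanced output sample
            weights += (mu / (ref_block @ ref_block + eps)) * error * ref_block  # NLMS update
            enhanced[i] = error
        return enhanced

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n = 16000                                         # one second at 16 kHz
        driver_speech = rng.standard_normal(n)            # stand-in for the driver's speech
        passenger_speech = rng.standard_normal(n)         # stand-in for the interfering speaker
        leakage = np.convolve(passenger_speech, [0.0, 0.3, 0.15])[:n]
        driver_mic = driver_speech + leakage              # driver's mic picks up the passenger
        out = nlms_crosstalk_cancel(driver_mic, passenger_speech)
        print("residual interference power:", float(np.mean((out - driver_speech) ** 2)))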
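
The generic signal mixer that groups enhanced per-speaker input signals into application output signals can be pictured as a routing table plus a mix. The sketch below assumes a dictionary-based routing and a simple equal-weight mix; the application names and seat identifiers are illustrative assumptions. With the routing shown, the driver's enhanced channel feeds only the speech dialog while the remaining seats feed the conference call and ICC, mirroring the scenario of FIG. 2.

    # Illustrative grouping of enhanced per-speaker signals into per-application
    # signals; application names, seat identifiers and equal-weight mixing are
    # assumptions made only for this example.
    import numpy as np

    def mix_for_applications(enhanced_by_user, active_users_by_app):
        """Return one mixed signal per speech application (None if no active user)."""
        app_signals = {}
        for app, users in active_users_by_app.items():
            channels = [enhanced_by_user[u] for u in users if u in enhanced_by_user]
            app_signals[app] = np.mean(channels, axis=0) if channels else None
        return app_signals

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        seats = ("driver", "front_passenger", "rear_left", "rear_right")
        enhanced = {seat: rng.standard_normal(160) for seat in seats}   # one frame per seat
        routing = {
            "hands_free_call": ("front_passenger", "rear_left", "rear_right"),
            "speech_dialog": ("driver",),        # driver pulled out for the VR dialog
            "icc": ("front_passenger", "rear_left", "rear_right"),
        }
        for app, signal in mix_for_applications(enhanced, routing).items():
            print(app, None if signal is None else signal.shape)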
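
The adaptive mixer that passes only the currently active speaker through to the far-end listener can be approximated, for illustration, by selecting in each frame the channel with the highest short-term energy. A real system would use smoothed speech-activity estimates per channel; the frame length here is an arbitrary assumption.

    # Illustrative frame-wise active-speaker selection (a stand-in for the adaptive
    # mixer that forwards only the currently active speaker to the far-end listener).
    import numpy as np

    def select_active_speaker(channels, frame_len=256):
        """channels: (n_speakers, n_samples) array; returns a mono output signal."""
        n_speakers, n_samples = channels.shape
        output = np.zeros(n_samples)
        for start in range(0, n_samples - frame_len + 1, frame_len):
            frame = channels[:, start:start + frame_len]
            energies = np.sum(frame ** 2, axis=1)          # short-term energy per channel
            output[start:start + frame_len] = frame[int(np.argmax(energies))]
        return output

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        talker_a = np.concatenate([rng.standard_normal(4096), np.zeros(4096)])  # talks first
        talker_b = np.concatenate([np.zeros(4096), rng.standard_normal(4096)])  # talks second
        mono = select_active_speaker(np.stack([talker_a, talker_b]))
        print("output samples:", mono.shape[0])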
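
The receiver mixer matrix 304 and the ICC up-mix matrix 305 can be read, for illustration, as gain matrices applied to the application receive signals, with the ICC contribution added only for the seats still taking part in ICC. The routing gains, loudspeaker layout and signal lengths below are assumptions for the example, not values from the patent.

    # Illustrative loudspeaker routing: a gain matrix maps application receive signals
    # to loudspeakers (receiver mixer matrix), and the ICC signal is added for the
    # seats that still use ICC (ICC up-mix). Gains and layout are assumptions.
    import numpy as np

    LOUDSPEAKERS = ("front_left", "front_right", "rear_left", "rear_right")
    APPLICATIONS = ("speech_dialog", "hands_free_call")

    def route_to_loudspeakers(app_outputs, gain_matrix, icc_signal, icc_gains):
        """app_outputs: dict app -> 1-D array; gain_matrix: (n_speakers, n_apps)."""
        apps = np.stack([app_outputs[a] for a in APPLICATIONS])   # (n_apps, n_samples)
        speaker_signals = gain_matrix @ apps                      # receiver mixer matrix
        speaker_signals += np.outer(icc_gains, icc_signal)        # ICC up-mix
        return dict(zip(LOUDSPEAKERS, speaker_signals))

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        outputs = {app: rng.standard_normal(160) for app in APPLICATIONS}
        icc = rng.standard_normal(160)
        gains = np.array([[1.0, 0.0],      # dialog prompts only to the driver's speaker
                          [0.0, 1.0],      # conference call to the remaining speakers
                          [0.0, 1.0],
                          [0.0, 1.0]])
        icc_gains = np.array([0.0, 1.0, 1.0, 1.0])                # driver excluded from ICC
        for name, signal in route_to_loudspeakers(outputs, gains, icc, icc_gains).items():
            print(name, signal.shape)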
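
Finally, the dynamic switching performed by the control module 110 around a push-to-talk event (steps 202 to 206 above) amounts to editing a routing table that records which applications each user's channel currently feeds. The class and method names below are illustrative assumptions; only the re-routing behaviour follows the description. Running the example prints the conference-call participants with and without the driver, mirroring steps 203 and 206.

    # Illustrative routing control around a push-to-talk event.
    class RoutingControl:
        def __init__(self, users):
            # Initially every user takes part in the conference call and in ICC.
            self.routing = {user: {"hands_free_call", "icc"} for user in users}

        def start_speech_dialog(self, user):
            # Push-to-talk: the user's channel now feeds only the speech dialog.
            self.routing[user] = {"speech_dialog"}

        def end_speech_dialog(self, user):
            # Dialog finished: back into the conference call and ICC.
            self.routing[user] = {"hands_free_call", "icc"}

        def active_users(self, application):
            return sorted(u for u, apps in self.routing.items() if application in apps)

    if __name__ == "__main__":
        control = RoutingControl(["driver", "front_passenger", "rear_left", "rear_right"])
        control.start_speech_dialog("driver")
        print("call participants during the dialog:", control.active_users("hands_free_call"))
        control.end_speech_dialog("driver")
        print("call participants afterwards:", control.active_users("hands_free_call"))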

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Otolaryngology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Quality & Reliability (AREA)
  • Telephone Function (AREA)

Claims (15)

  1. A multi-mode speech communication system (100) with a plurality of different operating modes, each operating mode being associated with one of a plurality of different speech applications (108; 308, 401), the system comprising:
    a speech service compartment (101) containing a plurality of system users (105);
    a plurality of input microphones (102) in the service compartment that receive speech from the system users as an input;
    a plurality of output loudspeakers (103) in the service compartment that provide audio outputs to the system users;
    a signal processing module (104) in communication with the speech applications and including:
    a. an input processing module (106) that processes microphone input signals received from the plurality of input microphones to produce a set of user input signals for each speech application limited to system users currently active in that speech application, and
    b. an output processing module (107) that processes application output communications from the speech applications to produce loudspeaker output signals for providing the audio outputs to the system users, wherein for each different speech application the loudspeaker output signals are directed only to system users currently active in that speech application;
    wherein the signal processing module (104) dynamically controls the processing of the microphone input signals and the loudspeaker output signals to respond to changes in the system users currently active in each speech application, characterized in that the signal processing module can operate in a plurality of the different operating modes at the same time, and the signal processing module is adapted to enhance user input signals for each speech application so as to maximize speech from the system user of that speech application and to minimize audio sources other than those associated with that speech application.
  2. A system according to claim 1, wherein the speech applications include a hands-free telephone application.
  3. A system according to claim 1, wherein the speech applications include an in-car communication system.
  4. A system according to claim 1, wherein the speech applications include an automatic speech recognition (ASR) application.
  5. A system according to claim 1, wherein the system operates a plurality of different speech applications in parallel.
  6. A system according to claim 1, wherein the speech service compartment (101) is the passenger compartment of an automobile.
  7. A system according to claim 1, wherein the signal processing module (104) further comprises a control module (110) for dynamically controlling the processing of the microphone input signals and the loudspeaker output signals in response to control mechanism inputs from the system users.
  8. A computer-implemented method using one or more computer processes for multi-mode speech communication using a plurality of different operating modes, each operating mode being associated with one of a plurality of different speech applications (108; 308, 401), the method comprising:
    receiving speech from a plurality of system users (105) in a service compartment (101) at a plurality of microphones (102) to produce a plurality of microphone input signals;
    processing the microphone input signals with an input processing module (106) to produce a set of user input signals for each speech application limited to system users currently active in that speech application; and
    processing application output communications from the speech applications with an output processing module (107) to produce a plurality of loudspeaker output signals provided to a plurality of output loudspeakers (103) in the service compartment, wherein for each different speech application the loudspeaker output signals are directed only to system users currently active in that speech application;
    wherein the processing of the microphone input signals and the loudspeaker output signals is dynamically controlled to respond to changes in the system users currently active in each speech application, characterized in that the processing modules can operate in a plurality of the different operating modes at the same time, and processing the microphone input signals comprises enhancing user input signals for each speech application so as to maximize speech from the system user of that speech application and to minimize audio sources other than those associated with that speech application.
  9. A method according to claim 8, wherein the speech applications include a hands-free telephone application.
  10. A method according to claim 8, wherein the speech applications include an in-car communication system.
  11. A method according to claim 8, wherein the speech applications include an automatic speech recognition (ASR) application.
  12. A method according to claim 8, wherein a plurality of different speech applications operate in parallel.
  13. A method according to claim 8, wherein the speech service compartment (101) is the passenger compartment of an automobile.
  14. A method according to claim 8, wherein the dynamic control of the processing of the microphone input signals and the loudspeaker output signals is performed in response to control mechanism inputs from the system users.
  15. A computer program product encoded in a non-transitory computer-readable medium for multi-mode speech communication using a plurality of different operating modes, each operating mode being associated with one of a plurality of different speech applications (108; 308, 401), the product comprising program code for performing the method according to any of claims 8 to 14.
EP12723791.5A 2012-05-16 2012-05-16 Communication system for combined speech recognition, hands-free communication and in-vehicle communication Active EP2842123B1 (de)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2012/038070 WO2013172827A1 (en) 2012-05-16 Speech communication system for combined voice recognition, hands-free telephony and in-car communication

Publications (2)

Publication Number Publication Date
EP2842123A1 EP2842123A1 (de) 2015-03-04
EP2842123B1 true EP2842123B1 (de) 2019-10-16

Family

ID=46168633

Family Applications (1)

Application Number Title Priority Date Filing Date
EP12723791.5A Active EP2842123B1 (de) 2012-05-16 2012-05-16 Communication system for combined speech recognition, hands-free communication and in-vehicle communication

Country Status (3)

Country Link
US (2) US9620146B2 (de)
EP (1) EP2842123B1 (de)
WO (1) WO2013172827A1 (de)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9620146B2 (en) 2012-05-16 2017-04-11 Nuance Communications, Inc. Speech communication system for combined voice recognition, hands-free telephony and in-car communication
US20140270241A1 (en) * 2013-03-15 2014-09-18 CSR Technology, Inc Method, apparatus, and manufacture for two-microphone array speech enhancement for an automotive environment
US9431013B2 (en) * 2013-11-07 2016-08-30 Continental Automotive Systems, Inc. Co-talker nulling for automatic speech recognition systems
JP6318621B2 (ja) * 2014-01-06 2018-05-09 株式会社デンソー 音声処理装置、音声処理システム、音声処理方法、音声処理プログラム
US9838782B2 (en) * 2015-03-30 2017-12-05 Bose Corporation Adaptive mixing of sub-band signals
CN105913844A (zh) * 2016-04-22 2016-08-31 乐视控股(北京)有限公司 车载语音获取方法及装置
FR3062534A1 (fr) * 2017-01-30 2018-08-03 Bodysens Procede, terminal et systeme permettant une communication vocale full-duplex ou de donnees sur un reseau autonome et une connexion directe avec d'autres moyens de communication sur d'autres reseaux
US10546655B2 (en) 2017-08-10 2020-01-28 Nuance Communications, Inc. Automated clinical documentation system and method
US11316865B2 (en) 2017-08-10 2022-04-26 Nuance Communications, Inc. Ambient cooperative intelligence system and method
GB2567013B (en) * 2017-10-02 2021-12-01 Icp London Ltd Sound processing system
US10291996B1 (en) * 2018-01-12 2019-05-14 Ford Global Tehnologies, LLC Vehicle multi-passenger phone mode
EP3534596B1 (de) * 2018-03-02 2022-10-26 Nokia Technologies Oy Vorrichtung und zugehörige verfahren für telekommunikation
US11250382B2 (en) 2018-03-05 2022-02-15 Nuance Communications, Inc. Automated clinical documentation system and method
WO2019173333A1 (en) 2018-03-05 2019-09-12 Nuance Communications, Inc. Automated clinical documentation system and method
EP3762931A4 (de) 2018-03-05 2022-05-11 Nuance Communications, Inc. System und verfahren zur überprüfung von automatisierter klinischer dokumentation
CN109545230B (zh) 2018-12-05 2021-10-19 百度在线网络技术(北京)有限公司 车辆内的音频信号处理方法和装置
US11043207B2 (en) 2019-06-14 2021-06-22 Nuance Communications, Inc. System and method for array data simulation and customized acoustic modeling for ambient ASR
US11216480B2 (en) 2019-06-14 2022-01-04 Nuance Communications, Inc. System and method for querying data points from graph data structures
US11227679B2 (en) 2019-06-14 2022-01-18 Nuance Communications, Inc. Ambient clinical intelligence system and method
US11531807B2 (en) 2019-06-28 2022-12-20 Nuance Communications, Inc. System and method for customized text macros
CA3148488A1 (en) * 2019-09-02 2021-03-11 Shenbin ZHAO Vehicle avatar devices for interactive virtual assistant
US11670408B2 (en) 2019-09-30 2023-06-06 Nuance Communications, Inc. System and method for review of automated clinical documentation
US10999419B1 (en) * 2020-06-23 2021-05-04 Harman International Industries, Incorporated Systems and methods for in-vehicle voice calls
US11222103B1 (en) 2020-10-29 2022-01-11 Nuance Communications, Inc. Ambient cooperative intelligence system and method
DE102021207437A1 (de) * 2021-07-13 2023-01-19 Hyundai Motor Company Verfahren und System zum Schutz der Privatspähre bei der Vorbereitung eines Telefongesprächs zwischen einem Fahrzeuginsassen eines Fahrzeugs und einem entfernten Gesprächspartner

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6420975B1 (en) * 1999-08-25 2002-07-16 Donnelly Corporation Interior rearview mirror sound processing system
US6192339B1 (en) * 1998-11-04 2001-02-20 Intel Corporation Mechanism for managing multiple speech applications
JP2001075594A (ja) * 1999-08-31 2001-03-23 Pioneer Electronic Corp 音声認識システム
US7047192B2 (en) * 2000-06-28 2006-05-16 Poirier Darrell A Simultaneous multi-user real-time speech recognition system
US6230138B1 (en) * 2000-06-28 2001-05-08 Visteon Global Technologies, Inc. Method and apparatus for controlling multiple speech engines in an in-vehicle speech recognition system
EP1301015B1 (de) 2001-10-05 2006-01-04 Matsushita Electric Industrial Co., Ltd. Freisprecheinrichtung zur mobilen Kommunikation im Fahrzeug
DE10339973A1 (de) * 2003-08-29 2005-03-17 Daimlerchrysler Ag Intelligentes akustisches Mikrofon-Frontend mit Spracherkenner-Feedback
US7340395B2 (en) * 2004-04-23 2008-03-04 Sap Aktiengesellschaft Multiple speech recognition engines
US20120253823A1 (en) * 2004-09-10 2012-10-04 Thomas Barton Schalk Hybrid Dialog Speech Recognition for In-Vehicle Automated Interaction and In-Vehicle Interfaces Requiring Minimal Driver Processing
ATE415048T1 (de) 2005-07-28 2008-12-15 Harman Becker Automotive Sys Verbesserte kommunikation für innenräume von kraftfahrzeugen
EP1915818A1 (de) * 2005-07-29 2008-04-30 Harman International Industries, Incorporated Audio-abstimmsystem
US7904300B2 (en) * 2005-08-10 2011-03-08 Nuance Communications, Inc. Supporting multiple speech enabled user interface consoles within a motor vehicle
WO2008061205A2 (en) 2006-11-16 2008-05-22 Johnson Controls Technology Company Integrated vehicle communication system
US8140325B2 (en) * 2007-01-04 2012-03-20 International Business Machines Corporation Systems and methods for intelligent control of microphones for speech recognition applications
US20090055178A1 (en) * 2007-08-23 2009-02-26 Coon Bradley S System and method of controlling personalized settings in a vehicle
US8050419B2 (en) * 2008-03-31 2011-11-01 General Motors Llc Adaptive initial volume settings for a vehicle audio system
KR101567603B1 (ko) * 2009-05-07 2015-11-20 엘지전자 주식회사 멀티 음성 시스템의 동작 제어 장치 및 방법
US20110307250A1 (en) * 2010-06-10 2011-12-15 Gm Global Technology Operations, Inc. Modular Speech Recognition Architecture
US9641934B2 (en) * 2012-01-10 2017-05-02 Nuance Communications, Inc. In-car communication system for multiple acoustic zones
US20130304476A1 (en) * 2012-05-11 2013-11-14 Qualcomm Incorporated Audio User Interaction Recognition and Context Refinement
US9620146B2 (en) 2012-05-16 2017-04-11 Nuance Communications, Inc. Speech communication system for combined voice recognition, hands-free telephony and in-car communication

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None *

Also Published As

Publication number Publication date
US9620146B2 (en) 2017-04-11
WO2013172827A1 (en) 2013-11-21
EP2842123A1 (de) 2015-03-04
US20150120305A1 (en) 2015-04-30
US20170169836A1 (en) 2017-06-15
US9978389B2 (en) 2018-05-22

Similar Documents

Publication Publication Date Title
US9978389B2 (en) Combined voice recognition, hands-free telephony and in-car communication
CN110366156B (zh) 通讯处理方法、装置、设备、存储介质及音频管理系统
US8204550B2 (en) In-vehicle handsfree apparatus
US7257427B2 (en) System and method for managing mobile communications
EP1748636A1 (de) Verbesserte Kommunikation für Innenräume von Kraftfahrzeugen
CN105575399A (zh) 用于选择音频过滤方案的系统和方法
JP2010517328A (ja) 無線電話システムおよび該システムにおける音声信号の処理方法
MXPA06011459A (es) Metodo para controlar el procesamiento de salidas para una interfaz de comunicacion inalambrica de vehiculo.
WO2023056764A1 (zh) 一种车内通话方法、装置、系统及车辆
WO2005101674A1 (en) Methods for controlling processing of inputs to a vehicle wireless communication interface
EP3906705A1 (de) Hybrides autolautsprecher- und kopfhörerbasiertes system für akustische erweiterte realität
CN106888147B (zh) 一种车载即时通讯免提系统
CN113223550A (zh) 实时通话系统、实时通话系统的控制方法和驾驶设备
WO2014141574A1 (ja) 音声制御システム、音声制御方法、音声制御用プログラムおよび耐雑音音声出力用プログラム
JP2005328116A (ja) 車載システム
JP2003125068A (ja) 車両通話装置
US20230396925A1 (en) In-vehicle communication device and non-transitory computer-readable storage medium
CN115223582B (zh) 一种音频的噪声处理方法、系统、电子装置及介质
US11240653B2 (en) Main unit, system and method for an infotainment system of a vehicle
CN202907020U (zh) 一种车载通话系统
CN118041963A (zh) 车内音频设备控制系统及方法
CN116132564A (zh) 驾驶座舱蓝牙电话的输出方法、驾驶座舱系统和存储介质
CN117135603A (zh) 传输音频数据的装置及方法、蓝牙设备
JP2011160104A (ja) ハンズフリー用音声出力システム
JP2009141429A (ja) 車載用通信装置および通信システム

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20141124

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20171219

REG Reference to a national code

Ref country code: DE

Ref legal event code: R079

Ref document number: 602012064890

Country of ref document: DE

Free format text: PREVIOUS MAIN CLASS: G10L0015000000

Ipc: H04M0003560000

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

RIC1 Information provided on ipc code assigned before grant

Ipc: H04M 1/60 20060101ALN20190409BHEP

Ipc: G10L 15/22 20060101ALN20190409BHEP

Ipc: H04M 3/56 20060101AFI20190409BHEP

Ipc: G10L 21/0272 20130101ALN20190409BHEP

Ipc: G10L 21/02 20130101ALI20190409BHEP

Ipc: H04R 3/12 20060101ALN20190409BHEP

Ipc: H04R 27/00 20060101ALN20190409BHEP

Ipc: H04R 3/00 20060101ALN20190409BHEP

Ipc: G10L 15/00 20130101ALN20190409BHEP

Ipc: G10L 21/0216 20130101ALN20190409BHEP

RIC1 Information provided on ipc code assigned before grant

Ipc: H04R 3/12 20060101ALN20190416BHEP

Ipc: H04R 27/00 20060101ALN20190416BHEP

Ipc: H04R 3/00 20060101ALN20190416BHEP

Ipc: G10L 21/0272 20130101ALN20190416BHEP

Ipc: G10L 15/22 20060101ALN20190416BHEP

Ipc: G10L 21/02 20130101ALI20190416BHEP

Ipc: H04M 3/56 20060101AFI20190416BHEP

Ipc: H04M 1/60 20060101ALN20190416BHEP

Ipc: G10L 15/00 20130101ALN20190416BHEP

Ipc: G10L 21/0216 20130101ALN20190416BHEP

INTG Intention to grant announced

Effective date: 20190506

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602012064890

Country of ref document: DE

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 1192446

Country of ref document: AT

Kind code of ref document: T

Effective date: 20191115

REG Reference to a national code

Ref country code: NL

Ref legal event code: FP

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1192446

Country of ref document: AT

Kind code of ref document: T

Effective date: 20191016

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191016

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191016

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191016

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191016

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200116

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200217

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200117

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191016

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200116

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191016

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191016

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191016

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200224

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191016

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191016

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602012064890

Country of ref document: DE

PG2D Information on lapse in contracting state deleted

Ref country code: IS

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191016

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191016

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191016

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191016

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200216

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191016

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191016

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191016

26N No opposition filed

Effective date: 20200717

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191016

REG Reference to a national code

Ref country code: NL

Ref legal event code: MM

Effective date: 20200601

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200531

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200531

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191016

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200601

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20200531

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20200516

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200516

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200531

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200516

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200516

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200531

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191016

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191016

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191016

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191016

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20230321

Year of fee payment: 12