EP1864270A1 - Einrichtung zur kommunikation für personen mit sprach- und/oder hörbehinderung - Google Patents

Einrichtung zur Kommunikation für Personen mit Sprach- und/oder Hörbehinderung (Device for communication for persons with speech and/or hearing impairment)

Info

Publication number
EP1864270A1
EP1864270A1 (application EP06726156A)
Authority
EP
European Patent Office
Prior art keywords
stream
input
text
audio signals
audio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
EP06726156A
Other languages
English (en)
French (fr)
Inventor
Fabrice Francioli
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
eROCCA
Original Assignee
eROCCA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by eROCCA filed Critical eROCCA
Publication of EP1864270A1
Legal status: Ceased

Links

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B21/00 Teaching, or communicating with, the blind, deaf or mute
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/06 Electrically-operated educational appliances with both visual and audible presentation of the material to be studied

Definitions

  • The present invention relates to devices for enabling communication between persons, and more particularly to enabling communication between a person with a speech and/or hearing disability and another person, whether that person has full speaking ability or also has a communication disability.
  • The invention also relates to devices that simplify communication, or trigger warning signals, from persons with reduced mobility or autonomy (children, the elderly, the disabled, the sick), or persons who have just had an accident, to another party with full speaking ability.
  • Such impaired persons can usually communicate easily with able-bodied people close to them. When they have an accident, however, triggering a call for help is often impossible, either because of the lack of mobility caused by the accident or because of a loss of consciousness. The absence of an alarm in the first moments after an accident generally leads to a drastic aggravation of the situation, with complications that can go as far as the death of the person.
  • US 2004/0073432 A1 discloses a portable device for communication by a speech-disabled user, comprising:
  • a touch-screen text input interface capable of generating a stream of transmitted text data representing a message entered by the user on the text input interface; a speech synthesis device receiving the stream of transmitted text data from the text input interface and transforming it into a stream of transmitted audio signals,
  • a loudspeaker audio output interface receiving the stream of transmitted audio signals and transforming it into a stream of emitted acoustic waves representing the transmitted text data stream, audible in the immediate environment of the portable device.
  • The device described in that document is a remote Internet access terminal provided with communication aids.
  • The term "webpad" used there indicates a tablet personal computer (tablet PC) of roughly A4 page size and a weight of 1 to 2 kilograms.
  • This type of tablet PC ("webpad") is widespread, especially in hotels in the United States, for accessing various services (Internet, shopping, information) through the TV set.
  • The tablet PC exchanges messages over a wireless local connection with an external telephone or with an Internet connection device.
  • Several devices are therefore necessary, and the assembly described is not portable.
  • Such a "webpad" device is too heavy and cumbersome to be permanently available, especially when the user is a handicapped or diminished person who is moving about or standing. Both hands are needed to use it, along with a supporting surface.
  • The invention provides a portable device for communication by a speech-disabled user, comprising:
  • a text input interface capable of generating a stream of transmitted text data representing a message entered by the user on the text input interface; a speech synthesis device receiving the stream of transmitted text data from the text input interface and transforming it into a stream of transmitted audio signals,
  • an audio output interface receiving the stream of transmitted audio signals and transforming it into a stream of emitted acoustic waves representing the transmitted text data stream, audible in the immediate environment of the portable device; the device is implemented on a cellular telephone hardware base with personal digital assistant (PDA) features and an open operating system, with a touch screen, a computer architecture and a digital signal processing unit.
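  • The claimed chain (text input interface, speech synthesis device, audio output interface) can be sketched as a minimal software pipeline. The function names and the placeholder "audio" encoding below are illustrative assumptions, not part of the patent; a real device would drive an embedded TTS engine and a loudspeaker.

```python
# Minimal sketch of the claimed processing chain. The "audio" here is a
# placeholder byte encoding standing in for real PCM samples.

def text_input(message: str) -> str:
    """Text input interface: returns the stream of transmitted text data."""
    return message.strip()

def speech_synthesis(text_stream: str) -> bytes:
    """Speech synthesis device: turns the text data stream into an audio
    byte stream (placeholder for an embedded TTS engine)."""
    return text_stream.encode("utf-8")

def audio_output(audio_stream: bytes) -> str:
    """Audio output interface: would feed amplifier and loudspeaker;
    returns a human-readable trace instead of emitting sound."""
    return f"<{len(audio_stream)} bytes played on loudspeaker>"

def communicate(message: str) -> str:
    """Full chain: text input -> speech synthesis -> audio output."""
    return audio_output(speech_synthesis(text_input(message)))
```

A speech-disabled user's typed message thus traverses the three stages in order, ending as an acoustic wave stream in the device's immediate environment.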
  • A cellular telephone hardware base incorporating personal digital assistant (PDA) functions, also called a smartphone (or a communicating PDA, i.e. a PDA with the means and functions of a cellular telephone), designates a portable electronic device, integrated in a housing that fits in the user's hand, comprising a computer architecture (processors, memories, input-output, software), the electronic circuits of a cellular telephone, a touch screen and a digital signal processing unit.
  • This hardware base fully satisfies the mobility requirement of the device without compromising ergonomics or ease of use.
  • the voice synthesis device may advantageously comprise embedded speech synthesis software.
  • For example, an on-board voice synthesizer of a type dedicated to the automotive market or to global positioning systems can be used (Loquendo Automotive Solution from Loquendo, or Acapela Onboard from Acapela Group). In this way, the development and production cost of such a communication device is particularly low, and so is its volume.
  • In a preferred embodiment, the invention proposes that the text input interface be connected to the speech synthesis device by an automatic processing module constituting a programming interface (API), including a parameterizable and programmable interface routine for adaptation to different input interface modules.
  • A user may thus have access to the interface routine, for example to adapt a new input interface module compatible with a particular handicap, or to delete an unnecessary input interface module.
  • The text input interface may advantageously comprise several text input modules, each capable of generating a stream of transmitted text data from a solicitation of a distinct nature by the user, and transmitting that stream to the speech synthesis device.
  • The text input interface comprises at least two, and preferably all, of the input modules of the family comprising:
  • a first alphanumeric keyboard input module,
  • a second pictogram input module, each pictogram generating, on manual request from the user, a text data stream representing a pre-recorded word, group of words or sentence,
  • a third phoneme-key input module,
  • a fourth handwriting recognition input module,
  • a fifth sensory-glove input module provided with sensors and a motion decoder associating a word or a phoneme with each position of the hand(s) of the user wearing the sensory glove(s),
  • a sixth input module comprising an input electroacoustic transducer and an automatic speech recognition device for transforming speech acoustic signals emitted by the user into a text data stream.
  • A disabled or diminished person can thus usefully choose to communicate using one or another of the input modules, selected according to the stage of the communication, that is to say according to the message to be transmitted or according to the circumstances (handicap, accident...).
  • The third input module can advantageously use the phonemes of the BOREL-MAISONNY method or of Cued Speech (Langage Parlé Complété, LPC). In this way, the disabled person can draw on his or her knowledge of these very widely used methods.
  • The device according to the invention furthermore uses the radiofrequency transmitter of the hardware base, which receives the stream of audio signals emitted by the speech synthesis device and transmits it as radio waves.
  • the disabled person can communicate with a distant person.
  • The device thus makes it possible to send messages to a remote person, who will receive them with a standard receiving device.
  • the radiofrequency transmitter may be of the mobile phone type, for transmission and reception according to the GSM standard.
  • The device can advantageously use the radiofrequency receiver, which receives radio waves conveying reception audio signals and extracts from them a stream of reception audio signals.
  • An electroacoustic receiving transducer is then used to receive the stream of receiving audio signals and transform it into an audible reception sound wave stream in the immediate environment of the portable device.
  • Such a device is then suitable for use by a person with a speech disability but no hearing disability.
  • The device may further comprise an automatic speech recognition module, which receives the stream of reception audio signals and transforms it into a stream of text data sent to display means for displaying the text represented by the reception audio signal stream.
  • The device may include an automatic speech recognition module, which receives the stream of reception audio signals and transforms it into a data stream for animating the face of an avatar so as to allow lip reading.
  • The device may also comprise an audio switching device, which receives the stream of audio signals from the speech synthesis device and selectively transmits it to one or more outputs (local amplifier, radiofrequency transmitter, voice over IP module).
  • An input electroacoustic transducer is adapted to receive the voice of a local interlocutor and transform it into a stream of input audio signals; the audio switching device receives, on the one hand, the stream of input audio signals from the input electroacoustic transducer and/or, on the other hand, the stream of reception audio signals from the radiofrequency receiver, and transmits them in sequence to the automatic speech recognition module for display of the input message or the reception message.
  • FIG. 1 schematically illustrates a portable device according to a simplified embodiment of the invention
  • FIG. 2 illustrates the schematic structure of a portable device according to a second embodiment of the invention;
  • FIG. 3 is a diagrammatic view of the detail of the device of FIG. 2;
  • FIG. 4 illustrates an improvement of the device allowing use by a hearing-disabled user; FIGS. 5 and 6 illustrate two embodiments of the invention;
  • FIG. 7 illustrates the architecture of a communicating smartphone or PDA hardware base.
  • The device according to the invention is implemented on a cellular telephone hardware base integrating personal digital assistant (PDA) functions, of the smartphone or communicating-PDA type, equipped with an open operating system.
  • Such a system notably comprises a touch screen and a digital signal processing unit (for example a DSP) that is easily configurable to constitute the speech synthesis device and all or part of the text input interface 1.
  • Figure 7 schematically illustrates the architecture of such a smartphone or communicating-PDA hardware base.
  • The cellular telephone hardware base 30 comprises a radiofrequency transmitter-receiver circuit 8 and a signal management subassembly 32.
  • The signal management subassembly 32 comprises a first processor 33 constituting, with suitable software implemented in the processor 33 or stored in a memory 39, a digital signal processing unit, for example of the DSP type, capable of processing telecommunication signals at high speed.
  • The first processor 33 communicates with the radiofrequency transceiver 8 and manages simple interfaces such as an audio interface 34, light-emitting diodes 35, vibrating transducers 36, a telephone keypad 37, a SIM-type memory card 38, and a dedicated memory 39.
  • The personal digital assistant hardware base 40 comprises a second processor 41 which, with appropriate software, constitutes a management computer architecture and communicates with more complex interfaces such as an LCD screen 42, a touch screen 43, a flash memory 44, an audio interface 45, and optionally an interface circuit 46 to a WiFi-type communication circuit 47.
  • This hardware base thus comprises two distinct processors 33 and 41: the first processor 33 is dedicated to the specific management of telecommunication signals, while the second processor 41 is dedicated to the management of complex peripherals. Sufficient speed is thereby achieved for processing the telecommunication signals, together with sufficient computing power to manage the touch screen and the input.
  • The portable device comprises a text input interface 1, which the user can solicit to generate a stream of transmitted text data 2 representing the message the user wishes to enter on the text input interface 1.
  • The transmitted text data stream 2 is sent to a speech synthesis device 3, which transforms it into a stream of transmitted audio signals 4.
  • An audio output interface 5 receives the transmitted audio signal stream 4 and transforms it into an emitted acoustic wave stream 6, which represents the transmitted text data stream 2 and is audible in the immediate environment of the portable device.
  • Thus, a speech-disabled user can enter a message in the text input interface 1 by means other than speech, for example by a manual action, and the device transforms this solicitation into a stream of emitted acoustic waves 6 that an interlocutor can hear directly in the immediate environment of the portable device.
  • the text input interface 1 may for example be implemented in the form of the touch screen 43 (FIG. 7).
  • the voice synthesis function 3 can be performed by the first processor 33 and an associated program, for example an embedded voice synthesis software of a type dedicated to the automotive market or global positioning systems (GPS).
  • the audio output interface 5 may comprise a local amplifier 5a which supplies a loudspeaker 5b.
  • The input interface 1 is connected to the speech synthesis device 3 by an automatic processing module comprising a parameterizable and programmable interface subprogram for adaptation to different input interface modules.
  • This automatic processing module, with its parameterizable and programmable interface subroutine, constitutes an open programming interface 7 (open API), which makes it possible to develop a new text input means without modifying the architecture of the device or involving the manufacturer. This makes it easy to adapt the device to the various handicaps or problems that force a user to use particular means to enter messages in the text input interface 1 or to trigger alarms. It is also useful when a handicap, or a combination of handicaps, is not covered by the standard device.
  • The text input interface 1 comprises several text input modules 1a, 1b, 1c, 1d, 1e and 1f, each capable of generating a stream of transmitted text data 2 from a solicitation of a distinct nature by the user, and transmitting the transmitted text data stream 2 to the speech synthesis device 3 via the programming interface 7.
  • the first text input module 1a may for example be an alphanumeric keyboard, allowing the input of text, numbers and punctuation.
  • The keyboard can be virtual, in the form of the touch screen 43 (Figure 7), or in the form of a keyboard projected onto a surface by visible radiation, or a physical keyboard.
  • The second input module 1b may comprise a series of pictograms, each generating, on manual request from the user, a text data stream representing a word, a group of words or a pre-recorded sentence.
  • For example, a pictogram can be associated with a ready-made sentence such as "Hello, my name is Fanny, I am mute and speak with a voice synthesizer", or "Hello, my name is Fanny, I live at 48 rue Vendians and I need ... ".
  • Pictograms are convenient for repetitive sentences. The user can create, modify or delete a pictogram and its associated sentence at will.
  • A sentence may also contain variable elements depending on external parameters. For example, if it is 8 pm, the presentation pictogram will say "Good evening!" instead of "Hello!".
  • The third input module 1c may include phoneme, space and punctuation keys. This module allows the creation of phonetic or correctly spelled text from a selection of phonemes on the touch screen. To accelerate the learning of these phonemes, it is advantageous to choose existing phoneme sets such as those of the BOREL-MAISONNY method or of Cued Speech (Langage Parlé Complété, LPC), or any other decomposition of sentences into phonemes. For the text to be interpreted correctly with the appropriate intonation, the phonemes are enriched with symbols that create the spaces between words and the punctuation. The concatenation of phonemes, spaces and punctuation yields a sentence usable by the speech synthesizer.
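  • The concatenation step described above can be sketched as follows. The phoneme inventory, the space key symbol and the punctuation set below are toy assumptions for illustration, not the BOREL-MAISONNY or LPC inventories themselves.

```python
# Sketch of the phoneme-key input module: phoneme, space and punctuation
# keys are concatenated into a sentence usable by the synthesizer.
# The key inventory is a made-up toy set.

PHONEME_KEYS = {"b": "b", "on": "on", "j": "j", "ou": "ou", "r": "r"}
SPACE_KEY = "_"            # hypothetical symbol creating a word space
PUNCT_KEYS = {".", "!", "?", ","}

def keys_to_sentence(keys: list) -> str:
    """Turn a sequence of phoneme/space/punctuation keys into text."""
    out = []
    for k in keys:
        if k == SPACE_KEY:
            out.append(" ")          # space symbol separates words
        elif k in PUNCT_KEYS:
            out.append(k)            # punctuation drives intonation
        else:
            out.append(PHONEME_KEYS[k])
    return "".join(out)
```

For instance, the key sequence b, on, j, ou, r followed by "!" yields a sentence the speech synthesis device 3 can render with the right intonation.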
  • the fourth input module 1d can be a handwriting recognition screen. This handwriting is transformed into a string of characters interpretable by a computer system.
  • The fifth input module 1e may comprise one or two sensory gloves, provided with sensors and a motion decoder associating a word or a phoneme with each position of the hands of the user wearing the sensory glove(s).
  • This module allows the user to build sentences using the gestures of a given sign language. It allows, for example, a user accustomed to the BOREL-MAISONNY method to construct sentences with his or her usual signs.
  • Depending on the sign language used, one or two sensory gloves allow the motion decoder to associate a phoneme or a word with each position of the hands.
  • The sensory gloves can be replaced by a video camera with appropriate signal processing, or by beams of waves positioned in three dimensions whose interruption is detected. The text is enriched with new signs to create the spaces between words and the punctuation.
  • The sixth input module 1f may include an input electroacoustic transducer and an automatic speech recognition device, for transforming speech acoustic signals emitted by the user into a text data stream 2.
  • This module makes it possible to correct inaudible sentences, for re-education purposes or for conversation: it serves a deaf person who wishes to express himself or herself orally. The user speaks into the device.
  • The automatic speech recognition device transcribes the dictated sentence into text, which is corrected automatically if possible, or manually with the help of the user who dictated it.
  • The device can also impose a rhythm on speech thanks to an integrated (visual or auditory) metronome.
  • The programming interface 7 allows a third-party developer to transmit text to the system without having to go through the input modes predefined by the application.
  • This abstraction layer provides flexibility in product development: the text input modules 1a-1f can be developed entirely independently, by different teams, with different techniques. In addition, opening this layer to independent developers makes it possible to create new text input modules tailored to particular disabilities without conflicting with the original application.
  • The speech synthesis device 3 can comprise two distinguishable successive modules: a text generator module 3a and a speech synthesis module proper 3b.
  • The text generator module 3a prepares the text for speech synthesis. For example, it can inject standard vocal changes into the synthesis, playing on the timbre, the attack, the speech rate and the sound volume, in order to personalize the synthesized voice.
  • The speech synthesis module proper 3b transforms the text data stream into a stream of transmitted audio signals 4.
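  • The preparation done by module 3a can be pictured as wrapping the text in prosody controls before it reaches module 3b. The tag syntax below is invented for illustration; real synthesis engines each define their own control markup.

```python
# Sketch of the text generator module 3a: injecting vocal changes
# (rate, volume) as hypothetical markup around the text, so the
# synthesis module 3b can personalize the voice.

def prepare_text(text: str, rate: float = 1.0, volume: float = 1.0) -> str:
    """Wrap the text with made-up prosody control tags."""
    return f'<voice rate="{rate}" volume="{volume}">{text}</voice>'
```

Module 3b would then parse such controls and render the audio stream 4 with the requested timbre, attack, rate and volume.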
  • the device further uses the radio frequency transmitter 8, which receives the stream of transmitted audio signals 4 from the voice synthesis device 3 and which transmits it in the form of radio waves 9.
  • the radiofrequency transmitter 8 incorporates a radio frequency receiver, which receives radio waves conveying reception audio signals and extracts a stream of reception audio signals 10.
  • The stream of reception audio signals 10 is sent to an electroacoustic receiving transducer, for example constituted by the amplifier 5a and the loudspeaker 5b, which transforms it into an audible reception acoustic wave stream in the immediate environment of the portable device.
  • Applications can be implemented on the device to monitor its use in real time in one of its operating modes.
  • For example, the device can be used as a television remote control. Prolonged non-use of the device then triggers requests for action towards the user (erasing errors, answering questions). An unsatisfactory response to these stimuli triggers alarm systems via the radiofrequency module 8.
  • Applications can also be implemented on the device to follow in real time the sound environment of the diminished person, in order to automatically detect a sound or a series of sounds.
  • The device can thus analyze the ambient noise in real time in order to detect calls for help addressed to the system. One use is to enable diminished persons who have fallen but are still conscious to make a call.
  • The system analyzes the ambient noise in real time. When an abnormal noise is detected in the person's environment, requests for action are sent to the user (erasing errors, answering questions). An unsatisfactory response to these stimuli triggers alarm systems via the radiofrequency module 8.
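  • The escalation logic common to both monitoring applications (prolonged non-use or abnormal noise, then a prompt, then an alarm) can be sketched as a small decision function. The threshold value and the state names are illustrative assumptions.

```python
# Sketch of the monitoring escalation: a trigger condition (prolonged
# non-use, or an abnormal ambient sound) first prompts the user; only
# an unanswered prompt raises an alarm via the radiofrequency module 8.

def next_action(seconds_idle: int, prompted: bool, answered: bool,
                idle_limit: int = 3600) -> str:
    """Return the action the device should take next."""
    if seconds_idle < idle_limit:
        return "ok"              # normal use, nothing to do
    if not prompted:
        return "prompt_user"     # request an action (erase error, answer)
    if answered:
        return "ok"              # the user reacted: no alarm
    return "raise_alarm"         # unsatisfactory response: alert via RF
```

The same function applies whether the trigger clock is reset by remote-control use or by an abnormal-noise detector.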
  • The device of FIG. 2 further comprises an audio switching device 11, which receives the stream of audio signals 4 coming from the speech synthesis device 3 and selectively transmits it to the local amplifier 5a of the audio output interface, and/or to the radiofrequency transmitter 8, and/or to a voice over IP module 12, which transposes the stream of audio signals 4 so as to be usable on an IP network and transmits the transposed stream to the radiofrequency transmitter 8.
  • FIG. 3, now considered, illustrates in more detail the operating mode of the audio switching device 11.
  • The audio switching device 11 distributes the audio stream 4 from the speech synthesis device 3 to the devices concerned. According to the context of use 13 of the device by the user, the audio switching device 11 decides to feed one or more voice outputs.
  • For a remote conversation with local amplified listening, the audio switching device 11 redirects the audio stream 4 to the radiofrequency transmitter 8 and simultaneously to the local amplifier 5a.
  • For a local conversation, the audio switching device 11 redirects the audio stream 4 to the local amplifier 5a only.
  • In voice over IP (VoIP) mode without amplified listening, the audio switching device 11 redirects the audio stream 4 to the voice over IP module 12 only.
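  • The routing decision of the audio switching device 11 can be sketched as a mode-to-sinks table. The mode names are assumptions inferred from the three cases described above; the sink labels stand for the amplifier 5a, the radiofrequency transmitter 8 and the VoIP module 12.

```python
# Sketch of the audio switching device 11: depending on the context of
# use, the audio stream 4 is fed to one or more voice outputs.

ROUTES = {
    "local": {"amplifier"},                       # local conversation
    "phone_with_listening": {"amplifier", "rf"},  # remote + amplified
    "voip": {"voip"},                             # VoIP, no local audio
}

def switch_audio(mode: str, audio_stream: bytes) -> dict:
    """Return a mapping of the sinks that receive the stream in this mode."""
    return {sink: audio_stream for sink in ROUTES[mode]}
```

Adding a new context of use then amounts to adding one entry to the table, without touching the synthesis chain upstream.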
  • The device can also be adapted so that a diminished user, for example an elderly or injured person, can emit an audio message locally or remotely, the message being perceived by the interlocutor as a request for help or as a call for help generated automatically by the device.
  • The speech-disabled or diminished user can receive a response in voice form directly, as with the devices commonly used by able-bodied people.
  • Figure 4 illustrates an improvement of the previous device, allowing use by a hearing-disabled user.
  • This device again comprises the radiofrequency transmitter 8, the voice over IP device 12 and the audio switching device 11.
  • An input electroacoustic transducer 14 is used, able to receive the voice of a local interlocutor and to transform it into an input audio signal stream 15, transmitted to the audio switching device 11.
  • The audio switching device 11 thus receives, on the one hand, the stream of input audio signals 15 coming from the input electroacoustic transducer 14 and, on the other hand, the stream of reception audio signals 10 coming from the radiofrequency transceiver 8 or the voice over IP device 12.
  • The audio switching device 11 transmits them in sequence to an automatic speech recognition module 16, which transforms them into a stream of text data 17 transmitted to a display 18, which displays the input message contained in the input audio signal stream 15 or the reception message contained in the reception audio signal stream 10.
  • The invention described above has been implemented and demonstrated on an iPAQ hp5540 (trademark) under Pocket PC 2003 (trademark).
  • This iPAQ has a WiFi interface for voice communication by VoIP over a wireless Internet connection.
  • The demonstration uses a SaySo speech synthesis engine (trademark) provided by the company ELAN.
  • Sentence entry is carried out with a virtual keyboard containing the phonemes of the BOREL-MAISONNY method. A space key and the main punctuation keys were added to obtain a synthetic voice that is understandable, pleasant and human. See the drawing of Figure 5.
  • The invention may also provide automatic detection means for a protective case, described below.
  • The device incorporates an algorithm that includes a sequence for listening to the sounds generated by the portable device itself, and that analyzes the amplitude of the high frequencies of the sounds received, comparing it with a given threshold. If this amplitude is below the threshold, the device deduces that a mechanical protection is present, and the algorithm modifies the processing applied to the audio stream accordingly, to correct the influence of the mechanical protection on the emitted sounds. It will be understood that such sound correction means can be used independently of the other means previously described.
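  • The high-frequency comparison at the heart of this algorithm can be sketched with a short spectral analysis. The sample rate, cutoff frequency and threshold below are illustrative assumptions; the patent specifies none of these values.

```python
# Sketch of the protective-case detection: the device plays a test
# sound, captures it, and compares the amplitude of its high
# frequencies against a threshold. Attenuated highs suggest a
# mechanical protection is fitted over the loudspeaker/microphone.

import numpy as np

def case_detected(samples: np.ndarray, rate: int = 16000,
                  cutoff_hz: float = 4000.0,
                  threshold: float = 0.05) -> bool:
    """True if the high-frequency amplitude is below the threshold."""
    spectrum = np.abs(np.fft.rfft(samples)) / len(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    hf_amplitude = spectrum[freqs >= cutoff_hz].max()
    return bool(hf_amplitude < threshold)
```

When a case is detected, the device would then boost the attenuated band (or otherwise equalize the stream) to compensate for the mechanical protection.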

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Telephonic Communication Services (AREA)
  • Telephone Function (AREA)
EP06726156A 2005-03-31 2006-03-31 Einrichtung zur kommunikation für personen mit sprach- und/oder hörbehinderung Ceased EP1864270A1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR0503386A FR2884023B1 (fr) 2005-03-31 2005-03-31 Dispositif pour la communication par des personnes handicapees de la parole et/ou de l'ouie
PCT/FR2006/000707 WO2006103358A1 (fr) 2005-03-31 2006-03-31 Dispositif pour la communication par des personnes handicapees de la parole et/ou de l'ouïe

Publications (1)

Publication Number Publication Date
EP1864270A1 true EP1864270A1 (de) 2007-12-12

Family

ID=34979245

Family Applications (1)

Application Number Title Priority Date Filing Date
EP06726156A Ceased EP1864270A1 (de) 2005-03-31 2006-03-31 Einrichtung zur kommunikation für personen mit sprach- und/oder hörbehinderung

Country Status (5)

Country Link
US (1) US8082152B2 (de)
EP (1) EP1864270A1 (de)
CA (1) CA2602633C (de)
FR (1) FR2884023B1 (de)
WO (1) WO2006103358A1 (de)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5318572B2 (ja) * 2005-07-15 2013-10-16 モエ,リチャード,エイ 音声発音教育装置並びに音声発音教育方法および音声発音教育プログラム
US8195457B1 (en) * 2007-01-05 2012-06-05 Cousins Intellectual Properties, Llc System and method for automatically sending text of spoken messages in voice conversations with voice over IP software
US8595642B1 (en) 2007-10-04 2013-11-26 Great Northern Research, LLC Multiple shell multi faceted graphical user interface
CN102044128A (zh) * 2009-10-23 2011-05-04 鸿富锦精密工业(深圳)有限公司 紧急事件报警系统及方法
US20130332952A1 (en) * 2010-04-12 2013-12-12 Atul Anandpura Method and Apparatus for Adding User Preferred Information To Video on TV
US8751215B2 (en) * 2010-06-04 2014-06-10 Microsoft Corporation Machine based sign language interpreter
US8930192B1 (en) * 2010-07-27 2015-01-06 Colvard Learning Systems, Llc Computer-based grapheme-to-speech conversion using a pointing device
US8812973B1 (en) 2010-12-07 2014-08-19 Google Inc. Mobile device text-formatting
US9717090B2 (en) 2010-12-31 2017-07-25 Microsoft Technology Licensing, Llc Providing notifications of call-related services
US8963982B2 (en) * 2010-12-31 2015-02-24 Skype Communication system and method
US10291660B2 (en) 2010-12-31 2019-05-14 Skype Communication system and method
US10404762B2 (en) 2010-12-31 2019-09-03 Skype Communication system and method
US9072478B1 (en) * 2013-06-10 2015-07-07 AutismSees LLC System and method for improving presentation skills
CN107004404B (zh) * 2014-11-25 2021-01-29 三菱电机株式会社 信息提供系统
WO2016196041A1 (en) * 2015-06-05 2016-12-08 Trustees Of Boston University Low-dimensional real-time concatenative speech synthesizer
US10854110B2 (en) 2017-03-03 2020-12-01 Microsoft Technology Licensing, Llc Automated real time interpreter service
US11106905B2 (en) * 2018-09-04 2021-08-31 Cerence Operating Company Multi-character text input system with audio feedback and word completion
US11264035B2 (en) 2019-01-05 2022-03-01 Starkey Laboratories, Inc. Audio signal processing for automatic transcription using ear-wearable device
US11264029B2 (en) 2019-01-05 2022-03-01 Starkey Laboratories, Inc. Local artificial intelligence assistant system with ear-wearable device
CN112927704A (zh) * 2021-01-20 2021-06-08 中国人民解放军海军航空大学 一种沉默式全天候单兵通信系统

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5146502A (en) * 1990-02-26 1992-09-08 Davis, Van Nortwick & Company Speech pattern correction device for deaf and voice-impaired
US5210689A (en) * 1990-12-28 1993-05-11 Semantic Compaction Systems System and method for automatically selecting among a plurality of input modes
JP3624733B2 (ja) * 1999-01-22 2005-03-02 株式会社日立製作所 手話メール装置及び手話情報処理装置
GB2357943B (en) * 1999-12-30 2004-12-08 Nokia Mobile Phones Ltd User interface for text to speech conversion
US20030028379A1 (en) * 2001-08-03 2003-02-06 Wendt David M. System for converting electronic content to a transmittable signal and transmitting the resulting signal
US20030223455A1 (en) * 2002-05-29 2003-12-04 Electronic Data Systems Corporation Method and system for communication using a portable device
GB2389762A (en) * 2002-06-13 2003-12-17 Seiko Epson Corp A semiconductor chip which includes a text to speech (TTS) system, for a mobile telephone or other electronic product
US20040073432A1 (en) * 2002-10-15 2004-04-15 Stone Christopher J. Webpad for the disabled
EP1431958B1 (de) * 2002-12-16 2018-07-18 Sony Mobile Communications Inc. Device containing or connectable to an apparatus for generating speech signals, and computer program therefor
WO2004114107A1 (en) * 2003-06-20 2004-12-29 Nadeem Mohammad Qadir Human-assistive wearable audio-visual inter-communication apparatus.

Non-Patent Citations (1)

Title
See references of WO2006103358A1 *

Also Published As

Publication number Publication date
US20080195394A1 (en) 2008-08-14
CA2602633A1 (fr) 2006-10-05
FR2884023A1 (fr) 2006-10-06
FR2884023B1 (fr) 2011-04-22
WO2006103358A1 (fr) 2006-10-05
US8082152B2 (en) 2011-12-20
CA2602633C (fr) 2014-08-19

Similar Documents

Publication Publication Date Title
CA2602633C (fr) Device for communication by persons with speech and/or hearing impairment
US20220335941A1 (en) Dynamic and/or context-specific hot words to invoke automated assistant
US7676372B1 (en) Prosthetic hearing device that transforms a detected speech into a speech of a speech form assistive in understanding the semantic meaning in the detected speech
CN113470641B (zh) Voice trigger for a digital assistant
Robitaille The illustrated guide to assistive technology and devices: Tools and gadgets for living independently
US20140171036A1 (en) Method of communication
WO2020040745A1 (en) Dynamic and/or context-specific hot words to invoke automated assistant
Dhanjal et al. Tools and techniques of assistive technology for hearing impaired people
US20070204187A1 (en) Method, system and storage medium for a multi use water resistant or waterproof recording and communications device
EP1998729A1 (de) System for hearing-impaired persons
US11917092B2 (en) Systems and methods for detecting voice commands to generate a peer-to-peer communication link
Kvale et al. Multimodal Interfaces to Mobile Terminals–A Design-For-All Approach
Robitaille The Illustrated Guide to Assistive Technology and Devices (EasyRead Super Large 20pt Edition)
FR2901396A1 (fr) Portable, interactive and universal vocal or non-vocal communication device for speech-impaired and mute persons
Staš et al. Hear IT–A Mobile Assistive Technology for Hearing Impaired People in Slovak
WO2003015884A1 (fr) Massively online games comprising a voice modulation and compression system
Johansen et al. Mapping auditory percepts into visual interfaces for hearing impaired users
EP1640939A1 (de) Communication device
FR3106009A1 (fr) Method and device for selecting entertainment via a virtual personal assistant on board a motor vehicle, and motor vehicle incorporating same
EP1745466A2 (de) Improved control device for reading texts for the visually impaired
Cullen et al. Vocate: Auditory Interfaces for Location-based Services
WO2007090937A1 (fr) Portable device emitting sounds following pressure exerted on keys
WO2010089262A1 (fr) Method and device for natural universal writing

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20070929

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

DAX Request for extension of the european patent (deleted)
17Q First examination report despatched

Effective date: 20130418

REG Reference to a national code

Ref country code: DE

Ref legal event code: R003

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED

18R Application refused

Effective date: 20170115