WO2020079655A1 - Assistance system and method for users having communicative disorder - Google Patents


Info

Publication number
WO2020079655A1
Authority
WO
WIPO (PCT)
Prior art keywords
audio signals
input audio
users
words
meaningful
Prior art date
Application number
PCT/IB2019/058894
Other languages
French (fr)
Inventor
Andrea Previato
Original Assignee
Andrea Previato
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Andrea Previato filed Critical Andrea Previato
Publication of WO2020079655A1 publication Critical patent/WO2020079655A1/en

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B21/00: Teaching, or communicating with, the blind, deaf or mute

Definitions

  • It is possible to mark each word present in the vocabulary, for example through a flag, so as to identify which words of that vocabulary have already been linked to an incorrect vocal word recorded by the disabled user.
  • A secondary flag may be provided, suitable for identifying and, possibly, listening to each single recording linked to said meaningful word.
  • Finally, the system object of the present invention provides for the possibility of ordering the words in an order suggested by the manufacturer of the device 1, pre-arranged in lists of primary communicative importance, possibly suggesting to the disabled user's assistant which words are normally most needed and used on a daily basis.

Abstract

Assistance system for users (2) having communicative disorders, comprising acquisition means (10) of input audio signals (21) emitted by at least one of said users (2), and reproduction means (11) of output audio signals (3). In particular, processing means (12) of said input audio signals (21) are present, configured in such a way as to pair one or more meaningful words with each input audio signal (21). Said processing means (12) comprise at least one storage unit (13), within which databases containing pairing tables of the input audio signals (21) with meaningful words are present; said processing means (12) and / or said reproduction means (11) comprise means for generating an output audio signal (3) relating to said meaningful words.

Description

ASSISTANCE SYSTEM AND METHOD FOR USERS HAVING
COMMUNICATIVE DISORDER
DESCRIPTION
The present invention relates to an assistance system for users having communicative disorders.
The system comprises acquisition means of input audio signals emitted by at least one of said users and means for the reproduction of output audio signals.
The present invention is particularly aimed at users having communicative speech disorders which prevent them from expressing themselves correctly to potential interlocutors.
Generally, such users communicate incorrectly, through distorted or incomplete words, or even through hisses and sounds that are incomprehensible.
It is evident that such disorder represents an obstacle that is difficult to overcome for the growth and social relationships of users having these problems .
Such disorder is further aggravated by the progress of modern technology, which increasingly relies on software and devices, such as smartphones, tablets, PCs or the like, featuring voice-activated controls.
In the state of the art, the only solution is the presence of people working alongside such users, trying to interpret their language based on experience, on knowledge of the disabled users and on the gestures of the latter. However, such solution is particularly inefficient.
First of all, the disabled person is not independent in everyday life and is always in need of a person by his / her side; for disabled people who cannot rely on close people, such solution is also particularly expensive, requiring the payment of specialized staff and speech therapists helping the disabled person on a daily basis.
There is therefore a need, not satisfied by systems known in the state of the art, for an assistance system for disabled persons resolving the above-described disadvantages.
The purpose of the present patent application is therefore to allow a person having any form of communicative speech disorder to transform his / her words and voice messages into perfectly comprehensible and correct words, and to allow these disabled people to speak and communicate correctly with anyone, while continuing to utilize their own expressions and words, however incorrect and difficult to understand, thereby improving their quality of life.
The present invention achieves the above purposes by providing a system as previously described, wherein processing means of the input audio signals are present, configured in such a way as to pair one or more meaningful words with each input audio signal.
Furthermore, the processing means comprise at least one storage unit, within which there are databases containing pairing tables of the input audio signals with meaningful words. Finally, the processing means and / or the reproduction means comprise means for generating an output audio signal relating to the meaningful words.
It is evident that the system object of the present invention provides a voice recognition system, called "Anch'io", allowing users with speech disorders to express themselves correctly.
In fact, the system integrates a "speaker dependent" type system that processes the input audio signals in order to obtain the pairing with meaningful words, preferably in text format.
Once the pairing is carried out, a text recognition system, such as a speech synthesizer of the "Text To Speech" (TTS) type or the like, transforms the meaningful words into a correct phonetic word (audio).
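The two-stage flow just described (a speaker-dependent stage pairing a sound with a text word, then a TTS stage voicing it) can be sketched as follows. The table entries, function names and the byte-string TTS stub are illustrative assumptions, not the patent's actual implementation:

```python
# Two-stage pipeline sketch: 1) a speaker-dependent lookup pairs a recorded
# sound with a meaningful word in text format; 2) a TTS stage turns the
# text into audio. Sound identifiers and words are hypothetical examples.

PAIRING_TABLE = {
    "ba-o": "bagno",    # distorted utterance -> meaningful word
    "aa-ua": "acqua",
}

def pair_input(sound_id):
    """Speaker-dependent stage: map a recognised sound to a text word."""
    return PAIRING_TABLE.get(sound_id)

def text_to_speech(word):
    """TTS stage stub: a real system would synthesise correct phonemes."""
    return f"<audio:{word}>".encode()

def assist(sound_id):
    """Full pipeline: unrecognised sounds produce no output signal."""
    word = pair_input(sound_id)
    return text_to_speech(word) if word is not None else None
```

In a real device the first stage would operate on acoustic features rather than string identifiers, and the second stage would call an actual speech synthesizer.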
As will appear from the illustration of some exemplary embodiments, such word may be voiced by the reproduction means, which can comprise, for example, speakers.
Pairing the input audio signals with the meaningful words can, for example, occur according to predetermined schemes and algorithms provided on the basis of users and expert operators.
Advantageously, however, the system object of the present invention provides for customizing the pairing tables for each user utilizing such system.
In fact, according to a preferred embodiment, the processing unit comprises a user interface, which provides an audio signal acquisition unit and means for pairing said audio signals with meaningful words. Such configuration allows creating specific databases for each user, containing the pairings between his / her own sounds and the meaningful words.
This variant embodiment is valuable in two respects, both particularly advantageous.
First of all, it allows a user to calibrate, on the basis of his / her disorder, the reproduction of output audio signals so that they correspond exactly to what he / she wanted to express, guaranteeing high system customization and user satisfaction.
Secondly, it yields databases and pairing tables which can be saved and reutilized for users having speech disorders similar to those of the user who created them.
Furthermore, since speech disorders change over time, such databases or pairing tables can be modified on the basis of changes in the disorder; they also allow pairing a single meaningful word with one or more input audio signals, facilitating the use of the system object of the present invention and improving the quality of life of the disabled person.
In addition to the disorder, which can vary over time, contingent events may cause variation in the audio signals emitted by the disabled person, even when those signals are aimed at expressing the same word: fatigue, agitation, emotion and physical decay are all factors which can cause variation in the expression of the disabled person.
For this reason, pairing more than one sound with a single meaningful word can be a particularly advantageous aspect.
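A minimal sketch of such a many-to-one pairing table, modifiable by the assistant as the user's expression changes, might look as follows (the class and its method names are assumptions for illustration):

```python
# Per-user pairing table: several recorded sounds (varying with fatigue,
# agitation, emotion, etc.) may all map to the same meaningful word.

class PairingTable:
    def __init__(self):
        self._sound_to_word = {}

    def pair(self, sound_id, word):
        """Pair one more recorded sound with a meaningful word; the
        assistant can call this again when the user's expression changes."""
        self._sound_to_word[sound_id] = word

    def lookup(self, sound_id):
        """Return the meaningful word paired with a sound, if any."""
        return self._sound_to_word.get(sound_id)

    def sounds_for(self, word):
        """All sounds currently paired with a given meaningful word."""
        return [s for s, w in self._sound_to_word.items() if w == word]
```

Because each sound maps to exactly one word while a word may have many sounds, new variants of an utterance can be added over time without disturbing the existing pairings.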
The person assisting the disabled person may also modify the pairing tables or databases, should he / she become aware of a change in the expression of the disabled person.
Finally, the careful analysis of such databases and pairing tables, stored within the processing means belonging to the system object of the present invention, allows checking the correctness of the words paired with the different sounds.
It is evident that the database of the speaker dependent system allows creating a speech vocabulary customized by the user (and therefore also vocally incorrect), linked to a grammatical vocabulary of meaningful words in text format already inserted by the manufacturer, while the "TTS" type system contains in its database a correct speech vocabulary, standardized by the manufacturer and not customizable by the user, likewise linked to a grammatical vocabulary of meaningful words in text format, also inserted by the manufacturer.
According to a possible embodiment of the system object of the present invention, it is possible to combine into a single database the meaningful words in text format, communicating, on the one hand, with the input audio signals, in order to pair such words with the sounds emitted by the disabled person, and, on the other hand, with the TTS system, in order to pair such meaningful words with phonemes which are also meaningful.
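One way to picture such a combined database is a single record per meaningful word, linking the user's customised sounds on one side and the manufacturer's standard phonetic form on the other. The field names and the phoneme notation below are assumptions:

```python
# Combined-database sketch: one entry links the user's customised sounds,
# the meaningful word in text format, and a standard phonetic form used
# by the TTS side. All field names and values are illustrative.
from dataclasses import dataclass, field

@dataclass
class VocabularyEntry:
    word: str                                        # meaningful word, text format
    phonemes: str                                    # standard phonetic form (TTS side)
    user_sounds: list = field(default_factory=list)  # customised recorded sounds

db = {
    "acqua": VocabularyEntry("acqua", "'ak.kwa", ["aa-ua", "a-ka"]),
}

def word_for_sound(sound_id):
    """Find the entry whose customised sounds include the acquired one."""
    for entry in db.values():
        if sound_id in entry.user_sounds:
            return entry
    return None
```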
It is further specified that the system object of the present invention can involve terms of different languages and, preferably, the user may choose his / her own language.
The customization of the database by the user also allows inserting phrases typical of the user, idioms, nicknames, etc., which would otherwise be difficult to recognize, as they derive directly and exclusively from the user.
In order to improve the portability of the system object of the present invention, it is possible to provide for the acquisition means, the processing means and the reproduction means to be inserted within a user device.
The result is an easily portable and usable recorder / translator for disabled people.
Moreover, such device preferably has communication means with one or more operating units.
For example, the user device can be connected with smartphones, tablets or the like to allow the voice activation of such devices or even simply to be able to perform a call so that the interlocutor may perfectly understand the disabled person.
Furthermore, the user device may have a transmission / reception unit suitable for the wireless transmission / reception of data with one or more remote units.
Such configuration can allow, for example, the updating of databases or software programs loaded within the storage unit .
Given the advantageous aspects of the system previously described, the present invention also relates to an assistance method for users having communicative disorder, comprising the following steps :
a) acquisition of input audio signals emitted by at least one of said users,
c) reproduction of output audio signals.
In particular, a step b), relative to the processing of the input audio signals, is provided between the acquisition step and the reproduction step.
Such processing step provides for performing a pairing of each input audio signal with at least one meaningful word, after which it provides for performing a generation of an output audio signal relating to the meaningful words .
As previously described, the input audio signals, relating to incorrectly pronounced words or poorly comprehensible sounds, are processed in order to obtain meaningful words, preferably in written text format.
Such written text is then processed in order to emit sounds based on the meaningful words, resulting in meaningful speech.
The pairing step advantageously relies on pairing tables and databases, the generation of which is provided prior to the acquisition of the user audio signals.
Furthermore, such tables are preferably customized for each user.
Finally, according to an embodiment of the method object of the present invention, a step d) relating to the generation of control signals for controlling one or more operating units is provided.
The term "control signals" can mean the actual generation of signals to activate voice-activated devices, or for example also the generation of words that are utilized during a common telephone conversation .
These and other features and advantages of the present invention will become more apparent from the following description of some exemplary embodiments illustrated in the attached drawings wherein:
Figure 1 illustrates a schematic diagram, through functional blocks, of a preferred embodiment of the system object of the present invention;
Figure 2 illustrates a flow chart suitable for describing a possible embodiment of the method object of the present invention.
It is specified that the figures attached to the present patent application report some embodiments of the assistance system and method for disabled people object of the present invention to better understand its advantages and characteristics .
Such embodiments are therefore to be intended merely for illustrative and non-limiting purposes as to the inventive concept of the present invention, namely to allow a disabled person having speech defects to correctly express himself / herself with one or more interlocutors .
With particular reference to Figure 1, the assistance system for disabled users is depicted in a single user device 1, having acquisition means 10 of input audio signals 21 emitted by a disabled user 2 and reproduction means 11 of output audio signals 3.
The acquisition means 10 can consist of, for example, a device of the microphone type or the like, as in the known voice recorders, and are suitable for acquiring and storing the input audio signal 21.
The reproduction means 11 are suitable for generating an output audio signal 3 and can consist of, for example, devices of the speaker type, also provided in common voice recorders.
The input audio signals 21 consist of words that are incorrect, or of sounds or hisses that are incomprehensible for an interlocutor of the user 2.
For this reason, the device 1 comprises processing means 12 which communicate with the acquisition means 10 and with the reproduction means 11.
The processing means 12 comprise at least one storage unit 13 having databases containing pairing tables of the input audio signals with meaningful words .
The processing means 12 are therefore configured in such a way as to pair with each input audio signal 21 one or more meaningful words.
As will be clearly illustrated in Figure 2 relating to an embodiment of the method object of the present invention, the input audio signals are acquired by the acquisition means 10 and stored, either within the storage unit 13 or within the acquisition means themselves .
The processing means 12 then search within the storage unit 13 which meaningful word is paired with the recorded input audio signal 21.
Once the meaningful word has been found, the processing means 12 communicate such word to the reproduction means 11, which have means for generating the output audio signal 3 relating to the meaningful words .
Recognising and identifying the type of acquired input audio signal 21 can occur according to one of the ways known in the state of the art: for example, it is possible to compute the signal spectrum, find the matching spectrum shape within the storage unit 13 and discover with which meaningful word it is paired.
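A toy version of this spectrum-shape matching is sketched below, using a naive DFT and Euclidean distance to the stored templates; a real system would use proper acoustic feature extraction, and all names here are assumptions:

```python
# Spectrum-matching sketch: compute the magnitude spectrum of the acquired
# signal and return the stored word whose reference spectrum is closest.
import cmath
import math

def magnitude_spectrum(samples):
    """Naive DFT magnitude (first half of the bins, real input assumed)."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2)]

def closest_word(samples, templates):
    """templates: word -> reference magnitude spectrum of equal length."""
    spec = magnitude_spectrum(samples)
    def dist(ref):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(spec, ref)))
    return min(templates, key=lambda w: dist(templates[w]))
```

The O(n²) DFT is for clarity only; any FFT routine would replace it in practice.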
Within the device 1, the communication between the acquisition means 10, the reproduction means 11 and the processing means 12 therefore allows transforming an incomprehensible, phonetically and grammatically incorrect input audio signal 21 into an output audio signal containing only meaningful words.
As anticipated, the device 1 can be integrated with connection means with operating units, for example cables carrying the signal generated by the reproduction means 11 to voice-activated devices.
Similarly, it is possible to provide for transmission / reception units of the device 1 with remote units, for example to perform backups or updates of the storage unit 13.
According to a possible variant embodiment, it is possible to provide for integrating the device 1 with video signal acquisition means, for example a camera, so as to also acquire the gestures and facial expressions of the user 2.
As previously described, the storage unit 13 can be customized by storing the sounds and words pronounced by the user 2.
In a similar way it is therefore possible to store particular gestures or movements of the mouth and pair them with meaningful words .
Storing gestures and facial expressions can work synergically with the acquired input audio signals 21, so as to confirm the correct pairing with meaningful words.
Every time the user 2 wants to indicate a specific word to which a precise sound and a precise gesture or movement of the mouth correspond, the pairing may be validated whenever both the sound and the gesture stored within the storage unit 13 are present.
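The audio-plus-gesture validation just described amounts to accepting a pairing only when both channels point to the same stored word. A minimal sketch, with all identifiers assumed for illustration:

```python
# Multimodal validation sketch: a word is accepted only when the recognised
# sound AND the recognised gesture both map to that same stored word.
SOUND_TO_WORD = {"aa-ua": "acqua"}
GESTURE_TO_WORD = {"hand_to_mouth": "acqua"}

def validated_word(sound_id, gesture_id):
    """Return the word only if both channels agree, else None."""
    word = SOUND_TO_WORD.get(sound_id)
    if word is not None and GESTURE_TO_WORD.get(gesture_id) == word:
        return word
    return None  # no validation without agreement of both channels
```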
The method of operation of the device 1 just described is illustrated in Figure 2.
According to the variant embodiment illustrated in Figure 2 of the assistance method for disabled people object of the present invention, a system mapping step is provided, relating to the creation of pairing tables 60.
Such creation preferably occurs through a recording 601 of sounds emitted by the user and a pairing 602 of such sounds with meaningful words .
The pairing step 602 can occur, for example, through the disabled person's assistant, specifically pre-instructed, or through specialized staff who help the disabled user to express himself / herself correctly, or by showing the user a list of words or images so that he / she can autonomously indicate what he / she meant with the emitted sound.
Such mapping step is obviously particularly important and allows creating schemes, databases and pairing tables customized for each user.
Once the mapping step is completed, the acquisition step 61 occurs, wherein the acquisition means 10 record the input audio signal 21 emitted by the user.
A processing step 62 is then carried out on the input audio signal.
During such processing step, the pairing tables generated in step 60 are used: these tables contain all the sounds that the user emits, each paired with meaningful words, preferably in text format.
For this reason, the processing step 62 comprises a step 621 wherein the input audio signal is paired with a meaningful word in text format.
The processing means, with the assistance of the pairing tables stored within the storage unit 13 and created in step 60, search for the acquired signal among the pre-recorded signals and identify the paired meaningful word.
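The search among pre-recorded signals can be sketched as a nearest-match lookup. This is an illustration only: the patent does not prescribe a matching algorithm, so the Euclidean distance over small feature vectors and the `max_distance` threshold below are assumptions (a real system would use proper acoustic features, e.g. MFCCs).

```python
def pair_input_signal(input_features, pairing_table, max_distance=0.5):
    """Step 621 sketch: find the pre-recorded signal closest to the input
    and return its paired meaningful word, or None if nothing is close
    enough to count as a match.

    pairing_table: {feature_tuple: meaningful_word}
    """
    def distance(a, b):
        # Plain Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    best_word, best_dist = None, float("inf")
    for stored_features, word in pairing_table.items():
        d = distance(input_features, stored_features)
        if d < best_dist:
            best_word, best_dist = word, d
    return best_word if best_dist <= max_distance else None
```

Returning `None` for signals far from every stored sound reflects the personal nature of step 621: only sounds previously mapped for that specific user produce a meaningful word.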
Once this is done, a step 622 is provided, pairing such textual word with a sound expressing the meaningful word.
It is evident that step 621 is personal and specific for each user, as well as achievable thanks to the mapping previously described.
Step 622, on the other hand, relies on known databases that automatically "read" text words and transform them into phonemes.
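Step 622 can be sketched as a lookup in a pronunciation database. The tiny dictionary below, with ARPAbet-style symbols, is purely illustrative; a real system would rely on a full lexicon (such as a CMUdict-like resource) or a text-to-speech engine, which the patent only refers to generically as "known databases".

```python
# Illustrative pronunciation database: text word -> phoneme sequence.
PHONEME_DB = {
    "water": ["W", "AO", "T", "ER"],
    "bread": ["B", "R", "EH", "D"],
}

def word_to_phonemes(word):
    """Step 622 sketch: transform a meaningful text word into the phoneme
    sequence used to synthesize the output audio signal 3."""
    return PHONEME_DB.get(word.lower())
```

Unlike step 621, this lookup is user-independent: the same text word always yields the same phonemes, regardless of which sound the user originally emitted.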
Once step 622 is completed, the actual reproduction 63 of the input audio signal 21 transformed into meaningful words can occur.
Between step 622 and step 63, timing means may be provided to measure how much time elapses between an audio signal and the next one.
In this way it is possible to detect overly long pauses and thus infer the length of the sentences, so that the periods can be reproduced as fluidly as possible, making the conversation even easier.
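The timing means can be sketched as grouping consecutive words into sentences by the pauses between them. The `max_pause` threshold and the timestamped-word representation are assumptions introduced for this example, not values from the patent.

```python
def group_into_sentences(timed_words, max_pause=1.5):
    """Timing-means sketch: a pause longer than max_pause seconds between
    two input audio signals closes the current sentence, so that each
    sentence can then be reproduced as one fluid period in step 63.

    timed_words: list of (timestamp_seconds, word) pairs in order.
    """
    sentences, current, last_t = [], [], None
    for t, word in timed_words:
        if last_t is not None and t - last_t > max_pause:
            sentences.append(current)  # long pause: close the sentence
            current = []
        current.append(word)
        last_t = t
    if current:
        sentences.append(current)
    return sentences
```

For example, three words emitted within a second of each other form one sentence, while a word arriving after a long pause starts a new one, letting the reproduction step speak whole periods rather than isolated words.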
Finally, it is specified that the device 1 of Figure 1 can be equipped with a display showing the meaningful words in text format resulting from step 621.
According to such configuration, the device 1 may therefore provide a vocabulary displayable on the screen, variable on the basis of the language chosen during the initial setting step.
All the words composing the vocabulary will therefore preferably be displayable, with the possibility of presenting them in several order types and, within the chosen order, in several list types.
Advantageously, it will also be possible to select each word present, for example through a flag, so as to identify which words of that vocabulary have already been linked to an incorrect vocal word registered by the disabled user.
By clicking on the word, it may be possible either to listen to the recording of the linked word made by the disabled user, or to delete that recording.
In the case of multiple incorrect vocal words recorded by the disabled person and linked to a single correct text word, in addition to the main flag for selecting the word, a secondary flag may be provided, suitable for identifying and, possibly, listening to each single recording linked to said meaningful word.
For example, by sorting so as to show only the words not yet selected with the main flag, it will be much simpler and faster to search for those text words not yet linked to recordings of the disabled user.
Preferably, the system object of the present invention provides for the possibility of ordering the words in an order suggested by the manufacturer of the device 1, since these are already pre-arranged in lists of primary communicative importance, possibly suggesting to the disabled user's assistant which words are normally the most needed and used on a daily basis.
As previously described, the words displayed in the chosen order will always be accompanied by a flag indicating whether they are already linked to a recording of the disabled user.
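The flag-based vocabulary view described above can be sketched as follows. The data layout (a word with its list of linked recording identifiers) and the function name are assumptions for illustration; the patent only specifies the main flag, the per-recording secondary flags, and the "unlinked first" ordering.

```python
def sort_vocabulary(entries, unlinked_first=True):
    """Vocabulary-display sketch: attach the main flag (word already
    linked to at least one recording?) and optionally sort unlinked
    words first so they are faster to find and record.

    entries: list of (word, recordings), where recordings is the list of
    recording identifiers linked to that word (possibly empty).
    Returns (word, main_flag, recordings) tuples.
    """
    flagged = [(word, bool(recs), recs) for word, recs in entries]
    if unlinked_first:
        # False sorts before True, so unlinked words come first,
        # alphabetically within each group.
        flagged.sort(key=lambda e: (e[1], e[0]))
    return flagged
```

Each entry's `recordings` list plays the role of the secondary flags: one item per recording linked to the same meaningful word, available for listening or deletion.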
While the invention is susceptible of various modifications and alternative constructions, some preferred embodiments have been shown in the drawings and described in detail.
It should be understood, however, that there is no intention to limit the invention to the specific illustrated embodiment, but, on the contrary, it intends to cover all modifications, alternative constructions, and equivalents which fall within the scope of the invention as defined in the claims.
The use of "for example", "etc.", "or" indicates non-exclusive alternatives without limitation unless otherwise indicated.
The use of "include" means "includes, but is not limited to" unless otherwise stated.

Claims

1. Assistance system (2) for users having communicative disorder comprising acquisition means
(10) of input audio signals (21) emitted by at least one of said users (2) and reproduction means (11) of output audio signals (3) ,
characterized in that
processing means (12) of said input audio signals (21) are present, which processing means (12) are configured in such a way as to pair with each input audio signal (21) one or more meaningful words,
said processing means (12) comprising at least one storage unit (13) within which unit (13) there are databases containing pairing tables of the input audio signals (21) with meaningful words,
said processing means (12) and/or said reproduction means (11) comprising means for generating an output audio signal (3) relating to said meaningful words.
2. System according to claim 1, wherein said processing means (12) comprise a user interface, which user interface comprises an audio signal acquisition unit and means for pairing said audio signals with meaningful words.
3. System according to one or more of the preceding claims, wherein said acquisition means (10) , said processing means (12) and said reproduction means
(11) are inserted within a user device (1) , which user device has communication means with one or more operating units.
4. System according to claim 3, wherein said user device (1) comprises a transmission / reception unit suitable for the wireless transmission / reception of data with one or more remote units.
5. Assistance method for users having communicative disorder, comprising the following steps :
a) acquisition (61) of input audio signals emitted by at least one of said users,
c) reproduction (63) of output audio signals,
characterized in that
a step b) relative to the processing (62) of said input audio signals is provided, which step b) comprises the following substeps:
b1) pairing (621) each input audio signal with at least one meaningful word,
b2) generating an output audio signal relating to said meaningful words.
6. Method according to claim 5, wherein a step preceding step a) and relating to the creation (60) of pairing tables of input audio signals with meaningful words is provided.
7. Method according to claim 6, wherein said pairing table creation step (60) is specifically performed for each user.
8. Method according to one or more of the preceding claims, wherein a step d) relating to the generation of control signals for controlling one or more operating units is provided.
PCT/IB2019/058894 2018-10-19 2019-10-18 Assistance system and method for users having communicative disorder WO2020079655A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IT102018000009607A IT201800009607A1 (en) 2018-10-19 2018-10-19 System and method of help for users with communication disabilities
IT102018000009607 2018-10-19

Publications (1)

Publication Number Publication Date
WO2020079655A1 true WO2020079655A1 (en) 2020-04-23

Family

ID=65409165

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2019/058894 WO2020079655A1 (en) 2018-10-19 2019-10-18 Assistance system and method for users having communicative disorder

Country Status (2)

Country Link
IT (1) IT201800009607A1 (en)
WO (1) WO2020079655A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6546082B1 (en) * 2000-05-02 2003-04-08 International Business Machines Corporation Method and apparatus for assisting speech and hearing impaired subscribers using the telephone and central office
US20030223455A1 (en) * 2002-05-29 2003-12-04 Electronic Data Systems Corporation Method and system for communication using a portable device
WO2007134494A1 (en) * 2006-05-16 2007-11-29 Zhongwei Huang A computer auxiliary method suitable for multi-languages pronunciation learning system for deaf-mute
US20110208523A1 (en) * 2010-02-22 2011-08-25 Kuo Chien-Hua Voice-to-dactylology conversion method and system
WO2014002349A1 (en) * 2012-06-29 2014-01-03 テルモ株式会社 Information processing device and information processing method
US20150379896A1 (en) * 2013-12-05 2015-12-31 Boe Technology Group Co., Ltd. Intelligent eyewear and control method thereof


Also Published As

Publication number Publication date
IT201800009607A1 (en) 2020-04-19


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19801420

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19801420

Country of ref document: EP

Kind code of ref document: A1