EP1745466A2 - Improved control text reading device for the visually impaired - Google Patents

Improved control text reading device for the visually impaired

Info

Publication number
EP1745466A2
Authority
EP
European Patent Office
Prior art keywords
text
textual data
data
group
sound
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP05762443A
Other languages
German (de)
French (fr)
Inventor
Claude Liard
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CNAM Conservatoire National des Arts et Metiers
Original Assignee
CNAM Conservatoire National des Arts et Metiers
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CNAM Conservatoire National des Arts et Metiers
Publication of EP1745466A2
Status: Withdrawn

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B21/00 Teaching, or communicating with, the blind, deaf or mute
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B21/00 Teaching, or communicating with, the blind, deaf or mute
    • G09B21/001 Teaching or communicating with blind persons
    • G09B21/006 Teaching or communicating with blind persons using audible presentation of the information
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems

Definitions

  • the invention relates to the rendering of spoken language under the control of manual devices, in particular for the blind or visually impaired.
  • the first approach is to use a refreshable (ephemeral) Braille display; the second relies on a text-to-speech synthesizer.
  • the representation is spatial in the first case and temporal in the second.
  • the disadvantage of the Braille device is above all its very high cost, not to mention that the user must master the Braille representation, which is far from the case for all blind people, in particular the elderly.
  • a text-to-speech device costs roughly five times less than a refreshable Braille display.
  • the invention aims, in general, to propose a manually controlled voice reading device with which it is easier to move within the text while remaining aware of one's current reading location.
  • a device for sound reproduction of textual data comprising a manual control interface associated with means for addressing the textual data in a group of such textual data as well as means for sound emission, characterized in that the sound emission means are stereophonic means also including means for sound positioning of the text data in space, according to a positioning representative of the position of the text data in said group of text data.
  • FIG. 1 shows a device according to a preferred variant of the invention
  • FIG. 2 is a block diagram of the same device
  • FIG. 3 is a block diagram of a stand-alone variant of the invention
  • the device of FIG. 1 essentially comprises three hardware components.
  • the first hardware component is a pair of loudspeakers 10, 12 positioned on either side of the rest of the device.
  • the second hardware component is a touch sensor 20, in the form of a ruler extending horizontally in front of the user and intended to be traversed by the latter's finger to provide a signal representative of the position of the finger thereon.
  • the third hardware component is a computer screen 30 of the usual type.
  • the control and processing means associated with these three hardware components are shown in FIG. 2. Among them is a control electronics 100 directly coupled to the sensitive ruler 20. This control electronics 100 is directly connected to a processor 200 whose role is to exploit the delivered signals and transform them into control signals.
  • this processor (here in the form of a computer) thus drives a text synthesizer 300 whose role is to transform the textual data supplied to it by the microprocessor 200 into speech signals.
  • between the text synthesizer 300 and the loudspeakers 10, 12 is a stereophonic module 400 whose role is to position the spoken output in the space surrounding the user, here at a location chosen on a line connecting the two loudspeakers 10, 12.
  • the computer 100 also controls a video screen 500 so that the text being spoken is displayed thereon.
  • the movement of the finger on the sensitive ruler 20 is exploited so as to transform this movement into the pronunciation of a text, in a manner that is not only audible but also rendered at a chosen stereophonic position.
  • the function of the sensitive ruler 20 is to allow the user to move around in the text to be spoken.
  • a line is being spoken, and the finger position identified on the sensitive ruler is transformed into an identification of the word to be spoken within the line considered, in accordance with the selected reading mode 32 (word); a minimal sketch of such a mapping is given after this list.
  • the speed of pronunciation can be increased or decreased depending on the speed of movement of the finger on the sensitive sensor.
  • in addition to manual movement within the text (here the sentence), this device allows the user to perceive a sound displacement faithful to the position of the word in the sentence.
  • the text element, here the word, thus identified by means of the sensitive ruler is rendered vocally at a position located between the two loudspeakers which corresponds to the position it occupies in the portion of text considered (here the line considered).
  • the sensitive ruler and the microprocessor 200 can also address another type of textual datum within another type of text portion: a character within a word or a sentence or, on a larger scale, a line or a sentence within a paragraph or a page.
  • to switch between reading modes, a corresponding control member is positioned to the left of the sensitive ruler, in the form of three keys 31, 32 and 33 corresponding respectively to the "character", "word" and "line" modes.
  • to move from one line to another, a control member is positioned to the right of the sensitive ruler, in the form of two keys 35 and 36 corresponding respectively to moving up and down within a paragraph.
  • the sound element thus generated ultimately constitutes a stereophonic virtual sound source (SSV).
  • this positioning proves decisive for reading comfort and efficiency; it also makes it possible to modulate the speed of movement within the text and to move quickly to a desired portion of text. It further allows the user to organize his movement within the text more easily, in particular to anticipate the end of the line and prepare to press key 36, which moves to the next line.
  • this device is in particular intended for visually impaired or blind people and can, as a variant (FIG. 3), be produced without a viewing screen, the only reading interface then being the sound reading device formed by the sensitive sensor 20, the text synthesizer 300 and the stereophonic module 400.
  • the movement of the finger preferably also moves a cursor along the line.
  • the use of a computer makes it possible in particular to search for particular portions of text, for example to reread previously heard passages.
  • the data present in the computer 200, typically alphanumeric, can however be stored in a simpler microprocessor-based electronic device 200 (FIG. 3).
  • the stored data is updated by the microprocessor 200.
  • in a variant, the device comprises means that use the sensitive ruler to control other functions as well.
  • the sensitive ruler 20 is advantageously used to interpret a pressure force applied to it: a medium pressure on the ruler at a given position causes the last spoken text element to be repeated.
  • the device includes means for visually signaling the textual data addressed within the group of displayed textual data.
  • the device is designed to allow the textual datum spoken under manual control (20) to be moved in the direction opposite to the usual reading direction.
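The bullets above describe the mapping from finger position to the addressed text element only functionally. The sketch below (Python, with hypothetical names; the patent specifies no implementation) shows one straightforward realization in which the ruler is divided evenly among the elements of the current group, and the mode keys 31, 32 and 33 simply change the granularity passed as `mode`:

```python
# Hypothetical sketch: mapping a normalized finger position on the sensitive
# ruler (0.0 = left end, 1.0 = right end) to the text element addressed in the
# current reading mode. Names and granularities are illustrative only.

def address_element(text_group, position, mode):
    """Return (index, element) of the element under the finger.

    text_group : the current group of textual data, e.g. a line of text in
                 "word" mode or a paragraph (with newlines) in "line" mode.
    position   : normalized finger position on the ruler, in [0.0, 1.0].
    mode       : "character", "word" or "line".
    """
    if mode == "character":
        elements = list(text_group)            # characters of a word or line
    elif mode == "word":
        elements = text_group.split()          # words of a line
    elif mode == "line":
        elements = text_group.splitlines()     # lines of a paragraph
    else:
        raise ValueError(f"unknown reading mode: {mode}")

    if not elements:
        return None, ""
    # Clamp and quantize: the ruler is divided evenly among the elements.
    position = min(max(position, 0.0), 1.0)
    index = min(int(position * len(elements)), len(elements) - 1)
    return index, elements[index]


# Example: finger at 40% of the ruler, line read word by word.
idx, word = address_element("the quick brown fox jumps", 0.40, "word")
print(idx, word)   # -> 2 brown
```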

Abstract

The invention relates to a device for the sound reproduction of textual data, comprising a manual control interface (20) associated with means (200) for addressing a textual datum within a group of textual data, as well as sound emission means, characterized in that the manual control interface can provide information on the position and pressure force of the finger on it, and the sound emission means (10, 12, 300, 400) are stereophonic means also including means (400) for positioning the textual datum in space, according to a positioning representative of the position of the textual datum within the group of textual data and according to said information.

Description

Text reading device for the visually impaired with improved control
The invention relates to the rendering of spoken language under the control of manual devices, in particular for blind or visually impaired persons.

There are currently two main approaches for reading text on a computer screen. The first is to use a refreshable (ephemeral) Braille display; the second relies on a text-to-speech synthesizer. The representation is spatial in the first case and temporal in the second. The disadvantage of the Braille device is above all its very high cost, not to mention that the user must master the Braille representation, which is far from the case for all blind people, in particular the elderly. A text-to-speech device costs roughly five times less than a refreshable Braille display. The drawback of text-to-speech devices lies in the auditory saturation of the user: the system vocalizes all the information present on the screen, with no easy way to filter out the information of no interest to the user. Such a speech synthesis device was notably proposed in document FR 2 612 312, where the device was controlled by means of a keyboard. Besides auditory saturation, such speech synthesis devices make it difficult to keep track of one's position within a text. In other words, with these devices the visually impaired person is limited to a linear, continuous reading of the text, without knowing where he or she is within it. Modulating the reading speed according to the interest of a given part of the text is, moreover, practically impossible for the user.

The invention aims, in general, to propose a manually controlled voice reading device with which it is easier to move within the text while remaining aware of one's current reading location.

This object is achieved according to the invention by a device for the sound reproduction of textual data, comprising a manual control interface associated with means for addressing a textual datum within a group of such textual data, as well as sound emission means, characterized in that the sound emission means are stereophonic means also including means for positioning the textual datum acoustically in space, according to a positioning representative of the position of the textual datum within said group of textual data.

Other characteristics, objects and advantages of the invention will become apparent on reading the following detailed description, given with reference to the appended figures, in which: Figure 1 shows a device according to a preferred variant of the invention; Figure 2 is a block diagram of the same device; Figure 3 is a block diagram of a stand-alone variant of the invention.

The device of Figure 1 essentially comprises three hardware components. The first is a pair of loudspeakers 10, 12 positioned on either side of the rest of the device. The second is a touch sensor 20, in the form of a ruler extending horizontally in front of the user and intended to be swept by the user's finger so as to provide a signal representative of the finger's position on it. The third is a computer screen 30 of the usual type.
The control and processing means associated with these three hardware components are shown in Figure 2. Among them is a control electronics 100 directly coupled to the sensitive ruler 20. This control electronics 100 is directly connected to a processor 200 whose role is to exploit the delivered signals and transform them into control signals. This processor (here in the form of a computer) thus drives a text synthesizer 300 whose role is to transform the textual data supplied to it by the microprocessor 200 into speech signals. Between the text synthesizer 300 and the aforementioned loudspeakers 10, 12 is a stereophonic module 400 whose role is to position the spoken output in the space surrounding the user, here at a location chosen on a line connecting the two loudspeakers 10, 12. Besides the text synthesizer 300 and the stereophonic module 400, the computer 100 also controls a video screen 500 so that the text being spoken is displayed on it.
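Figure 2 defines these components only by their roles. The sketch below is a hypothetical software wiring of that chain (the class and method names are assumptions, not an API disclosed by the patent), showing how one sample from the ruler could be turned into a spoken, panned word:

```python
# Illustrative wiring of the Figure 2 block diagram. The classes below are
# placeholders: the patent defines the components functionally, not as code.

from dataclasses import dataclass
from typing import Protocol


@dataclass
class RuleEvent:
    """One sample delivered by the control electronics (100)."""
    position: float   # normalized finger position on the ruler, 0.0 to 1.0
    pressure: float   # normalized pressure, 0.0 to 1.0


class TextSynthesizer(Protocol):          # role of block 300
    def speak(self, text: str) -> bytes: ...


class StereoModule(Protocol):             # role of block 400
    def play(self, audio: bytes, pan: float) -> None: ...


def handle_event(event: RuleEvent, line: str,
                 tts: TextSynthesizer, stereo: StereoModule) -> None:
    """Role of the processor (200) in 'word' mode: turn a ruler sample into
    a spoken word placed between the two loudspeakers (10, 12)."""
    words = line.split()
    if not words:
        return
    index = min(int(event.position * len(words)), len(words) - 1)
    pan = index / max(len(words) - 1, 1)   # 0.0 = left speaker, 1.0 = right
    stereo.play(tts.speak(words[index]), pan)


if __name__ == "__main__":
    # Trivial stand-ins for the synthesizer and stereo module, for a dry run.
    class PrintTTS:
        def speak(self, text):
            print(f"speaking: {text}")
            return b""

    class PrintStereo:
        def play(self, audio, pan):
            print(f"pan = {pan:.2f}")

    handle_event(RuleEvent(position=0.9, pressure=0.1),
                 "reading with the finger", PrintTTS(), PrintStereo())
    # -> speaking: finger / pan = 1.00
```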
Starting from such an organization of the various modules, the movement of the finger on the sensitive ruler 20 is exploited so as to transform this movement into the pronunciation of a text, in a manner that is not only audible but also rendered at a chosen stereophonic position. The function of the sensitive ruler 20 is to allow the user to move within the text to be spoken. Thus, in the example illustrated in Figure 1, a line is being spoken, and the finger position identified on the sensitive ruler is turned into the identification of the word to be spoken within the line considered, in accordance with the selected reading mode 32 (word). The speed of pronunciation can be increased or decreased depending on the speed of movement of the finger on the sensor. Besides manual movement within the text (here the sentence), the device lets the user perceive a sound displacement faithful to the position of the word in the sentence: the text element, here the word, identified by means of the sensitive ruler, is rendered vocally at a position located between the two loudspeakers which corresponds to the position it occupies in the portion of text considered (here the line considered). The sensitive ruler and the microprocessor 200 can also address another type of textual datum within another type of text portion: a character within a word or a sentence or, on a larger scale, a line or a sentence within a paragraph or a page. In the present variant, to switch to a character, word or line reading mode, a corresponding control member is positioned to the left of the sensitive ruler, in the form of three keys 31, 32 and 33 corresponding respectively to the "character", "word" and "line" modes. To move from one line to another within a paragraph, a control member is positioned to the right of the sensitive ruler, in the form of two keys 35 and 36 corresponding respectively to moving up and down within the paragraph. The device thus renders a sound image which, through its stereophonic positioning, shows the user where the text element lies relative to the rest of the text materialized by the sensitive ruler. Thanks to the stereophonic emission, the text enjoys a representation that is both spatial and temporal. The sound element thus generated ultimately constitutes a stereophonic virtual sound source (SSV), generated here by the text synthesizer 300.
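The description does not state how the stereophonic module 400 actually builds the virtual sound source between the loudspeakers 10, 12. One common technique consistent with it is constant-power amplitude panning; the sketch below is such an assumed realization (the function name and the panning law are not taken from the patent):

```python
import math


def pan_gains(word_index: int, word_count: int):
    """Left/right gains placing a word between two loudspeakers.

    The word's position in the line (0 = first word, word_count - 1 = last)
    is mapped to an angle on a quarter circle so that the perceived source
    moves from the left speaker to the right one while the total acoustic
    power stays roughly constant (constant-power panning). This is only one
    plausible realization of the stereophonic positioning described above.
    """
    if word_count <= 1:
        x = 0.5                                  # single word: centre it
    else:
        x = word_index / (word_count - 1)        # 0.0 = left, 1.0 = right
    theta = x * math.pi / 2
    return math.cos(theta), math.sin(theta)      # (left gain, right gain)


# Example: the third word of a five-word line sits in the middle of the image.
left, right = pan_gains(2, 5)
print(round(left, 3), round(right, 3))           # -> 0.707 0.707
```

With such a law the loudness stays roughly constant as the word index sweeps from the left to the right loudspeaker, which matches the goal of letting the listener feel the word's place in the line.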
This positioning proves decisive for reading comfort and efficiency; it also makes it possible to modulate the speed of movement within the text and to move quickly to a desired portion of text. It further allows the user to organize his movement within the text more easily, in particular to anticipate the end of the line and prepare to press key 36, which moves to the next line. The device is in particular intended for visually impaired or blind people and can, as a variant (Figure 3), be produced without a viewing screen, the only reading interface then being the sound reading device formed by the sensitive sensor 20, the text synthesizer 300 and the stereophonic module 400. When a display screen 500 is present, the movement of the finger preferably also moves a cursor along the line. Using a computer makes it possible, in particular, to search for particular portions of text, for example to reread previously heard passages. The data present in the computer 200, typically alphanumeric, may however be stored in a simpler microprocessor-based electronic device 200 (Figure 3). In this case, as with the computer, the stored data are updated by the microprocessor 200. In a variant, the device comprises means that use the sensitive ruler to control other functions as well. Thus, the sensitive ruler 20 is advantageously used to interpret a pressure force applied to it: a medium pressure on the ruler at a given position causes the last spoken text element to be repeated, while a strong pressure gives access to a contextual and/or environmental menu or any other functionality. Advantageously, the device includes means for visually indicating the addressed textual datum within the group of displayed textual data. Advantageously, the device is designed to allow the textual datum spoken under manual control (20) to be moved in the direction opposite to the usual reading direction.
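The description distinguishes only a "medium" and a "strong" pressure on the ruler. The sketch below makes that dispatch concrete under assumed normalized thresholds (the values and names are illustrative, not specified by the patent):

```python
# Hypothetical dispatch of the pressure information delivered by the sensitive
# ruler (20). The thresholds are assumptions; the patent only distinguishes a
# medium press (repeat) from a strong press (contextual/environmental menu).

MEDIUM_PRESSURE = 0.4   # assumed normalized threshold for "medium" pressure
STRONG_PRESSURE = 0.8   # assumed normalized threshold for "strong" pressure


def handle_pressure(pressure: float, repeat_last, open_context_menu) -> None:
    """pressure is a normalized force in [0.0, 1.0]; the two callbacks stand
    for the actions named in the description."""
    if pressure >= STRONG_PRESSURE:
        open_context_menu()        # strong press: contextual/environmental menu
    elif pressure >= MEDIUM_PRESSURE:
        repeat_last()              # medium press: repeat last spoken element
    # lighter contact: no extra action, normal reading continues


# Example usage with trivial stand-ins for the real actions.
handle_pressure(0.55,
                repeat_last=lambda: print("repeat last element"),
                open_context_menu=lambda: print("open menu"))
# -> repeat last element
```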

Claims

1. Device for the sound reproduction of textual data, comprising a manual control interface (20) associated with means (200) for addressing a textual datum within a group of such textual data, as well as sound emission means, characterized in that the manual control interface is capable of providing information on the position and pressure force of a finger on it, and the sound emission means (10, 12, 300, 400) are stereophonic means also including means (400) for positioning the textual datum in space, according to a positioning representative of the position of the textual datum within said group of textual data and according to said information.
2. Device according to claim 1, characterized in that the manual control interface (20) is a sensitive ruler.
3. Device according to any one of the preceding claims, characterized in that the pressure force information is capable of causing the audible repetition of a textual datum and/or access to a contextual and/or environmental menu.
4. Device according to any one of the preceding claims, characterized in that the addressed textual datum is a word and the group of textual data is a line of text.
5. Device according to any one of the preceding claims, characterized in that the addressed textual datum is a line of text and the group of textual data is a group of lines of text.
6. Device according to any one of the preceding claims, characterized in that the addressed textual datum is a character and in that the stereophonic positioning means (10, 12, 300, 400) reproduce the position of this character within a given word, line or sentence.
7. Device according to any one of the preceding claims, characterized in that it comprises a computer screen (500) on which the group of textual data and the addressed textual datum are displayed.
8. Device according to the preceding claim, characterized in that it includes means (200, 300) for visually indicating the addressed textual datum within the group of displayed textual data.
9. Device according to any one of the preceding claims, characterized in that it comprises means (31, 32, 33) for selecting between different reading modes, among which a character mode, a word mode and a line or sentence mode, in which said addressed textual data are respectively a character, a word and a line or sentence.
10. Device according to any one of the preceding claims, characterized in that the stereophonic emission device (10, 12, 300, 400) comprises two loudspeakers (10, 12) arranged substantially on either side of the user.
11. Device according to any one of the preceding claims, characterized in that the device is designed to allow the textual datum spoken under manual control (20) to be moved in the direction opposite to the usual reading direction.
EP05762443A 2004-04-27 2005-04-27 Improved control text reading device for the visually impaired Withdrawn EP1745466A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR0404438A FR2869443B1 (en) 2004-04-27 2004-04-27 TEXT READING DEVICE FOR THE BLIND WITH IMPROVED CONTROL.
PCT/FR2005/001038 WO2005106845A2 (en) 2004-04-27 2005-04-27 Improved control text reading device for the visually impaired

Publications (1)

Publication Number Publication Date
EP1745466A2 true EP1745466A2 (en) 2007-01-24

Family

ID=34945556

Family Applications (1)

Application Number Title Priority Date Filing Date
EP05762443A Withdrawn EP1745466A2 (en) 2004-04-27 2005-04-27 Improved control text reading device for the visually impaired

Country Status (3)

Country Link
EP (1) EP1745466A2 (en)
FR (1) FR2869443B1 (en)
WO (1) WO2005106845A2 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11081100B2 (en) * 2016-08-17 2021-08-03 Sony Corporation Sound processing device and method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2612312B1 (en) * 1987-03-11 1992-01-03 Inst Nat Sante Rech Med AUDIO-DIGITAL AND TOUCH COMPONENT AND PORTABLE COMPUTING DEVICE HAVING APPLICATION

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2005106845A3 *

Also Published As

Publication number Publication date
FR2869443B1 (en) 2006-11-17
WO2005106845A2 (en) 2005-11-10
FR2869443A1 (en) 2005-10-28
WO2005106845A3 (en) 2006-02-09

Similar Documents

Publication Publication Date Title
FR2884023A1 (en) DEVICE FOR COMMUNICATION BY PERSONS WITH DISABILITIES OF SPEECH AND / OR HEARING
US8645121B2 (en) Language translation of visual and audio input
US20140337037A1 (en) Systems and Methods for Speech Command Processing
Robitaille The illustrated guide to assistive technology and devices: Tools and gadgets for living independently
US20060257827A1 (en) Method and apparatus to individualize content in an augmentative and alternative communication device
FR2885251A1 (en) Character, shape, color and luminance recognition device for e.g. blind person, has control unit permitting blind person to select color or character information for reading text, identifying type of document, luminance, color and shape
WO2009071795A1 (en) Automatic simultaneous interpretation system
Ellis What Counts as Scholarship in Communication? An Autoethnographic Response.
KR102251832B1 (en) Electronic device and method thereof for providing translation service
EP3412036B1 (en) Method for assisting a hearing-impaired person in following a conversation
EP1745466A2 (en) Improved control text reading device for the visually impaired
EP3149968A1 (en) Method for assisting with following a conversation for a hearing-impaired person
FR2899097A1 (en) Hearing-impaired person helping system for understanding and learning oral language, has system transmitting sound data transcription to display device, to be displayed in field of person so that person observes movements and transcription
JP7057455B2 (en) Programs, information processing methods, terminals
CA2178925A1 (en) Method and device for converting a first voice message in a first language into a second message in a predetermined second language
JP2019153160A (en) Digital signage device and program
FR2788151A1 (en) Mobile phone mouse converter for display screen for interpretation of maps has phone used as a mouse for movement over the map
FR2901396A1 (en) PORTABLE OR INTERACTIVE AND UNIVERSAL VOCAL OR NON-VOICE COMMUNICATION DEVICE FOR DEFICIENTS OR DISABLED PERSONS OF THE WORD AND MUTE
Hakkinen et al. Effective communication of warnings and critical information: application of accessible design methods to auditory warnings
FR2702582A1 (en) Portable apparatus delivering voice messages, especially for commenting upon the works in an exhibition
FR2860613A3 (en) Digital photograph display method for computer, displays digital image in frame, and plays associated digitally stored music data
FR2791620A1 (en) Selective interface for audible alarms on vehicle, comprises event detector, generator of audible warnings, database storing pre-recorded audible warnings and means to select warnings for events
Matthiessen Seeing and hearing directly
Caves et al. Interface Design
WO2002045052A1 (en) Multipurpose portable phonic device

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20061109

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): BE DE DK ES GB IT

RBV Designated contracting states (corrected)

Designated state(s): BE DE DK ES GB IT

DAX Request for extension of the european patent (deleted)
RBV Designated contracting states (corrected)

Designated state(s): BE DE DK ES GB IT

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20081101