EP0958570A1 - Device for phonological training - Google Patents

Device for phonological training

Info

Publication number
EP0958570A1
Authority
EP
European Patent Office
Prior art keywords
sound
user
presentation
produced
colours
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP97952152A
Other languages
German (de)
English (en)
French (fr)
Inventor
Ewa Braun
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Publication of EP0958570A1

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00: Electrically-operated educational appliances
    • G09B5/04: Electrically-operated educational appliances with audible presentation of the material to be studied
    • G09B19/00: Teaching not covered by other main groups of this subclass
    • G09B19/04: Speaking

Definitions

  • the subject invention concerns a device for phonological training, comprising sound reception means, operating means for controlling the device, interpreting and processing means, and presentation means, said presentation means comprising a display screen divided into a plurality of windows for simultaneous presentation of a graphic reproduction of the desired sound as well as of the sound produced by the user and received by the sound reception means, and of an animated reproduction of speech organs.
  • The overall problem with which dyslexia patients are faced is to relate a sound to the corresponding letter.
  • the dyslexic individual who does not automatically achieve developed phonological awareness will lack the fundamental requirements for learning to read and write. He/she will have difficulties in recognising and defining speech sounds.
  • Several studies in this respect support the method involving presentation of the sound by visualisation thereof.
  • dyslexia is often linked with the possession of creative abilities manifesting themselves for instance in excellent visual perception.
  • the method of creating a link between sound and letter by means of visualisation therefore offers a pedagogical possibility of employing the dyslexic individual's own resources.
  • the method could also advantageously be used in many other connections, among them for elementary school pupils having a particular need for phonological training, for instance pupils having concentration difficulties.
  • the method is equally useful with respect to individuals who have impaired hearing or are deaf.
  • Another possible use of the method is for the rehabilitation of individuals in need of rehabilitation following a stroke or an accident to improve their abilities of speech, reading and writing.
  • the invention relates to a device according to which the above method may be practised automatically, allowing the student to use the device essentially on his own for practising purposes.
  • Such automated speech teaching devices are previously known, e.g. from US 2 533 010.
  • the device described in that publication comprises a sound-recording microphone registering the sound produced by the user.
  • the sound is then presented in graphic form as frequency curves displayed on a screen.
  • a cross-section through the oral cavity and a picture of the lips are displayed to show how the speech organs should preferably be positioned and moved to produce a word, which is also displayed on the screen.
  • a curve representing a teacher's standard solution is also displayed, allowing the user to compare the curve produced by him with the standard.
  • WO 94/17508 discloses a device comprising a microphone for recording sound, a computer for transforming sound into a curve, and a screen for displaying the curve thus obtained, a curve representing a teacher's solution and the discrepancy between the two curves.
  • This device is smaller and more manageable than the previous one.
  • However, the user is not informed of how the sound production and formation are to take place, or of what changes are required in order to reduce any differences between the user-produced curve and the curve of the teacher's solution.
  • the document GB 2 198 871 A discloses a device similar to those described above, with the difference that it allows the user to decide for himself and to have influence on which phonemes he wishes to practise or how they are to be combined to form various words. This is achieved by the user indicating a letter or a combination of letters by means of an operating means.
  • the above-mentioned drawbacks are, however, found also in this device.
  • the present invention has for its object to provide a device for phonological training, by means of which the user will receive guidance on sound formation and production as well as on how changes are to be made in order to achieve the desired result.
  • Fig. 1 illustrates a device in accordance with the invention.
  • Fig. 2 is a schematic representation of a division of the screen in accordance with the invention.
  • Fig. 3 is an image of a profile of articulation.
  • Fig. 4 is a first example of visualisation of phonemes in accordance with the invention.
  • Fig. 5 is a second example of visualisation of phonemes in accordance with the invention.
  • Fig. 6 is a third example of visualisation of phonemes in accordance with the invention.
  • Fig. 1 shows schematically a device in accordance with the invention.
  • the device comprises a sound-reception means 1 which may be e.g. a microphone, a processing means 2 which may be e.g. a computer, and a presentation means 3 which may be e.g. a display screen (a schematic component sketch is given after this list).
  • the device comprises loud-speakers 4 or the like.
  • the microphone 1 is designed to record the sounds produced by the user.
  • the device also comprises operating means 5, such as a keyboard or the like, by means of which the user can control the device.
  • the display screen 3 is designed to show several different image-presentation windows simultaneously.
  • One example of screen division is shown in Fig. 2.
  • the screen comprises eight different image-presentation windows (summarised in a sketch after this list).
  • in the first window 11, the motor function of the mouth, i.e. how the lips move when the desired sound is being pronounced correctly, is reproduced in animated form.
  • the second window 12 shows a cross-sectional view of the oral cavity, a so-called profile of articulation, showing the motions and points of abutment of the tongue, the use of different volumes of air in the mouth and the throat, and so on, as required to produce the correct pronunciation of the desired sound.
  • Window 13 could show, for example, the manner in which abdominal support is used in the formation of the desired sound.
  • Window 14 is divided into two parts, 14a and 14b.
  • the upper window 14a displays a visual representation of the correct pronunciation of the desired sound, as will be described in closer detail in the following.
  • Window 14b below displays a visual representation of the user-produced sound recorded by the microphone 1 and processed by the computer 2. Because of the juxtaposition of these two windows, the user may conveniently discern discrepancies between his own pronunciation and the correct pronunciation.
  • Two windows, 15 and 16, show the desired sound in letter form, one window for instance showing one or a few letters at a time, whereas the other window shows longer combinations of letters, such as entire sentences or the like.
  • these windows are connected to the operating means 5, allowing the user to input for instance the desired letter combinations in order to receive assistance from the device to pronounce them.
  • the operating means could instead be a computer mouse or similar means and one of the windows could display the alphabet, thus allowing the user to indicate the desired letters.
  • the last window 17, finally, is an operating panel from which the user may select various functional modes.
  • the user may choose a listening mode, selecting one sound which is then displayed simultaneously in all windows and possibly also played back at the same time. This could be effected either in real time or in slow motion (see the mode sketch after this list).
  • the user could thereafter pronounce the sound, which is visualised and may be compared with the standard. Sound discrepancies, if outside predetermined tolerance values, result in different pieces of advice to the user as to what changes in his pronunciation are needed in order to achieve the correct sound.
  • the user could instead select a test mode according to which the user produces and registers one sound, one syllable or one word, whereupon this sound is presented to the user together with the standard pronunciation.
  • the visualisation of the sound could be effected in many different ways, depending among other things on the user's present stage of learning.
  • the basis of the visualisation is to create an image of each phoneme along a time axis, allotting the various parts of the sound different amounts of space depending on their duration (a layout sketch is given after this list).
  • the visualisation could be made by means of a frequency spectrum or the like.
  • the preferred visualisation is that based on where the sound is produced, as will be described further on.
  • Speech sounds are produced by creating different volumes in the oral cavity and the throat and by means of motions of articulation.
  • the articulation is based on various combinations of movements of the lower jaw, the body of the tongue, the tip of the tongue, the lips and the larynx. Consequently, it is possible to present speech sounds by means of animated lip movements (window 11 above) and a profile of articulation, i.e. an animated cross-section image of the oral cavity (window 12 above).
  • Fig. 3 shows a picture of such a profile of articulation.
  • the areas of formation in the oral cavity relating to the various sounds are also indicated. These areas are marked by different colours in the cavity, the colour of the area closest to the lips being reddish brown, the colours gradually changing to red, yellow and green, ending at the throat in blue.
  • the sound "k" is formed in the dark blue area by an attack sound in the rear part of the abutment pipe, whereas the sound "b" is formed by the lips being pressed together.
  • short sounds are presented in more intense and saturated colours, whereas for example long vowel sounds are given a less intense and lighter colour tone. It should be noted at this point, however, that the choice of colours could be different without impairing the function of the invention (a colour-mapping sketch follows after this list).
  • Fig. 4 illustrates the manner in which the sound-related colouring may be used to visualise a sound.
  • Window 11 in Fig. 4 illustrates, like before, the lip movements required to form a desired sound, which in this case is "ak".
  • the formation of the sound in the articulation profile is shown in window 12.
  • Window 14a in Fig. 4 shows differently coloured blocks, the lengthwise extension of which denotes the duration of the formation of the respective sound and the colours of which denote where the sound is formed. These colours correspond to the above-mentioned colour areas of the articulation profile. Consequently, the sound "ak" is visualised in window 14a of Fig. 4 by means of such coloured blocks.
  • the device may be controlled by the operating means 5 in order to select the desired speed, the desired sound, the desired level of difficulty of sound combinations, repetition of operations if desired, and so on.
  • the device is adapted to compare the user-pronounced sound to the standard-pronounced sound and to accept the user-produced sound, should the discrepancies be within predetermined tolerances.
  • the device may be arranged to advise the user as to how and where he needs to change his way of forming the sound. Such advice could concern, for instance, the place of formation of the sound, how the sound is articulated while making use of the column of air inside the oral cavity and the abdominal support, the duration of the sound formation, and so on (a comparison sketch is given after this list).
  • the device could also be used in many other ways. For instance, a combination of letters could be indicated, either by the user himself or automatically by the device, whereupon the user may practise pronunciation of the letter combination. Alternatively, the user could register sounds through the microphone by speaking; the device then interprets these sounds, visually presents the manner in which they are written by means of letters, and possibly also presents a corrected image of the accepted form of the sound.
  • a third variety is to display an image of the object.
  • the device in accordance with the invention thus provides the user with an experience, i.e. a visualisation of the sound, a process in which other parts of the brain are activated than those usually employed in the process of reading. In this manner, weakened areas of the brain could be trained.
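
For readers more comfortable with code than with reference numerals, the parts listed above (sound-reception means 1, processing means 2, presentation means 3, loudspeakers 4 and operating means 5) can be pictured as a small component model. The Python sketch below is purely illustrative; the class and attribute names are assumptions, not terms used in the patent.

```python
from dataclasses import dataclass, field


@dataclass
class Microphone:
    """Sound-reception means (1): records the sound produced by the user."""
    sample_rate_hz: int = 16000

    def record(self, seconds: float) -> list[float]:
        # Placeholder: a real device would return digitised audio samples here.
        return [0.0] * int(self.sample_rate_hz * seconds)


@dataclass
class DisplayScreen:
    """Presentation means (3): shows several image windows at the same time."""
    windows: dict[str, str] = field(default_factory=dict)


@dataclass
class PhonologicalTrainer:
    """Processing means (2) tying together microphone, screen, loudspeakers and keyboard."""
    microphone: Microphone
    screen: DisplayScreen
    has_loudspeakers: bool = True

    def capture_attempt(self, seconds: float = 1.0) -> list[float]:
        # Record the user's pronunciation attempt via the sound-reception means.
        return self.microphone.record(seconds)
```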
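The eight-window screen division of Fig. 2 can likewise be summarised as a simple mapping from window number to content. The sketch is a paraphrase of the description; the label strings and the helper function are hypothetical.

```python
# Window numbers follow the description of Fig. 2; the labels are paraphrased.
SCREEN_LAYOUT: dict[str, str] = {
    "11":  "animated lip movements for the desired sound",
    "12":  "articulation profile (cross-section of the oral cavity)",
    "13":  "use of abdominal support during sound formation",
    "14a": "visual representation of the correct (standard) pronunciation",
    "14b": "visual representation of the sound produced by the user",
    "15":  "the desired sound in letter form (one or a few letters)",
    "16":  "longer letter combinations, e.g. whole sentences",
    "17":  "operating panel for selecting functional modes",
}


def describe_window(window_id: str) -> str:
    """Return a short description of what a given window presents."""
    return SCREEN_LAYOUT.get(window_id, "unknown window")


print(describe_window("14b"))  # -> visual representation of the sound produced by the user
```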
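The two functional modes selected from the operating panel (window 17), a listening mode and a test mode, differ mainly in whether the standard sound or the user's recording comes first. A rough, assumed control flow is sketched below; the return strings stand in for the real presentation actions.

```python
from enum import Enum, auto


class Mode(Enum):
    LISTENING = auto()  # the standard sound is presented first, in all windows
    TEST = auto()       # the user records a sound first, then sees it next to the standard


def run_session(mode: Mode, slow_motion: bool = False) -> str:
    """Very rough sketch of the two functional modes described in the text."""
    if mode is Mode.LISTENING:
        speed = "slow motion" if slow_motion else "real time"
        return f"present the standard sound in all windows ({speed}), then record the user's attempt"
    # TEST mode: record the user first, then juxtapose the result with the standard pronunciation.
    return "record the user's sound, show it in window 14b next to the standard in window 14a"
```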
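The core of the preferred visualisation is that each phoneme occupies a block along a time axis whose length is proportional to the duration of the sound. A minimal sketch of that layout step, with hypothetical durations and helper names, might look like this:

```python
from dataclasses import dataclass


@dataclass
class PhonemeSegment:
    symbol: str        # e.g. "a" or "k"
    duration_s: float  # how long the phoneme is sounded


def layout_blocks(segments: list[PhonemeSegment], total_width_px: int = 400) -> list[tuple[str, int]]:
    """Give each phoneme a horizontal block whose width is proportional to its duration."""
    total = sum(seg.duration_s for seg in segments) or 1.0
    return [(seg.symbol, round(total_width_px * seg.duration_s / total)) for seg in segments]


# Example: a long "a" followed by a short "k", as in the sound "ak" of Fig. 4.
print(layout_blocks([PhonemeSegment("a", 0.30), PhonemeSegment("k", 0.08)]))
# -> [('a', 316), ('k', 84)]
```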
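The colour coding of the articulation profile, reddish brown at the lips shading through red, yellow and green to blue at the throat, with more saturated tones for short sounds and lighter tones for long vowel sounds, can be captured in a small lookup table. The RGB values and area names below are assumptions; the patent only names the colours.

```python
import colorsys

# Areas of the oral cavity ordered from the lips back towards the throat,
# each with an illustrative RGB value for the colour named in the description.
PLACE_TO_RGB: dict[str, tuple[float, float, float]] = {
    "lips":   (0.55, 0.27, 0.07),  # reddish brown
    "front":  (0.85, 0.10, 0.10),  # red
    "mid":    (0.90, 0.85, 0.10),  # yellow
    "back":   (0.15, 0.65, 0.20),  # green
    "throat": (0.10, 0.20, 0.80),  # blue
}


def tone_for_length(place: str, is_short_sound: bool) -> tuple[float, float, float]:
    """Short sounds get a more intense, saturated tone; long vowel sounds a lighter, less intense one."""
    h, light, sat = colorsys.rgb_to_hls(*PLACE_TO_RGB[place])
    if is_short_sound:
        return colorsys.hls_to_rgb(h, max(light - 0.1, 0.0), min(sat + 0.3, 1.0))
    return colorsys.hls_to_rgb(h, min(light + 0.2, 1.0), max(sat - 0.3, 0.0))
```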
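Finally, the acceptance test and the advice described above amount to comparing a few features of the user's sound against the standard within predetermined tolerances and emitting a hint for each feature that falls outside them. The feature set and the tolerance value below are hypothetical; the patent does not specify which measures are compared.

```python
from dataclasses import dataclass


@dataclass
class SoundFeatures:
    place: str         # area of formation, e.g. "throat" for "k", "lips" for "b"
    duration_s: float  # how long the sound is held


def compare(user: SoundFeatures, standard: SoundFeatures,
            duration_tolerance_s: float = 0.05) -> list[str]:
    """Return one piece of advice for every feature outside the predetermined tolerance."""
    advice: list[str] = []
    if user.place != standard.place:
        advice.append(f"form the sound further towards the {standard.place}")
    if abs(user.duration_s - standard.duration_s) > duration_tolerance_s:
        hint = "shorten" if user.duration_s > standard.duration_s else "lengthen"
        advice.append(f"{hint} the sound (target about {standard.duration_s:.2f} s)")
    return advice  # an empty list means the pronunciation is accepted


# Example: the user's "k" is formed too far forward and held too long.
print(compare(SoundFeatures("back", 0.20), SoundFeatures("throat", 0.08)))
```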

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Rehabilitation Tools (AREA)
EP97952152A 1996-12-27 1997-12-19 Device for phonological training Withdrawn EP0958570A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
SE9604800A SE506656C2 (sv) 1996-12-27 1996-12-27 Anordning för fonologisk träning med grafisk återgivning av ljud och talorgan
SE9604800 1996-12-27
PCT/SE1997/002167 WO1998032111A1 (en) 1996-12-27 1997-12-19 Device for phonological training

Publications (1)

Publication Number Publication Date
EP0958570A1 true EP0958570A1 (en) 1999-11-24

Family

ID=20405152

Family Applications (1)

Application Number Title Priority Date Filing Date
EP97952152A Withdrawn EP0958570A1 (en) 1996-12-27 1997-12-19 Device for phonological training

Country Status (4)

Country Link
EP (1) EP0958570A1 (sv)
AU (1) AU5581398A (sv)
SE (1) SE506656C2 (sv)
WO (1) WO1998032111A1 (sv)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US2533010A (en) * 1945-06-07 1950-12-05 Joseph E Henabery Method and apparatus for visually determining unknown vibration characteristics
WO1981001478A1 (en) * 1979-11-16 1981-05-28 M Sakai Teaching aid:phoneti-peuter mobile
US4884972A (en) * 1986-11-26 1989-12-05 Bright Star Technology, Inc. Speech synchronized animation
US5142657A (en) * 1988-03-14 1992-08-25 Kabushiki Kaisha Kawai Gakki Seisakusho Apparatus for drilling pronunciation
US4913539A (en) * 1988-04-04 1990-04-03 New York Institute Of Technology Apparatus and method for lip-synching animation
US5487671A (en) * 1993-01-21 1996-01-30 Dsp Solutions (International) Computerized system for teaching speech

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO9832111A1 *

Also Published As

Publication number Publication date
SE9604800L (sv) 1998-01-26
AU5581398A (en) 1998-08-07
WO1998032111A1 (en) 1998-07-23
SE9604800D0 (sv) 1996-12-27
SE506656C2 (sv) 1998-01-26

Similar Documents

Publication Publication Date Title
Lambacher A CALL tool for improving second language acquisition of English consonants by Japanese learners
Elkonin The psychology of mastering the elements of reading
US6151577A (en) Device for phonological training
Tarone et al. The Mirroring Project
Kirkova-Naskova Second language pronunciation: A summary of teaching techniques
Yang The gap between the perception and production of tones by American learners of Mandarin–An intralingual perspective
Khaitov DEVELOPMENT OF PROFESSIONAL-METHODICAL TRAINING OF DEFECTOLOGISTS WORKING WITH CHILDREN WITH HEARING DEFECTS
Elkonin How to teach children to read
Kaltenböck Learner autonomy: a guiding principle in designing a CD-ROM for intonation practice
Behzadi et al. The effect of using two approaches of teaching pronunciation (intuitive-imitative and analytic-linguistic) on speaking fluency among Iranian EFL learners
WO2005066916A1 (es) Sistema, procedimiento, programa de ordenador y conjunto de datos para facilitar el aprendizaje de lenguas mediante la identificación de sonidos
van Maastricht Second Language Prosody: Intonation and rhythm in production and perception
Gusdian et al. The use of Arabic consonant sounds to arrive at English pronunciation: A case study on Indonesian EFL students in tertiary level
Stark et al. Preliminary work with the new Bell Telephone visible speech translator
EP0958570A1 (en) Device for phonological training
Wedin et al. Teachers’ perceptions of letter learning among adults: The case of basic literacy education in Swedish for immigrants
Nguyen et al. Boosting English majors’ ability in pronouncing stressed vowels via Blue Canoe, a mobile-based application: A focus on Vietnamese EFL learners
Öster Computer-based speech therapy using visual feedback with focus on children with profound hearing impairments
Wik et al. Can visualization of internal articulators support speech perception?
Faraj The Effectiveness of Learning Activities in Pronouncing the English Vowel Schwa by EFL College Level Students
Young et al. Teaching Schwa: Using ‘Stuttering’to Improve English Pronunciation
Lacasta Millera Eurythmy Applied to Teaching English as a Foreign Language in Students with Hearing Impairments and Other Disabilities
Zeng Effects of producing pitch gestures on the production of Chinese tones
Thorpe Visual feedback of acoustic voice features in voice training
Yulianti et al. THE ANALYSIS OF ENGLISH EDUCATION STUDENTS' DIFFICULTIES IN PRONOUNCING ENGLISH VOWELS

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 19990618

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): DE ES FR GB IT

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20030701