WO2013172707A4 - Automated system for training oral language proficiency - Google Patents

Automated system for training oral language proficiency

Info

Publication number
WO2013172707A4
Authority
WO
WIPO (PCT)
Prior art keywords
english
language
input
pronunciation
providing
Prior art date
Application number
PCT/NL2013/050356
Other languages
French (fr)
Other versions
WO2013172707A3 (en)
WO2013172707A2 (en)
Inventor
Wilhelmus Albertus Johannes STRIK
Catia CUCCHIARINI
Original Assignee
Stichting Katholieke Universiteit
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Stichting Katholieke Universiteit
Publication of WO2013172707A2
Publication of WO2013172707A3
Publication of WO2013172707A4

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/04 Electrically-operated educational appliances with audible presentation of the material to be studied
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 Teaching not covered by other main groups of this subclass
    • G09B19/06 Foreign languages

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The present invention is in the field of automated systems and methods for improving oral language proficiency. As a result of increasing internationalization, there is a growing demand from the education and business communities for people who speak foreign languages well. An intelligible pronunciation is regarded as important for, e.g., successful interaction and social acceptance. An important problem, however, is that oral proficiency training requires so much time, feedback and practice that it very often cannot be sufficiently provided in traditional language classes. To this end, an automated system is provided.

Claims

AMENDED CLAIMS received by the International Bureau on 27 January 2014 (27.01.2014)
1. Automated system for assisting real time training of oral language proficiency of a user in a non-native language, comprising:
a) at least one means for receiving target audio input, such as a microphone,
c) a processor for capturing and processing input and providing output, such as a computer,
d) at least one means for providing output to the user, such as a speaker for providing audio feedback and a monitor for providing visual feedback,
characterised in that
b) stored on the system
i) first phase speech recognition software for determining audio input in a tolerant mode, wherein the input is in the form of a word, a sentence, and the like, wherein a typical length of the present input is preferably 10-250 phonemes, such as 50-100 phonemes, the first phase speech recognition software providing input to second phase speech recognition software, and
ii) second phase speech recognition software for determining audio input in a strict mode, comprising a pronunciation quality evaluation unit for processing input to determine a potential difference between stored target pronunciation and actual audio input pronunciation, and for generating feedback output,
wherein the pronunciation quality evaluation unit is adapted for one or more varieties and/or dialects, such as British English, American English, Australian English, Canadian English, New Zealand English, Indian English, Limburgs, Brabants, Gronings, and Drents, and
further comprising stored on the system
iii) a pronunciation error detector.
2. System according to claim 1, further comprising stored on the system one or more of b)
iv) a word stress error detector,
v) a morphology error detector,
vi) a syntax error detector,
vii) an interaction error detector,
viii) an intonation error detector,
ix) a respiration error detector,
x) a formant error detector, and
xi) a selector for selecting a first phase speech recognition software version and/or a second phase speech recognition software version, the version(s) being optimized for a group of users,
and/or wherein
input and/or output are in a second language, the user being native in a first language,
wherein the first and second language are selected from Indo-European languages, such as Spanish, English, Hindi, Portuguese, Bengali, Russian, German, Marathi, French, Italian, Punjabi, Urdu, Dutch,
Sino-Tibetan languages, such as Chinese,
Austro-Asiatic languages,
Austronesian languages,
Altaic languages,
such as wherein the first and second language are Dutch and English, Dutch and German, Dutch and Spanish, Dutch and Chinese, German and English, French and English, Chinese and English,
preferably wherein the second language is English, and vice versa,
wherein the first and second language are optionally the same, such as Dutch and Dutch.
3. System according to any of claims 1-2, wherein the pronunciation quality evaluation unit comprises software, wherein the software is preferably stored on a computer.
4. System according to any of claims 1-3, further comprising one or more of a language model, a lexicon, a phoneme model, one or more thresholds, one or more probability criteria, one or more random number generators, a level adjustment set-up, and a decoder.
5. System according to any of claims 1-4, further comprising one or more of a reference set of parameters, a fine-tuning mechanism, a self-learning algorithm, a self-improvement algorithm, a selection means for selecting criteria, and a database, wherein data is stored for one or more of pronunciation, word stress, intonation, and phoneme segmentation.
6. System according to any of claims 1-5, further comprising one or more decision trees, such as a decision tree being adapted to provide questions and responses thereto, and a decision tree being adapted to provide purposive training in view of second phase speech recognition.
7. Method for assisting automatic real time improvement of oral language proficiency using a system according to any of claims 1-6, comprising the steps of:
a) providing target audio input to a microphone,
b) processing input with speech recognition software,
c) wherein a computer is used for processing input and output,
d) providing feedback, such as audio feedback by a speaker and visual feedback by a monitor, and
e) providing automatic real time feedback aimed at pronunciation improvement by a pronunciation quality evaluation unit.
8. Method according to claim 7, further providing a standardized score of oral language proficiency.
9. Method according to any of claims 7-8, further monitoring scores of users and the relation between one or more users in a sequence of users.
10. System according to any of claims 1-6 and/or a method according to any of claims 8-10 for improving a non-native language.
11. System according to any of claims 1-6 and 10 and/or a method according to any of claims 8-10 for use in medicine, such as in clinical or pre-clinical care.
12. System or method according to claim 11 for treating dysarthria, e.g. caused by CVA, a brain tumor, an accident, ALS (Amyotrophic Lateral Sclerosis), a neurological disease such as Parkinson's Disease, or a disorder associated with the motor nerve system, such as in logopedics (speech therapy), for improving eating performance, improving control of organs such as the tongue, and for improving intelligibility, audibility, naturalness, and/or efficiency of vocal communication.
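For illustration only, the following is a minimal Python sketch of the two-phase processing described in claims 1, 7 and 8: a tolerant first-phase recogniser yields a phoneme hypothesis, and a strict second phase aligns it with the stored target pronunciation, detects pronunciation errors and derives a simple standardized score. All class and function names (PronunciationError, recognize_tolerant, detect_pronunciation_errors, proficiency_score, give_feedback) are hypothetical and do not appear in the application; the symbolic alignment used here is merely one possible realisation of the pronunciation quality evaluation unit, not the patented implementation.

    # Illustrative sketch only (hypothetical names, not the patented implementation):
    # a tolerant first-phase recogniser produces a phoneme hypothesis, a strict
    # second phase aligns it with the stored target pronunciation, detects
    # pronunciation errors and derives a simple standardized score.

    from dataclasses import dataclass
    from difflib import SequenceMatcher
    from typing import List


    @dataclass
    class PronunciationError:
        kind: str          # "substitution", "deletion" or "insertion"
        target: List[str]  # phonemes expected at this position
        actual: List[str]  # phonemes actually recognised


    def recognize_tolerant(audio: bytes) -> List[str]:
        """First phase (tolerant mode): placeholder for a real speech recogniser
        that maps captured audio to a phoneme sequence without penalising
        non-native realisations."""
        raise NotImplementedError("plug in an actual ASR decoder here")


    def detect_pronunciation_errors(target: List[str],
                                    actual: List[str]) -> List[PronunciationError]:
        """Second phase (strict mode): align the recognised phoneme sequence with
        the stored target pronunciation and report every mismatch."""
        errors: List[PronunciationError] = []
        for op, i1, i2, j1, j2 in SequenceMatcher(a=target, b=actual,
                                                  autojunk=False).get_opcodes():
            if op == "replace":
                errors.append(PronunciationError("substitution", target[i1:i2], actual[j1:j2]))
            elif op == "delete":
                errors.append(PronunciationError("deletion", target[i1:i2], []))
            elif op == "insert":
                errors.append(PronunciationError("insertion", [], actual[j1:j2]))
        return errors


    def proficiency_score(target: List[str], errors: List[PronunciationError]) -> float:
        """Crude standardized score in the spirit of claim 8: the fraction of
        target phonemes not involved in any detected error."""
        faulty = sum(max(len(e.target), 1) for e in errors)
        return max(0.0, 1.0 - faulty / max(len(target), 1))


    def give_feedback(target: List[str], actual: List[str]) -> None:
        """Real-time feedback step of claim 7 e), printed to the console; a full
        system would route this to the audio and visual output of claim 1 d)."""
        errors = detect_pronunciation_errors(target, actual)
        for err in errors:
            print(f"{err.kind}: expected {err.target or '-'}, heard {err.actual or '-'}")
        print(f"pronunciation score: {proficiency_score(target, errors):.2f}")


    if __name__ == "__main__":
        # Target word "think" /th i ng k/; a learner substitutes /t/ for /th/.
        give_feedback(target=["th", "i", "ng", "k"], actual=["t", "i", "ng", "k"])

In a complete system the strict second phase would more plausibly rely on forced alignment of the captured audio against the stored target pronunciation and on acoustic likelihood measures (e.g. goodness-of-pronunciation scores) rather than on a purely symbolic comparison, and the feedback would be routed to the speaker and monitor of claim 1 d) instead of the console.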
PCT/NL2013/050356 2012-05-14 2013-05-14 Automated system for training oral language proficiency WO2013172707A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
NL2008809 2012-05-14
NL2008809A NL2008809C2 (en) 2012-05-14 2012-05-14 Automated system for training oral language proficiency.

Publications (3)

Publication Number Publication Date
WO2013172707A2 (en) 2013-11-21
WO2013172707A3 (en) 2014-01-16
WO2013172707A4 (en) 2014-03-13

Family

ID=48485402

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/NL2013/050356 WO2013172707A2 (en) 2012-05-14 2013-05-14 Automated system for training oral language proficiency

Country Status (2)

Country Link
NL (1) NL2008809C2 (en)
WO (1) WO2013172707A2 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112307162A (en) * 2020-02-26 2021-02-02 Beijing ByteDance Network Technology Co., Ltd. Method and device for information interaction

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9223066D0 (en) * 1992-11-04 1992-12-16 Secr Defence Children's speech training aid
US7624013B2 (en) * 2004-09-10 2009-11-24 Scientific Learning Corporation Word competition models in voice recognition

Also Published As

Publication number Publication date
NL2008809C2 (en) 2013-11-18
WO2013172707A3 (en) 2014-01-16
WO2013172707A2 (en) 2013-11-21

Similar Documents

Publication Publication Date Title
Bragg et al. A large inclusive study of human listening rates
Escudero et al. The effect of vowel inventory and acoustic properties in Salento Italian learners of Southern British English vowels
Ulbrich et al. When prosody kicks in: The intricate interplay between segments and prosody in perceptions of foreign accent
Jin et al. Intelligibility of American English vowels and consonants spoken by international students in the United States
Brekelmans et al. Does high variability training improve the learning of non-native phoneme contrasts over low variability training? A replication
Chen et al. Large-scale characterization of Mandarin pronunciation errors made by native speakers of European languages.
Le et al. Modeling pronunciation, rhythm, and intonation for automatic assessment of speech quality in aphasia rehabilitation
Bruggeman et al. No L1 privilege in talker adaptation
Kim et al. Familiarization effects on word intelligibility in dysarthric speech
Pillot-Loiseau et al. French /y/-/u/ contrast in Japanese learners with/without ultrasound feedback: vowels, non-words and words
Pellegrini et al. Automatic assessment of speech capability loss in disordered speech
Bruggeman Nativeness, dominance, and the flexibility of listening to spoken language
WO2013172707A4 (en) Automated system for training oral language proficiency
Senior et al. Liu vs. Liu vs. Luke? Name influence on voice recall
Duan et al. Efficient learning of articulatory models based on multi-label training and label correction for pronunciation learning
Gabor-Siatkowska et al. Therapeutic Spoken Dialogue System in Clinical Settings: Initial Experiments
Ford The status of voiceless nasals in Ikema Ryukyuan
ŠIMÁČKOVÁ Czech accent in English: Linguistics and biometric speech technologies
Michot et al. Error-preserving Automatic Speech Recognition of Young English Learners' Language
Valenzuela et al. Production of English vowel contrasts in Spanish L1 learners: A longitudinal study
Guan Emerging modes of temporal coordination: Mandarin and non-native consonant clusters
Bartkova et al. Using multilingual units for improved modeling of pronunciation variants
Cai Interlocutor modelling in comprehending speech from interleaved interlocutors of different dialectic backgrounds
Rykova et al. Linguistic and extralinguistic factors in automatic speech recognition of German atypical speech
Boer et al. Language-dependenc of/s

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 13724902

Country of ref document: EP

Kind code of ref document: A2

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 13724902

Country of ref document: EP

Kind code of ref document: A2