SG187533A1 - Accompaniment and voice matching method for word learning music file - Google Patents

Accompaniment and voice matching method for word learning music file

Info

Publication number
SG187533A1
SG187533A1
Authority
SG
Singapore
Prior art keywords
beat
voice
word
weak
strong
Prior art date
2011-07-07
Application number
SG2012090635A
Inventor
Sang Cheol Park
Original Assignee
Sang Cheol Park
Amosedu Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
2012-07-05
Publication date
2013-03-28
Application filed by Sang Cheol Park, Amosedu Co Ltd filed Critical Sang Cheol Park
Publication of SG187533A1

Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 - Teaching not covered by other main groups of this subclass
    • G09B19/06 - Foreign languages
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 - Arrangements for executing specific programs
    • G06F9/451 - Execution arrangements for user interfaces
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 - Teaching not covered by other main groups of this subclass
    • G09B19/04 - Speaking
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 - Electrically-operated educational appliances
    • G09B5/06 - Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 - Electrically-operated educational appliances
    • G09B5/06 - Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/062 - Combinations of audio and printed presentations, e.g. magnetically striped cards, talking books, magnetic tapes with printed texts thereon
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/02 - Feature extraction for speech recognition; Selection of recognition unit
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/02 - Feature extraction for speech recognition; Selection of recognition unit
    • G10L2015/022 - Demisyllables, biphones or triphones being the recognition units

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Educational Technology (AREA)
  • Educational Administration (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Acoustics & Sound (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Machine Translation (AREA)
  • Electrophonic Musical Instruments (AREA)

Abstract

This disclosure relates to an accompaniment and voice matching method for a word learning music file, and more particularly, to a method for matching accompaniment having a four-four time rhythm composed of four beats of strong/weak/middle/weak with an accent of a word (voice). In the accompaniment and voice matching method for a word learning music file according to the present disclosure, a word voice composed of an original language voice and a translation voice is matched with accompaniment in four-four time where a single bar is composed of four beats of strong (the first beat)/weak (the second beat)/middle (the third beat)/weak (the fourth beat), the matching is performed so that a single word corresponds to the single bar, the word is classified into a strong beat type having an accent on a first syllable and a weak beat type not having an accent on the first syllable, the word of the strong beat type is matched so that the original language voice is located at the first beat of the corresponding bar and the translation voice is located at the third beat of the corresponding bar, and the word of the weak beat type is matched so that the original language voice is located at the fourth beat of a previous bar and the first beat of the corresponding bar and the translation voice is located at the third beat of the corresponding bar.

Description

[Invention Title]
ACCOMPANIMENT AND VOICE MATCHING METHOD FOR WORD
LEARNING MUSIC FILE
[Technical Field]
This disclosure relates to an accompaniment and voice matching method for a word learning music file, and more particularly, to a method for matching accompaniment having a four-four time rhythm composed of four beats of strong/weak/middle/weak with an accent of a word (voice).
[Background Art]
Along with the globalization of modern societies, foreign languages such as English, Japanese and Chinese have become an important part of social life.
Accordingly, language education begins at an early age, and various methods have been developed for effective language learning. For example, when learning a foreign language, learners generally register as members of a foreign language institute and attend lectures, learn the language through movies or TV dramas, or follow a foreign language learning method built around pop songs from a radio program.
However, building vocabulary, which is the basis of language learning, tends to depend largely on personal effort, and the effect of word learning is limited because learners grow tired and bored while repeatedly listening to the same words.
For example, in a conventional English word pronunciation learning method, phonetic symbols are marked beside English words (alphabet), or the pronunciation is transcribed in the learner's mother tongue (e.g., the Korean alphabet), so that the English word may be pronounced by using the correlation between the “word (alphabet)” and the “sound (pronunciation)”. However, this method has a limited learning effect.
Meanwhile, in order to enhance the word learning effect, a learning method has been disclosed in which accompaniment and words (voices) are combined into a music file so that a learner learns the words while repeating the music without getting bored.
However, in the learning method using a music file, the accents (beats) of the accompaniment do not agree with the actual accents of the words. The result is harsh to the ear, so the words are not easily absorbed by the brain and their information is not delivered efficiently, resulting in a deteriorated learning effect.
[Disclosure]
[Technical Problem]
This disclosure is directed to providing an accompaniment and voice matching method for a word learning music file, which matches accompaniment having a four-four time rhythm composed of four beats of strong/weak/middle/weak with an accent of a word (voice), thereby allowing a learner to learn the accent of the word automatically and unconsciously, without harshness to the ear, since the accent of the accompaniment agrees with the actual accent of the word, and also enhancing the learning effect due to excellent information transfer capability.
[Technical Solution]
In one general aspect, there is provided an accompaniment and voice matching method for a word learning music file, wherein a word voice composed of an original language voice and a translation voice is matched with accompaniment in four-four time where a single bar is composed of four beats of strong (the first beat)/weak (the second beat)/middle (the third beat)/weak (the fourth beat), wherein the matching is performed so that a single word corresponds to the single bar, wherein the word is classified into a strong beat type having an accent on a first syllable and a weak beat type not having an accent on the first syllable, wherein the word of the strong beat type is matched so that the original language voice is located at the first beat of the corresponding bar and the translation voice is located at the third beat of the corresponding bar, and wherein the word of the weak beat type is matched so that the original language voice is located at the fourth beat of a previous bar and the first beat of the corresponding bar and the translation voice is located at the third beat of the corresponding bar.
[Advantageous Effects]
According to the present disclosure, by matching accompaniment having a four-four time rhythm composed of four beats of strong/weak/middle/weak with an accent of a word (voice), the accent of the word may be learned automatically and unconsciously, without harshness to the ear, since the accent of the accompaniment agrees with the actual accent of the word, and the learning effect may be enhanced due to excellent information transfer capability.
[Description of Drawings]
The above and other aspects, features and advantages of the disclosed exemplary embodiments will be more apparent from the following detailed description taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a diagram for illustrating a matching according to an embodiment of the present disclosure;
Figs. 2a and 2b are diagrams for illustrating a matching according to another embodiment of the present disclosure; and
Figs. 3a to 5b show examples of a screen actually output in a case where a text (image) is matched according to another embodiment of the present disclosure.
[Best Mode]
Hereinafter, the configuration and operation of an embodiment of the present disclosure will be described with reference to the accompanying drawings.
In the following description, a word to be learned is assumed to be an English word, for example.
In the present disclosure, accompaniment to be matched with a word (voice) is in four-four time. In four-four time, a single bar has four quarter notes and is composed of four beats whose basic stream is strong/weak/middle/weak.
The term “strong” represents a strong beat, the term “weak” represents a weak beat, and the term “middle” represents a middle strong beat.
Meanwhile, an English word has an accent (stress) on one of its syllables, and the accented syllable should be pronounced strongly.
Table 1 below shows the types of English words by syllable structure.
[Table 1]
Strong-1 syllable | bell
Strong-2 syllable | roost-er
Strong-3 syllable | ab-ne-gate
Strong-4 syllable | tem-po-rar-y
Weak-1 | Strong-1 syllable | to-night
Weak-1 | Strong-2 syllable | u-ten-sil
Weak-1 | Strong-3 syllable | u-nan-i-mous
Weak-1 | Strong-4 syllable | con-tem-po-rar-y
Weak-2 | Strong-1 syllable | un-der-buy
Weak-2 | Strong-2 syllable | u-ni-ver-sal
Weak-2 | Strong-3 syllable | u-ni-ver-si-ty
Weak-3 | Strong-2 syllable | a-bi-o-gen-ic
Weak-3 | Strong-3 syllable | a-bi-o-chem-is-try
Weak-4 | Strong-2 syllable | a-bi-o-ge-net-ic [èibaioudʒenétik]
As shown in Table 1, English words may be classified into words of a strong beat type having an accent at the first syllable, such as bell, rooster, abnegate and temporary, and words of a weak beat type having an accent at a syllable other than the first syllable, such as tonight, underbuy, abiogenic and abiogenetic.
In addition, the strong beat type words may be classified into strong-1 syllable such as bell, strong-2 syllable such as rooster, strong-3 syllable such as abnegate, strong-4 syllable such as temporary or the like according to the number of syllables.
If a word has an accent at the second syllable (weak-1 syllable), according to the number of syllables after the second syllable, the weak beat type word may be classified into weak-1 syllable/strong-1 syllable such as tonight, weak-1 syllable/strong-2 syllable such as utensil, weak-1 syllable/strong-3 syllable such as unanimous, weak-1 syllable/strong-4 syllable such as contemporary, or the like.
In addition, if a word has an accent at the third syllable (weak-2 syllable), according to the number of syllables after the third syllable, the weak beat type word may be classified into weak-2 syllable/strong-1 syllable such as underbuy, weak-2 syllable/strong-2 syllable such as universal, weak-2 syllable/strong-3 syllable such as university, or the like.
In addition, if a word has an accent at the fourth syllable (weak-3 syllable), according to the number of syllables after the fourth syllable, the weak beat type word may be classified into weak-3 syllable/strong-2 syllable such as abiogenic, weak-3 syllable/strong-3 syllable such as abiochemistry, or the like.
In addition, if a word has an accent at the fifth syllable (weak-4 syllable), according to the number of syllables after the fifth syllable, the weak beat type word may be classified into weak-4 syllable/strong-2 syllable such as abiogenetic, or the like.
As described, words may be classified into the strong beat type and the weak beat type depending on whether or not the first syllable has an accent.
In addition, the weak beat type words may be classified into weak-n syllable and strong-n syllable according to the number of syllables after the syllable without an accent and the number of syllables after the syllable with an accent.
Moreover, the strong beat type words may be classified into strong-n syllable according to the number of syllables after the first syllable with an accent.
At this time, as described above, it can be understood that n is 1 to 4.
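To make the classification concrete, here is a minimal Python sketch that derives the strong/weak beat type and the n values from a word's syllable count and accented-syllable position. The function name and the assumption that syllable data comes from a dictionary lookup are illustrative, not part of the disclosure.

```python
def classify(num_syllables: int, accent_pos: int):
    """Classify a word as described above.

    accent_pos is the 1-indexed position of the accented syllable,
    e.g. 'tonight' (to-night) has num_syllables=2, accent_pos=2.
    Returns ('strong', n) or ('weak', weak_n, strong_n).
    """
    if accent_pos == 1:
        # Strong beat type: n counts the accented first syllable and
        # everything after it, so bell -> strong-1, temporary -> strong-4.
        return ("strong", num_syllables)
    # Weak beat type: weak-n counts the unaccented syllables before the
    # accent; strong-n counts the accented syllable and those after it.
    weak_n = accent_pos - 1
    return ("weak", weak_n, num_syllables - weak_n)

print(classify(1, 1))  # bell        -> ('strong', 1)
print(classify(4, 1))  # temporary   -> ('strong', 4)
print(classify(2, 2))  # tonight     -> ('weak', 1, 1)
print(classify(5, 4))  # abiogenic   -> ('weak', 3, 2)
print(classify(6, 5))  # abiogenetic -> ('weak', 4, 2)
```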
Fig. 1 is a diagram for illustrating an accompaniment and voice matching method according to an embodiment of the present disclosure.
As shown in Fig. 1, a single bar in four-four time is composed of four beats, among which the first beat is strong, the second beat is weak, the third beat is middle strong, and the fourth beat is weak, in a basic stream.
The bar in four-four time is repeated many times to compose a single accompaniment. In the present disclosure, a single accompaniment is composed of about thirty bars. In addition, a single bar corresponds to a single voice (an original language voice or a translation voice). A strong beat type word having an accent at the first syllable is matched so that the original language voice is located at the first beat of the corresponding bar and the translation voice is located at the third beat of the corresponding bar.
In other words, as shown in Fig. 1, in the case of strong beat type words such as bell, rooster, abnegate and temporary, the original language voice of bell, rooster, abnegate or temporary is matched with the first beat of the corresponding bar, and its translation voice is matched with the third beat of the corresponding bar.
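As a bar-level illustration of this one-word-per-bar layout, the following sketch assigns each word to a bar with the original language voice on beat 1 and the translation voice on beat 3. The event-tuple format and the romanized Korean translations are assumptions made for the example, not part of the disclosure.

```python
# (original word, translation voice) pairs; translations are illustrative.
WORDS = [("bell", "jong"), ("rooster", "sutak")]

def bar_events(words):
    """Yield (bar_index, beat, voice) events for strong beat type words,
    with beats numbered 1-4 within each four-four bar."""
    for bar, (orig, trans) in enumerate(words):
        yield (bar, 1, orig)   # strong first beat: original language voice
        yield (bar, 3, trans)  # middle strong third beat: translation voice

for event in bar_events(WORDS):
    print(event)
```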
The first beat matched with the original language voice is divided again into four equal parts (a single beat is composed of four sixteenth notes) and matched so that the strong-1 syllable is located at the 1/4 section of the first beat, the strong-2 syllable is located at the 1/4 section and the 2/4 section of the first beat, the strong-3 syllable is located at the 1/4 section to the 3/4 section of the first beat, and the strong-4 syllable is located at the 1/4 section to the 4/4 section of the first beat.
In other words, the first beat of the corresponding bar is divided into four equal parts, and the original language voice is matched to occupy, from the front, as many of these sections as the number of its strong-n syllables.
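A minimal sketch of this front-fill rule follows, assuming each beat is represented as four sixteenth-note sections labelled 1/4 to 4/4; the labels and function name are illustrative.

```python
def strong_sections(strong_n: int) -> list[str]:
    """Sections of the bar's first beat occupied by a strong-n original
    language voice, filled from the front."""
    assert 1 <= strong_n <= 4
    return [f"{i}/4" for i in range(1, strong_n + 1)]

print(strong_sections(1))  # bell      -> ['1/4']
print(strong_sections(3))  # abnegate  -> ['1/4', '2/4', '3/4']
print(strong_sections(4))  # temporary -> ['1/4', '2/4', '3/4', '4/4']
```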
In regard to the weak beat type word not having an accent at the first syllable, the strong-n syllable of the original language voice is matched with the first beat of the corresponding bar, the weak-n syllable of the original language voice is matched with the fourth beat of the previous bar, and the translation voice is matched with the third beat of the corresponding bar.
The syllable having an accent is matched to be located at the 1/4 section of the first beat, and the other syllables of the original language voice are matched with the first beat of the corresponding bar and the fourth beat of the previous bar on the basis of the syllable having an accent.
In other words, the strong-2 syllable is matched to correspond to the 1/4 section and the 2/4 section of the first beat, the strong-3 syllable is matched to correspond to the 1/4 section to the 3/4 section of the first beat, and the strong-4 syllable is matched to correspond to the 1/4 section to the 4/4 section of the first beat, while the weak-1 syllable is matched to correspond to the 4/4 section of the fourth beat of the previous bar, the weak-2 syllable is matched to correspond to the 3/4 section and the 4/4 section of the fourth beat of the previous bar, the weak-3 syllable is matched to correspond to the 2/4 section to the 4/4 section of the fourth beat of the previous bar, and the weak-4 syllable is matched to correspond to the 1/4 section to the 4/4 section of the fourth beat of the previous bar.
In more detail, as shown in Fig. 1, among weak beat type words, a word having weak-1 syllable/strong-n syllable is matched so that the weak-1 syllable of the original language voice corresponds to the 4/4 section of the fourth beat of the previous bar, and the strong-n syllable of the original language voice is matched like the strong beat type described above.
Similarly, a word having weak-2 syllable/strong-n syllable is matched so that the weak-2 syllable of the original language voice corresponds to the 3/4 section and the 4/4 section of the fourth beat of the previous bar, and the strong-n syllable of the original language voice is matched like the strong beat type described above.
In this way, the original language voice of the weak-n syllable is matched with the fourth beat of the previous bar, the original language voice of the strong-n syllable is matched with the first beat of the corresponding bar, and the translation voice is matched with the third beat of the corresponding bar.
In other words, the first beat of the corresponding bar is divided into four equal parts, and the original language voice is matched to occupy, from the front, as many of these sections as the number of its strong-n syllables; likewise, the fourth beat of the previous bar is divided into four equal parts, and the original language voice is matched to occupy, from the rear, as many of these sections as the number of its weak-n syllables.
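Combining the two fill rules, a weak beat type word can be placed as in the following sketch, again a minimal illustration under the same assumed section labels rather than the disclosure's own notation.

```python
def weak_type_placement(weak_n: int, strong_n: int) -> dict:
    """Placement for a weak-n/strong-n word: the weak-n syllables
    rear-fill the fourth beat of the previous bar, and the strong-n
    syllables front-fill the first beat of the corresponding bar."""
    assert 1 <= weak_n <= 4 and 1 <= strong_n <= 4
    # Rear-fill: weak-1 takes only 4/4, weak-2 takes 3/4-4/4, and so on.
    prev_fourth_beat = [f"{i}/4" for i in range(5 - weak_n, 5)]
    # Front-fill, exactly as for the strong beat type.
    first_beat = [f"{i}/4" for i in range(1, strong_n + 1)]
    return {"previous bar, 4th beat": prev_fourth_beat,
            "corresponding bar, 1st beat": first_beat}

# 'underbuy' (weak-2/strong-1): un-der rear-fills the 3/4 and 4/4 sections
# of the previous bar's fourth beat; the accented 'buy' lands on 1/4.
print(weak_type_placement(2, 1))
```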
Figs. 2a and 2b are diagrams for illustrating a matching method according to another embodiment of the present disclosure, where a text (image) is matched together with accompaniment and voice.
The accompaniment and the voice are matched as shown in Fig. 1. As for the text, an original language text is matched so as to be output from the fourth beat of the previous bar to the third beat of the corresponding bar, and a translation text is matched so as to be output at the second beat and the third beat of the corresponding bar.
Accordingly, after the text of a word is output on the screen, the voice of the corresponding word is output. In the present disclosure, if a text (image) is output first, the voice is output 0.5 to 0.7 second later.
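For illustration, the following sketch converts this timing into seconds for one bar. The tempo of 120 beats per minute and the 0.6 second delay are assumed values; the disclosure specifies only a 0.5 to 0.7 second delay, not a tempo.

```python
TEMPO_BPM = 120              # assumed tempo, not specified in the disclosure
BEAT_SEC = 60.0 / TEMPO_BPM  # duration of one quarter-note beat

def word_schedule(bar_index: int, delay_sec: float = 0.6):
    """Start times (in seconds) of the text and voice for a bar's word.

    The original language text appears at the fourth beat of the previous
    bar; the voice follows delay_sec (0.5-0.7 s) later.
    """
    bar_start = bar_index * 4 * BEAT_SEC  # four beats per bar
    text_on = bar_start - BEAT_SEC        # fourth beat of the previous bar
    voice_on = text_on + delay_sec
    return text_on, voice_on

print(word_schedule(1))  # at 120 BPM: text at 1.5 s, voice at 2.1 s
```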
Figs. 3a to 5b show examples of a screen actually output in a case where a text (image) is matched according to another embodiment of the present disclosure.
In the present disclosure, if an original language text is output to the screen, a translation text is not newly output after the original language text disappears; rather, the translation text is output to overlap the original language text.
Accordingly, the original language voice and the translation voice are played to be matched with a text or image of a word.
While the exemplary embodiments have been shown and described, it will be understood by those skilled in the art that various changes in form and details may be made thereto without departing from the spirit and scope of this disclosure as defined by the appended claims.

Claims (8)

    [CLAIMS]
    [Claim 1]
    An accompaniment and voice matching method for a word learning music file, wherein a word voice composed of an original language voice and a translation voice is matched with accompaniment in four-four time where a single bar is composed of four beats of strong (the first beat)/weak (the second beat)/middle (the third beat)/weak (the fourth beat), wherein the matching is performed so that a single word corresponds to the single bar, wherein the word is classified into a strong beat type having an accent on a first syllable and a weak beat type not having an accent on the first syllable, wherein the word of the strong beat type is matched so that the original language voice is located at the first beat of the corresponding bar and the translation voice is located at the third beat of the corresponding bar, and wherein the word of the weak beat type is matched so that the original language voice is located at the fourth beat of a previous bar and the first beat of the corresponding bar and the translation voice is located at the third beat of the corresponding bar.
    [Claim 2]
    The accompaniment and voice matching method for a word learning music file according to claim 1, wherein the word of the strong beat type is classified into strong-n (n=1-4) syllables according to the number of syllables after the first syllable, the first beat of the corresponding bar is divided into four equal parts, and the word of the strong beat type is matched to occupy corresponding sections of the first beat, divided into four equal parts, from the front as much as the number of corresponding strong-n syllables of the original language voice.
    [Claim 3]
    The accompaniment and voice matching method for a word learning music file according to claim 1, wherein the word of the weak beat type is classified into strong-n (n=1-4) syllables and weak-n (n=1-4) syllables according to the number of syllables after the syllable without an accent and the number of syllables after the syllable with an accent, and the weak-n syllables are matched to correspond to the fourth beat of the previous bar and the strong-n syllables are matched to correspond to the first beat of the corresponding bar.
    [Claim 4]
    The accompaniment and voice matching method for a word learning music file according to claim 3, wherein the first beat of the corresponding bar is divided into four equal parts and matched to occupy corresponding sections of the first beat, divided into four equal parts, from the front as much as the number of corresponding strong-n syllables of the original language voice.
    [Claim 5]
    The accompaniment and voice matching method for a word learning music file according to claim 3, wherein the fourth beat of the previous bar is divided into four equal parts and matched to occupy corresponding sections of the fourth beat, divided into four equal parts, from the rear as much as the number of corresponding weak-n syllables of the original language voice.
    [Claim 6]
    The accompaniment and voice matching method for a word learning music file according to any one of claims 1 to 5, wherein, in a text (image) of the word, an original language text is matched to be output from the fourth beat of the previous bar to the third beat of the corresponding bar on a screen, and a translation text is matched to be output at the second beat and the third beat of the corresponding bar on the screen.
    [Claim 7]
    The accompaniment and voice matching method for a word learning music file according to claim 6, wherein the matching is performed so that the word voice is output 0.5 to 0.7 second after the text of the word is output on the screen.
    [Claim 8]
    The accompaniment and voice matching method for a word learning music file according to any one of claims 1 to 5, wherein the original language voice, the translation voice and the text or image of the word are matched and generated.
SG2012090635A 2011-07-07 2012-07-05 Accompaniment and voice matching method for word learning music file SG187533A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020110067192A KR101112422B1 (en) 2011-07-07 2011-07-07 Matching mehod of voice and accompaniment
PCT/KR2012/005331 WO2013005997A2 (en) 2011-07-07 2012-07-05 Method for matching accompaniment to voice for word study music file

Publications (1)

Publication Number Publication Date
SG187533A1 true SG187533A1 (en) 2013-03-28

Family

ID=45840190

Family Applications (1)

Application Number Title Priority Date Filing Date
SG2012090635A SG187533A1 (en) 2011-07-07 2012-07-05 Accompaniment and voice matching method for word learning music file

Country Status (5)

Country Link
JP (1) JP2014500525A (en)
KR (1) KR101112422B1 (en)
CN (1) CN103221987A (en)
SG (1) SG187533A1 (en)
WO (1) WO2013005997A2 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105280206B (en) * 2014-06-23 2018-08-07 广东小天才科技有限公司 A kind of playback method of audio, device
CN107247768A (en) * 2017-06-05 2017-10-13 北京智能管家科技有限公司 Method for ordering song by voice, device, terminal and storage medium
KR20190046312A (en) 2017-10-26 2019-05-07 주식회사 앰버스 Apparatus and method to improve english skills
KR20210015064A (en) * 2019-07-31 2021-02-10 삼성전자주식회사 Electronic device and method for controlling the same, and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3507005B2 (en) * 2000-05-18 2004-03-15 明宏 川村 Foreign language learning support device
KR20050007017A (en) * 2003-07-10 2005-01-17 남창우 The method for realizing language teaching system on periodical play with music
KR20060028839A (en) * 2004-09-30 2006-04-04 박순복 The learning method of the english language spontaneously paraphrased by association through meaning and sound, and the recorded and the electronic media using the method
JP5168239B2 (en) * 2009-06-30 2013-03-21 ブラザー工業株式会社 Distribution apparatus and distribution method
KR101025665B1 (en) * 2009-10-16 2011-03-30 박상철 Method and device for music-based language training

Also Published As

Publication number Publication date
KR101112422B1 (en) 2012-02-27
WO2013005997A2 (en) 2013-01-10
JP2014500525A (en) 2014-01-09
WO2013005997A3 (en) 2013-04-11
CN103221987A (en) 2013-07-24

Similar Documents

Publication Publication Date Title
CN1938756A (en) Prosodic speech text codes and their use in computerized speech systems
Maekawa Production and perception of'paralinguistic'information
Rallabandi et al. On Building Mixed Lingual Speech Synthesis Systems.
SG187533A1 (en) Accompaniment and voice matching method for word learning music file
Downing On pitch lowering not linked to voicing: Nguni and Shona group depressors
Gilbert De-scribing orality: performance and the recuperation of voice
Vijayakrishnan The grammar of Carnatic music
Nor et al. Lexical features of Malaysian English in a local English-language movie, Ah Lok Café
US20090291419A1 (en) System of sound representaion and pronunciation techniques for english and other european languages
Goble Music or musics? An important matter at hand
Pyshkin et al. Multimodal modeling of the mora-timed rhythm of Japanese and its application to computer-assisted pronunciation training
Mohd Nasir et al. Phonological nativisation of Malaysian English in the cartoon animation series “Upin and Ipin: the helping heroes”
KR20140047838A (en) Hangul education system based on character-generative principles of cheonjyin
Tu et al. Error patterns of Mandarin disyllabic tones by Japanese learners
KR20170060759A (en) Text for learning a foreign language
Nelson Konnakkol Manual: An Advanced Course in Solkattu
Latham Listening to Modernism: New Books in the History of Sound
KR101669408B1 (en) Apparatus and method for reading foreign language
Rojczyk Vowel Quality and Duration as a Cue to Word Stress for Non-native Listeners: Polish Listeners’ Perception of Stress in English
Charoy Accommodation to non-native accented speech: Is perceptual recalibration involved?
Nadeem et al. Stress out of stress: stressing unaccented syllables dilemma
Kouega RP and the Cameroon English accent: An overview
Juan-Checa Comparing phonetic difficulties by EFL learners from Spain and Japan.
Cheek et al. Perfect Italian Diction for Singers: An Authoritative Guide
Odé Communicative functions and prosodic labelling of three Russian pitch accents