WO2012173582A1 - Using speech synthesis for language training with picture synchronization - Google Patents
- Publication number
- WO2012173582A1 (PCT/TR2012/000092)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- picture
- speech synthesis
- client
- word
- server
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B19/00—Teaching not covered by other main groups of this subclass
- G09B19/06—Foreign languages
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/06—Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
Abstract
The invention relates to a method for using speech synthesis in language training with picture synchronization, including the process step of providing feedback to the client (2) about which word is to be synthesized at each moment in a continuous text flow, by means of the speech synthesis motor; characterized by the following consecutive steps: the text to be synthesized is sent to the server (3) by the client (2); the server (3) looks for the most suitable picture for each word in the picture database (5) in the memory unit (4); the server (3) sends the most suitable pictures found in the picture database (5) to the client (2); the client (2) shows the pictures delivered to it by the server (3) synchronically on its screen, together with the pronunciation of each word in order.
Description
DESCRIPTION
USING SPEECH SYNTHESIS FOR LANGUAGE TRAINING WITH PICTURE
SYNCHRONIZATION
Technical Field
The invention relates to the use of speech synthesis (Text-To-Speech) for language training, by showing a picture expressing the word to be synthesized during synthesis.
The invention particularly relates to the use of speech synthesis with picture synchronization for language training, characterized by the following consecutive steps: the text (a word, sentence, paragraph, or a whole textbook) to be synthesized is sent to the server by the client; the server looks for the most suitable picture for each word in the text in the picture database in the memory unit; the server sends the most suitable picture found in the picture database to the client; the client shows the picture delivered to it by the server synchronically on its screen, together with the pronunciation of each word in sequence.
Prior Art
The use of speech synthesis for language training is not a new method. Most people use speech synthesis to learn how to pronounce the words of a foreign language. Since most software highlights the current unit of text during synthesis, users rely on such software to memorize/learn the pronunciation of words. However, the user does not learn anything about the meaning of the synthesized text.
In avatar applications used in speech synthesis (WO2008096099), the aim is to give the avatar a facial expression depending on the characteristics of the phoneme to be synthesized. Such applications relate to the avatar taking on a sentimental expression according to the structure of the word being synthesized. The only thing illustrated on the screen is a "talking head".

Within the state of the art, applications performing speech synthesis are not intended for language learning. For example, in avatar applications, while the word "pen" is being synthesized, not a pen image but a talking head is shown on the screen.
Also, the current applications do not offer the pronunciation, spelling, and related picture of a word all at once along a continuous text flow. For example, TR200505402 is based on learning the words located in a database by showing the meaning and the related picture of each word on different parts of the screen while pre-recorded audio files are played from the speakers. That invention also aims at language training; the main difference, however, is that its system does not include speech synthesis and cannot be used to pronounce an arbitrary word sequence. The system can therefore only play audio for the words present in its database and cannot support any given continuous text. KR20090132692 also represents the content of a text as pictures, but the system does not include any speech synthesis; consequently, it does not offer any voice data as pronunciation of the words.
JP2008129434 offers speech synthesis; however, the system only reflects an atmosphere corresponding to human sensibility, such as bright or dark. It is not concerned with presenting pictures relevant to the text.
As a consequence, while speech synthesis is currently used only for pronunciation in language learning, the present invention is the only method that aims to allow users to truly understand and speak a foreign language by means of speech synthesis.
Object of the Invention
The invention, inspired by the current state of the art, aims to solve the above-mentioned disadvantages.
The object of the invention is to use speech synthesis for language training with picture synchronization.
A further object of the invention is to show a picture expressing each individual word in a continuous text flow, rather than just animating the phoneme.
A further object of the invention is to facilitate language training/learning by way of the voice-picture match formed in the memory of the user, who hears the pronunciation of the synthesized word while seeing the picture illustrating that word on the screen.
A further object of the invention is to enable the user to understand what is spoken: on hearing a word that was used in speech synthesis, the picture shown during synthesis appears in the mind of the user.
A yet further object of the invention is to allow the user to speak a foreign language, as well as understand it, thanks to the voice-picture matches recorded in the mind of the user during speech synthesis.

A further object of the invention is to help the user speak the foreign language, by making the voice corresponding to the objects seen in daily life, the concepts encountered, and the images imagined appear in the mind of the user.

To achieve the above-mentioned purposes, the invention is a system and method for using speech synthesis in language training with picture synchronization, including a speech synthesis motor providing feedback to the client about which word is to be synthesized at each instant in a continuous text flow, and composed of a client synchronically showing the picture delivered to it by the server on the screen along with the pronunciation of the word, a memory unit where a picture database is provided, and a server which finds the most suitable picture for the word to be synthesized from the database in the memory unit and sends this picture to the client. The system is characterized by the following steps performed by said elements: the words to be synthesized are sent to the server by the client in order; the server looks for the most suitable picture for each word in the picture database in the memory unit; the server sends the most suitable pictures found in the picture database to the client; the client shows the pictures delivered to it by the server synchronically on its screen, together with the voice pronunciation of each word in order.

The structural and characteristic aspects and all the advantages of the present invention will be understood more clearly by means of the following figures and the detailed description written with reference to these figures; therefore, these figures and the detailed description should be taken into account when making an evaluation.
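The four consecutive steps enumerated above can be sketched in code. This is a minimal, hypothetical illustration: the names `PictureServer`, `find_best_picture`, and the `speak`/`show` callables are invented for the sketch, and a real client and server would communicate over a network rather than through a direct method call.

```python
class PictureServer:
    """Stand-in for the server (3) with its picture database (5)."""

    def __init__(self, picture_db):
        self.picture_db = picture_db  # word -> picture file name

    def find_best_picture(self, word):
        # Steps 2-3: look for the most suitable picture for the word
        # (reduced here to an exact-match lookup) and send it back.
        return self.picture_db.get(word.lower().strip(".,!?"), "default.png")


def synthesize_with_pictures(text, server, speak, show):
    """Client (2): step 1, send each word to the server in order;
    step 4, show the returned picture in synchronization with the
    word's pronunciation."""
    shown = []
    for word in text.split():
        picture = server.find_best_picture(word)
        show(picture)   # picture appears on the client's screen...
        speak(word)     # ...while the word is being pronounced
        shown.append((word, picture))
    return shown
```

Words with no database entry fall back to a placeholder image, so the text flow is never interrupted.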
The Figures Helping the Invention to Be Understood
Fig. 1 is a schematic diagram illustrating the components used in the preferred embodiment of the system according to the invention wherein speech synthesis is used in language training with picture synchronization.
Description of Part References
1. User
2. Client
3. Server
4. Memory unit
5. Picture database
The drawings are not necessarily to scale, and details that are not required for understanding the present invention may be omitted. Apart from that, elements that are at least substantially identical, or that have at least substantially identical functions, are shown with the same reference numeral.
Detailed Description of the Invention

In the detailed description, the preferred embodiments and process steps of the system according to the invention, wherein speech synthesis is used in language training with picture synchronization, are explained only for the subject to be better understood.

In the system according to the invention, at least one client (2) having at least one speech synthesis motor thereon, at least one server (3), and at least one memory unit (4) where the picture database (5) is provided are used (Fig. 1). The speech synthesis motor is positioned on the client (2) and has the same function as in prior-art speech synthesis systems: its role is to provide feedback to the client (2) about which word is to be synthesized at each instant in a continuous text flow. Within the system, preferably, mobile phones, PDAs (pocket computers), tablet computers, or computers are used as the client (2) and the server (3); and the memory unit (4) is preferably a hard disk connected to the server (3) (Fig. 1).

In the method where speech synthesis is used in language training with picture synchronization, first, the text to be synthesized is provided in the client (2), or the user (1) enters said text or word himself/herself. When the user (1) wants to synthesize a text, i.e. when he/she wants to get the pronunciation and meaning of the words in the text, he/she states through the client (2) that he/she wants to perform said process. Feedback is provided to the client (2) about which word is to be synthesized at each instant in the continuous text flow, by means of the speech synthesis motor positioned on the client (2). The client (2) sends the words in the text to be synthesized to the server (3) separately. The server (3) looks for and finds the most suitable picture for each word to be synthesized in the picture database (5) in the memory unit (4) connected thereto. The server (3) sends the found picture back to the client (2). The client (2), in turn, shows the delivered pictures on the screen for the user (1), in synchronization with the pronunciation of each word.
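The feedback mechanism described above can be sketched as a word-level callback. This is an assumption-laden illustration: the `on_word` callback interface is invented for the sketch (the patent specifies only that the motor reports which word is being synthesized), though many real TTS engines expose comparable word-boundary events.

```python
class SpeechSynthesisMotor:
    """Toy motor that walks a continuous text and fires a per-word
    callback, i.e. the feedback channel to the client (2)."""

    def __init__(self, on_word):
        self.on_word = on_word

    def synthesize(self, text):
        for word in text.split():
            self.on_word(word)  # "this word is being synthesized now"
            # ...audio for `word` would be produced here...


def run_client(text, fetch_picture, display):
    """Wire the motor's feedback to the fetch/display steps: on each
    word event, ask the server (3) for a picture and show it in sync."""
    events = []

    def on_word(word):
        picture = fetch_picture(word)  # request to the server (3)
        display(picture)               # shown while the word is spoken
        events.append((word, picture))

    SpeechSynthesisMotor(on_word).synthesize(text)
    return events
```

The callback design keeps the client reactive: the picture for each word is fetched exactly when the motor reaches that word, which is what keeps picture and pronunciation synchronized.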
The client (2) may be the same machine as the server (3), or it may be a different machine (mobile phone, PDA, tablet computer, computer) from the server (3). This flexibility is intended to let any user, with or without space constraints, use the system.
In the system for using speech synthesis in language training with picture synchronization, since pictures are shown during the synthesis of the texts, aging of the system is prevented. That is, the user (1) is not shown the same picture list constantly and does not have to listen to a fixed announcement. Instead, in step with the ever-changing world, the user (1) listens to changing, updated texts, and pictures suitable for those texts are shown. In time, the only thing to be renewed/improved is the picture database (5).
The system for using speech synthesis in language training with picture synchronization is a system based on showing the picture illustrating the synthesized text. For example, if the synthesized word is "pen", a pen image will be shown on the screen of the client (2); similarly, if it is "bag", a bag image will be shown.
Moreover, the primary object of the system and method for using speech synthesis in language training with picture synchronization is language training. In the system, a picture concerning the meaning of the synthesized word is shown. The picture shown on the screen will be the picture of an object if the word to be synthesized refers to an object, a picture expressing an action if the word is a verb, and a picture expressing a sentiment if the word denotes a sentiment. For example, while the word "pen" is being synthesized, the picture shown on the screen will be independent of the context in which the word "pen" is used: the same pen image will be displayed whether the word "pen" occurs in a text expressing happiness or in a text expressing sadness.
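The context-independent lookup described above amounts to a mapping keyed by the word alone. The sketch below is illustrative; the entries and category folders are invented examples, and the category (object, verb, sentiment) affects only which picture is stored, not how it is retrieved.

```python
# Toy picture database (5): one picture per word, keyed by the word only.
PICTURE_DB = {
    "pen": "objects/pen.png",         # word refers to an object
    "run": "verbs/run.png",           # word is a verb
    "happy": "sentiments/happy.png",  # word denotes a sentiment
}


def picture_for(word, context=None):
    """Return the picture for `word`. The `context` argument is accepted
    but deliberately ignored: the choice of picture never depends on the
    surrounding text."""
    return PICTURE_DB.get(word.lower())
```

Because the lookup ignores context, "pen" in a happy text and "pen" in a sad text yield the same image, exactly as the paragraph above describes.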
The system and method for using speech synthesis in language training with picture synchronization directly deal with the meaning of each word appearing in text.
The system and method for using speech synthesis in language training with picture synchronization according to the invention operate similarly for any user who wants to learn English as a foreign language. For instance, while the word "book" (meaning "kitap" in Turkish) is being synthesized, the server (3) will find the book image best expressing the meaning of "book" in the picture database (5) and send it to the client (2). The client (2), in turn, will show the book image to the user (1) in synchronization with its pronunciation. Thus, while the word "book" is displayed on the screen, the user will not only hear the pronunciation of the word "book" from the client (2), but will also see the picture best expressing the word "book", i.e. the book image, on the screen of the client (2), in synchronization with the pronunciation of said word. The voice-picture matches recorded in the mind of the user (1) during speech synthesis will thus allow the user to speak a foreign language as well as understand it. The voices corresponding to the objects the user sees in daily life, the concepts he/she encounters, and the images he/she imagines will appear in the mind of the user (1); hence, the user will be able to speak the foreign language.
Claims
1. The invention is a system for using speech synthesis in language training with picture synchronization, including at least one speech synthesis motor that provides feedback to the client (2) about which word is to be synthesized at that moment; characterized in that it comprises: at least one client (2) displaying the pictures delivered by the server (3) to the user (1), in synchronization with the pronunciation of each word in a continuous text flow; at least one memory unit (4) where the picture database (5) is provided; and at least one server (3) that finds the most suitable picture for each word to be synthesized from the picture database (5) in the memory unit (4) and sends that picture to the client (2).
2. The system for using speech synthesis in language training with picture synchronization according to Claim 1; characterized in that said client (2) is a mobile phone, a PDA (pocket computer), a tablet computer, or a computer.
3. The system for using speech synthesis in language training with picture synchronization according to Claims 1 and 2; characterized in that said server (3) is a mobile phone, a PDA (pocket computer), a tablet computer, or a computer.
4. The system for using speech synthesis in language training with picture synchronization according to Claims 1 to 3; characterized in that said memory unit (4) is a hard disk.
5. The invention is a method for using speech synthesis in language training with picture synchronization, including the process step of providing feedback to the client (2) about which word is to be synthesized in a continuous text flow, by means of a speech synthesis motor; characterized in that it comprises the following process steps in order: the text to be synthesized is sent to the server (3) by the client (2); the server (3) looks for the most suitable picture for each word in the text in the picture database (5) in the memory unit (4); the server (3) sends the most suitable pictures found in the picture database (5) to the client (2); the client (2) displays the pictures delivered to it by the server (3) on its screen, in synchronization with the pronunciation of each word.
6. The method for using speech synthesis in language training with picture synchronization according to Claim 5; characterized in that a mobile phone, a PDA (pocket computer), a tablet computer, or a computer may be used as said client (2).
7. The method for using speech synthesis in language training with picture synchronization according to Claims 5 and 6; characterized in that a mobile phone, a PDA (pocket computer), a tablet computer, or a computer may be used as said server (3).
8. The method for using speech synthesis in language training with picture synchronization according to Claims 5 to 7; characterized in that a hard disk is used as said memory unit (4).
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TR2011/05998 | 2011-06-17 | ||
TR201105998 | 2011-06-17 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012173582A1 true WO2012173582A1 (en) | 2012-12-20 |
Family
ID=46851572
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/TR2012/000092 WO2012173582A1 (en) | 2011-06-17 | 2012-06-06 | Using speech synthesis for language training with picture synchronization |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2012173582A1 (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4695975A (en) * | 1984-10-23 | 1987-09-22 | Profit Technology, Inc. | Multi-image communications system |
US5885083A (en) * | 1996-04-09 | 1999-03-23 | Raytheon Company | System and method for multimodal interactive speech and language training |
WO2001097198A1 (en) * | 2000-06-13 | 2001-12-20 | Bahn Jee Lola Tan | An educational device |
US20060188852A1 (en) * | 2004-12-17 | 2006-08-24 | Gordon Gayle E | Educational devices, systems and methods using optical character recognition |
JP2008129434A (en) | 2006-11-22 | 2008-06-05 | Oki Electric Ind Co Ltd | Voice synthesis server system |
WO2008096099A1 (en) | 2007-02-05 | 2008-08-14 | Amegoworld Ltd | A communication network and devices for text to speech and text to facial animation conversion |
KR20090132692A (en) | 2008-06-23 | 2009-12-31 | (주)지아이 | Method and system for converting text to image |
US20110107217A1 (en) * | 2009-10-29 | 2011-05-05 | Margery Kravitz Schwarz | Interactive Storybook System and Method |
- 2012-06-06: PCT/TR2012/000092 filed as WO2012173582A1 (active Application Filing)
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12759283 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 12759283 Country of ref document: EP Kind code of ref document: A1 |