WO2011071267A2 - Method for playing dynamic English graphics of English sentences for speed reading - Google Patents
Method for playing dynamic English graphics of English sentences for speed reading
- Publication number
- WO2011071267A2 (PCT/KR2010/008503)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sentence
- language element
- dynamic
- graphic
- merge
- Prior art date
Classifications
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/02—Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
- G09B17/00—Teaching reading
- G09B17/04—Teaching reading for increasing the rate of reading; Reading rate control
- G09B19/00—Teaching not covered by other main groups of this subclass
- G09B19/06—Foreign languages
Definitions
- The present invention relates to a screening (display) technique that, as a scientific English-teaching method, intentionally stimulates the brain structures connected to the optic nerve by dynamically changing the graphic pattern of English text.
- Sequential, non-specific, and rarely used sentences with abstract meanings can hardly be said to be fully translatable yet; this, however, is not a limitation of current translation programs themselves, but of the language databases embedded in modern search and translation robots and of the artificial intelligence that draws on them (and is sometimes a matter of economics or policy). For example, even where human post-editing is still needed for smooth translation in a specific field, an automatic translator improved by storing and learning additional language data from that field can translate even complicated, professional sentences in that field smoothly and well.
- The brain mechanism by which a person reads and interprets text in his or her native language differs from the brain mechanism by which the same person reads and interprets text in a second, foreign language.
- The present invention takes the three laws of Universal Grammar, which so far remain theoretical constructs in linguistics: merge (consolidation of separable contents), piping (connection of causal elements), and moving (displacement interlocked with the subject).
- It realizes sentence assembly under Universal Grammar as actual physical phenomena on screen, at a speed equivalent to that of the mother tongue, so that learners can experience it.
- The present invention inputs a sentence to a computer and implements a method of materializing the merge, piping, and moving structure of that sentence, on a computer screen, as a graphic model with dynamic imagery driven by a computer-based display device, and of actually playing it on the screen.
- The shapes of the characters captured in a predetermined sentence area are converted, using the sentence recognition program introduced in (Background 2.), into individual language elements that can be moved.
- The merge, piping, and moving motion of those individual language elements, according to each principle, is then transformed, using the character-animating tool introduced in (Background 3.), into dynamic sentence data encoded as geometric plane symbols or physical stereoscopic motion information.
- The dynamic sentence data is output to the screen by changing the shape and movement of each individual language element relative to a meaningful anchor element in the sentence determined by the sentence recognition unit, that is, a backbone that determines the overall meaning of the sentence.
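The data the steps above imply can be sketched as follows: each recognized language element keeps its original ("static") graphic information in its own store, plus a slot for the dynamic motion assigned later by the Universal Grammar stage. All class and field names here are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class StaticGraphic:
    text: str             # character content shown on screen
    size: int = 12        # font size
    color: str = "black"
    background: str = "white"

@dataclass
class LanguageElement:
    region_id: int
    static: StaticGraphic                       # original graphic info, kept separately
    meaningful: bool = True                     # element (1) vs. meaningless element (1')
    motion: list = field(default_factory=list)  # dynamic motion info assigned later

def elements_from_sentence(sentence: str) -> list:
    """Split a sentence into movable language elements (one per word here)."""
    return [LanguageElement(i, StaticGraphic(word))
            for i, word in enumerate(sentence.split())]

elems = elements_from_sentence("Dog that bites the chain loosed get free")
```

A real implementation would split on recognized phrase boundaries rather than single words; the one-word-per-element split is only a stand-in.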
- The present invention relates to a technology for giving a substantive graphic effect to sentences recognized by computer artificial intelligence as disclosed so far; accordingly, the accuracy of that recognition judgment (in fact, a point on which linguists' opinions diverge) does not infringe upon or damage the inventive dynamic-graphics conversion concept itself.
- It consequently provides the optic nerve, and the brain connected to it, with an optimal training tool for operating the sentence-reading principles of Universal Grammar.
- When the graphic screening method according to the present invention is implemented with an appropriate text input device and a display device interlocked with a computer equipped with a language dictionary, after about 10 weeks (approximately 2,000 pages of screens) the user's reading speed increases by 300% on average, while comprehension improves by more than 30% on average. This is more than three times better than direct reading with sequential translation, the traditional book-based form of speed reading.
- FIG. 1 is a flow chart showing each execution step of the dynamic English graphic screening method of the present invention.
- FIG. 3 is an embodiment in which the present invention is applied to a sentence interpreted only by the concept of piping.
- FIGS. 5 and 6 are embodiments in which the present invention is applied to a sentence in which the concept of moving is added to the concepts of merge and piping.
- FIGS. 6 and 7 are survey tables measuring academic achievement after a period of applying the screening method of the present invention to test subjects.
- Expressing the technical features of the present invention, with reference to FIG. 1, as a time-based process inside the computer:
- In step (B-1), the sentence data derived through step A is reclassified according to the three main sentence-assembly steps of Universal Grammar, and the resulting dynamic motion information is allocated to each language element region (1) and each blank region (2).
- In step (B-2), the allocated dynamic motion information is either replaced by new static graphic information, formed by inserting a predetermined symbol into the original static graphic information held by each region, and displayed on the screen, or replaced by new dynamic graphic information that transforms the graphic information held by each region itself.
- In step C, the reference point of the sentence is moved so that steps (A-1) through (B-2) can be applied repeatedly to the following sentence.
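The A → B → C loop can be sketched in miniature as below. The helper names (`recognize`, `assign_motion`, `render`, `play`) and the trivial motion rule are assumptions standing in for the patent's sentence-recognition, motion-allocation, and display stages.

```python
def recognize(sentence):                      # step A: split into elements and spaces
    words = sentence.split()
    return {"elements": words, "spaces": len(words) - 1}

def assign_motion(parsed):                    # step B-1: allocate motion info per element
    return [{"element": w, "motion": "merge" if i % 2 else "none"}
            for i, w in enumerate(parsed["elements"])]

def render(motions):                          # step B-2: produce one screen frame
    return [f"{m['element']}({m['motion']})" for m in motions]

def play(text):
    frames = []
    for sentence in text.split("."):          # step C: advance the reference point
        sentence = sentence.strip()
        if not sentence:
            continue                          # C-1/C-2: clear state, move to next sentence
        frames.append(render(assign_motion(recognize(sentence))))
    return frames

frames = play("Dogs bite. They get free.")
```

Each iteration of the loop corresponds to one pass of steps A through C over a single sentence.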
- The sentence input to the computer is first presented to the user like a spread-open book on the screen; unlike a conventional computer book screen, however, the assembly structure of the sentence under Universal Grammar is displayed dynamically on top of the sentence. Through repeated reading, the user therefore develops a gaze that systematically follows the individual language elements of the sentence and their associative structure.
- The concept of merge means that phrases that could otherwise fall apart are merged together.
- FIG. 2 is an embodiment in which the screening method of the present invention is applied to a sentence interpreted only by the concept of merge.
- A given sentence is recognized and interpreted by the computer's sentence-recognition processor and then classified into meaningful individual language elements (1) and meaningful spaces (2).
- The original static graphic information held by each language element (1) and space (2) is stored in a separate repository assigned to that region.
- The static graphic information comprises the content, size, and shape of the characters displayed on the screen, together with the size and color of their background.
- Dynamic graphic conversion may be performed such that the original letters of the two language elements (1) merged as Type 2 become lighter while new letters protrude and approach each other.
- Types 1 and 2 of FIG. 2 correspond to step (B-2-2) in the flowchart of FIG. 1. That is, to reproduce the operating structure of merge faithfully, a dynamic graphic transformation (B-2-2), in which meaningful individual language elements (1) protrude, shrink, or are deleted while approaching each other, can be more effective than an overlapping static graphic transformation (B-2-1).
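The Type-2 merge animation just described can be sketched as a frame generator: the two original elements fade while protruded copies move toward each other. The coordinates, frame count, and linear easing are illustrative assumptions.

```python
def merge_frames(left_x, right_x, steps=5):
    """Yield (left_pos, right_pos, original_opacity) per animation frame."""
    frames = []
    for t in range(steps + 1):
        f = t / steps                       # animation progress, 0.0 -> 1.0
        mid = (left_x + right_x) / 2        # meeting point of the two elements
        frames.append((
            left_x + (mid - left_x) * f,    # protruded copy of the left element
            right_x + (mid - right_x) * f,  # protruded copy of the right element
            round(1.0 - f, 2),              # original letters become lighter
        ))
    return frames

frames = merge_frames(0.0, 100.0, steps=4)
```

The first frame shows the elements at their original positions at full opacity; the last frame shows both copies meeting at the midpoint with the originals fully faded.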
- FIG. 3 is an embodiment in which the screening method of the present invention is applied to a sentence that can be interpreted only by the concept of piping.
- Modern computer automatic translation programs actually store a very large number of different sentences.
- The number of sentences that can be referenced in the dictionary in near real time is enormous; by referencing the dictionary appropriately, meaningless language elements (1') and meaningless spaces (2') can be filtered out from the meaningful individual language elements (1) and the meaningful spaces (2) that separate them. That is, the criterion for dividing meaningful from meaningless language elements and spaces is basically the similarity between the phrases or sentences referenced in the dictionary and the currently input sentence. Language elements that are meaningful in one sentence may therefore be judged meaningless in another, and the same applies to the spaces surrounding them.
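The meaningful/meaningless split can be sketched with a toy dictionary lookup. The word set and the membership test are deliberately trivial stand-ins for the patent's language database; a real system would compare whole phrases against stored sentences by similarity.

```python
# Hypothetical mini-dictionary standing in for the language database.
DICTIONARY = {"dog", "bite", "bites", "chain", "free", "get", "loosed"}

def classify(words):
    """Tag each word as meaningful (1) or meaningless (1') by dictionary hit."""
    # Trivially simple "similarity": exact lowercase membership.
    return [(w, w.lower() in DICTIONARY) for w in words]

tagged = classify(["Dog", "that", "bites", "the", "chain"])
```

Under this toy criterion, content words hit the dictionary and are tagged meaningful, while function words such as "that" and "the" are filtered out.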
- The method of screening the converted graphics following the dynamic graphic conversion of the present invention provides the learner with the most efficient visual and brain stimulation attainable at an accuracy level equal to that of current, commercially accurate automatic sentence recognition technology.
- If, before applying the present invention to a trainee, the language dictionary referenced by the system's AI is properly optimized in advance for the field of the sentences to be input, sentence structures of higher similarity are referenced faster and more often, and the correct sentence structures are accordingly converted into dynamic graphics and screened for the trainee. Even without such preliminary work, however, the screening method and screened images of the present invention do not lose their intrinsic effect of intentionally and rapidly stimulating the trainee's visual system and the brain connected to it.
- The dynamic graphic transformation of FIG. 3 shows a static overlapping animating pattern in which an arrow (200) implying causality, faithfully reflecting the basic concept of piping, is overlaid on the original sentence.
- In the following embodiment, the concepts of FIGS. 2 and 3 are properly mixed.
- In step A of recognizing the sentence, the computer determines the meaningful individual language elements and then judges the left and right spaces surrounding them, distinguishing each as a meaningful merge-space (2-merge) or a meaningful piping-space (2-piping).
- Dog that bites the chain loosed get free.
- Type 1 animating pulls "Dog" and "bites" toward each other and then displays arrows from "bites" to "free".
- "Dog" and "bites" are merged with each other, and the words after "bites" are connected causally by piping.
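The combined annotation of the example sentence can be sketched as below: the merge pair is pulled into one unit, and the remaining words are chained with causal arrows. The `annotate` helper and its output format are illustrative assumptions, not the patent's rendering.

```python
SENTENCE = "Dog that bites the chain loosed get free".split()

def annotate(words, merge_pair=("Dog", "bites"), pipe_from="bites"):
    """Return (merged unit, piping chain) for one sentence."""
    merged = "+".join(merge_pair)          # merge: elements pulled together
    start = words.index(pipe_from)
    piped = " -> ".join(words[start:])     # piping: causal arrows onward to "free"
    return merged, piped

merged, piped = annotate(SENTENCE)
```

The merged unit corresponds to the Type 1 pull of "Dog" and "bites", and the arrow chain to the piping overlay that follows it.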
- Type 2 is proposed for faster gaze processing and more dynamic sentence movement.
- The various animated graphic shapes, such as protruding from the surface, bumping into the following word, or fading into light green or yellow at the place where the existing word was located, are basically obtained by converting the corresponding text area into scanned picture information that is retained in place of the text data.
- FIG. 5 is a diagram illustrating the dynamic graphic transformation of the present invention for a sentence that mixes the concepts of merge and piping with the concept of moving.
- Moving can be conceptually defined as meanings connected to the subject being interlocked with one another as if linked by chains.
- Moving can be applied at the same time as merge or before merge, and can be converted into a dynamic graphic on the screen as if an actor walked across the sentence.
- Type 1 animating (FIG. 5) shows a dynamic graphic sentence composed of overlapping, relatively static graphics. As described above, each of the five sentences is a screening state (still image) at a specific point in time; if the sentence is animated continuously on the displayed screen, a dynamically graphic-converted sentence video can be implemented.
- Type 2 animating sequentially lists still images, at specific points in time, of a dynamically transformed graphic image rather than static overlapping graphics. For example, in the second and fifth sentences, the movements of "They" show a connection of meaning as if the word floated slightly above the sentence.
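The floating "moving" display can be sketched as placing intermediate copies of the moving element, lifted slightly above the text baseline, between its two anchor positions. The coordinates, hop count, and lift distance are assumed values for illustration.

```python
def moving_frames(start_x, end_x, baseline_y, hops=3, lift=10):
    """Place hop copies of the moving element between its two anchors."""
    frames = []
    for i in range(1, hops + 1):
        x = start_x + (end_x - start_x) * i / (hops + 1)   # evenly spaced hops
        frames.append((round(x, 1), baseline_y - lift))    # floated above the line
    return frames

hops = moving_frames(0, 80, baseline_y=100)
```

Played in sequence, the hops read as the element walking across the sentence from its first occurrence to the element it is interlocked with.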
- This graphic conversion pattern is a unique feature of the present invention, for which a motif is difficult to obtain from any linguistic material, including Universal Grammar; it can be optimally driven on a computer-based display screen animated by the method shown in FIG. 1.
- It provides the reader with a more stimulating, intentional, and optimized visual-recognition training method than any other English textbook.
- FIG. 6 is a diagram illustrating an advanced dynamic graphic image for recognizing language elements and for the fast visual decomposition of sentence structure required for reading long text.
- The developed sentence recognition program recognizes word groups of three to six words or more, each carrying a nearly independent meaning, as individual language elements, and the dynamic graphic conversion program increases the playback speed. By animating quickly with simple protrusion, attraction, and continuous movement of subjects, the learner acquires rapid recognition, sentence separation, and reassembly.
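Grouping several words into one language element for long-text training can be sketched as a simple chunker; the fixed chunk size is an assumption standing in for the recognition program's meaning-based grouping.

```python
def chunk(words, size=3):
    """Group consecutive words into multi-word language elements."""
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

chunks = chunk("Dog that bites the chain loosed get free".split(), size=4)
```

A real system would cut at meaning boundaries rather than at a fixed word count, so chunk sizes would vary between three and six words.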
- FIGS. 6 and 7 graphically show data measuring learning achievement when the dynamic graphic images described above are provided to trainees in sequence. The graphs show only a difference in starting point according to individual ability; once moving training is introduced after merge and piping training, most learners increase their reading speed rapidly and without congestion, reaching 200 to 400 words per minute (more than one page per minute). Within a short period (less than seven weeks), this surpasses the levels of native English speakers with graduate-level knowledge, which conventional extensive-reading methods can never achieve. The results show that the screening method can optimally stimulate the brains of non-native English speakers.
- The present invention can induce sufficient and accurate brain stimulation even with current, still-incomplete automatic translation AI; when humans additionally modify and supplement the dynamic sentence data generated from predetermined patterns, further refined and optimized brain training becomes possible. This can be developed into a new English-teaching business model.
- As an advanced form of the present invention, an optimized graphic progress speed may additionally be applied; it is an effective tool for controlling the direction and speed of learning throughout the entire learning process, from the basic to the final stage, for non-native English-speaking users who are new to this English application.
Claims (9)
- In a method for substantively screening the merge action or piping action according to Universal Grammar, a dynamic English graphic screening method characterized in that the color of the original individual language element (1) that triggers the merge action or piping action becomes lighter or its size becomes smaller, and the size of the space (2) adjacent to the individual language element (1) becomes equal to or smaller than its original size or the space is deleted, whereby the sentence is dynamically graphic-converted into a merge sentence or piping sentence comprising the individual language element (1) and the space (2).
- The method of claim 1, wherein, over the individual language element (1), text of the same content as the individual language element (1), protruded and emphasized in form, is additionally overlapped or re-formed and moves in the direction of the merge action or piping action.
- The method of claim 1, wherein a geometric symbol signifying the merge action or piping action is formed by overlapping on the screen display area of the individual language element (1).
- In a method for substantively screening the moving action according to Universal Grammar, a dynamic English graphic screening method characterized in that the first individual language element or the second individual language element reappears at a predetermined position within the range from a first individual language element that triggers the moving action to a second individual language element interlocked with the first individual language element by the moving action.
- The method of claim 4, wherein the first individual language element or the second individual language element is additionally formed in the screen display area within the range between the first individual language element and the second individual language element interlocked with it by the moving action, and is displayed while moving continuously.
- The method of any one of claims 1 to 5, wherein, when the execution step described in any one of claims 1 to 5 is defined as step B, the method further comprises: a sentence recognition step (step A), executed before step B, consisting of step (A-1) of converting an input sentence into spaces and individual language elements, and step (A-2) of recognizing the converted spaces and individual language elements, collating them against a language data storage, and classifying and storing them as meaningful language element regions (1) and the space regions (2) surrounding them; and a sentence reference position moving step (step C), executed after step B, consisting of step (C-1) of deleting the recognition information of the preceding sentence and stopping the dynamic graphic conversion, and step (C-2) of moving the reference points of the recognition area and the screen display area to the beginning of the next sentence so that step A can be applied repeatedly to the following sentence or to a sentence continuing on a new line.
- The method of claim 6, wherein step B consists of: step (B-1) of reclassifying the sentence data derived through step A according to the three main sentence assembly steps of Universal Grammar and allocating the resulting dynamic motion information to each language element region (1) and each space region (2); and step (B-2) of either replacing the original static graphic information held by each region with new static graphic information modified by inserting a predetermined symbol, and displaying it on the screen, or replacing it with new dynamic graphic information transformed from the static graphic information held by each region, and displaying it on the screen.
- A dynamic English graphic still image expressed by applying the method of claim 6 to a computer-based display device, characterized in that a state frozen at a specific point in time or in a specific display state is converted into a storable file format.
- A dynamic English graphic video expressed by applying the method of claim 7 to a computer-based display device, characterized in that an animating screen that changes continuously, quickly or slowly, is converted into a playable medium.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1122202.3A GB2488005A (en) | 2009-12-09 | 2010-11-30 | Method for playing dynamic english graphics of english sentences for speed reading |
US13/322,515 US8310505B2 (en) | 2009-12-09 | 2010-11-30 | Method for playing dynamic english graphics of english sentences for speed reading |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2009-0121688 | 2009-12-09 | ||
KR1020090121688A KR100968364B1 (ko) | 2009-12-09 | 2009-12-09 | Method for playing dynamic English graphics of English sentences for speed reading |
Publications (3)
Publication Number | Publication Date |
---|---|
WO2011071267A2 true WO2011071267A2 (ko) | 2011-06-16 |
WO2011071267A9 WO2011071267A9 (ko) | 2011-08-04 |
WO2011071267A3 WO2011071267A3 (ko) | 2011-09-15 |
Family
ID=42645248
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2010/008503 WO2011071267A2 (ko) | 2009-12-09 | 2010-11-30 | Method for playing dynamic English graphics of English sentences for speed reading |
Country Status (4)
Country | Link |
---|---|
US (1) | US8310505B2 (ko) |
KR (1) | KR100968364B1 (ko) |
GB (1) | GB2488005A (ko) |
WO (1) | WO2011071267A2 (ko) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101609910B1 (ko) * | 2013-08-09 | 2016-04-06 | (주)엔엑스씨 | Method, server, and system for providing learning services |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002207413A (ja) * | 2001-01-12 | 2002-07-26 | National Institute Of Advanced Industrial & Technology | Action-recognition, utterance-based language learning device |
JP2004205770A (ja) * | 2002-12-25 | 2004-07-22 | Yukai Kikaku:Kk | Language memorization learning system using roaming animation on an electronic display screen |
JP2005338173A (ja) * | 2004-05-24 | 2005-12-08 | Advanced Telecommunication Research Institute International | Foreign-language reading comprehension learning support device |
JP2006072281A (ja) * | 2004-09-03 | 2006-03-16 | Eye Power Sports Inc | Method for activating and training the brain, and recording medium |
JP2006202181A (ja) * | 2005-01-24 | 2006-08-03 | Sony Corp | Image output method and apparatus |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20010017480A (ko) * | 1999-08-11 | 2001-03-05 | 박규진 | Digital caption output method for learning |
KR100686545B1 (ko) | 2006-09-26 | 2007-02-26 | 정준언 | Language learning device |
KR20090001718A (ko) * | 2007-05-14 | 2009-01-09 | 차원이동교육도시 주식회사 | Server and recording medium for providing speed-reading services over wired and wireless communication networks |
KR20090096952A (ko) * | 2008-03-10 | 2009-09-15 | (주)나인앤미디어 | Method and system for providing rapid-comprehension training services using cognitive word groups |
-
2009
- 2009-12-09 KR KR1020090121688A patent/KR100968364B1/ko not_active IP Right Cessation
-
2010
- 2010-11-30 US US13/322,515 patent/US8310505B2/en not_active Expired - Fee Related
- 2010-11-30 GB GB1122202.3A patent/GB2488005A/en not_active Withdrawn
- 2010-11-30 WO PCT/KR2010/008503 patent/WO2011071267A2/ko active Application Filing
Also Published As
Publication number | Publication date |
---|---|
GB2488005A (en) | 2012-08-15 |
US8310505B2 (en) | 2012-11-13 |
WO2011071267A3 (ko) | 2011-09-15 |
KR100968364B1 (ko) | 2010-07-06 |
US20120113139A1 (en) | 2012-05-10 |
GB201122202D0 (en) | 2012-02-01 |
WO2011071267A9 (ko) | 2011-08-04 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 10836163 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13322515 Country of ref document: US |
|
ENP | Entry into the national phase |
Ref document number: 1122202 Country of ref document: GB Kind code of ref document: A Free format text: PCT FILING DATE = 20101130 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1122202.3 Country of ref document: GB |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 10836163 Country of ref document: EP Kind code of ref document: A2 |