JP2008292963A - Sign language learning apparatus - Google Patents

Sign language learning apparatus

Info

Publication number
JP2008292963A
Authority
JP
Japan
Prior art keywords
sign language
video
function
learning device
language learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2007161884A
Other languages
Japanese (ja)
Inventor
Saori Tanaka
紗織 田中
Yousuke Matsuzaka
要佐 松坂
Kuniaki Uehara
邦昭 上原
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to JP2007161884A
Publication of JP2008292963A
Legal status: Pending

Abstract

PROBLEM TO BE SOLVED: To provide a sign language learning apparatus that enables effective sign language learning on the Internet, in order to support sign language learners in reading sign language words within natural conversation. SOLUTION: The sign language learning apparatus includes a database 115 on a server. Through the Internet 108 and the output control device of a learner 101, the server outputs a form 107 in a browser 105. An input control device 104 transmits the input information to the server 109, and the server judges whether the input is correct and generates hints. COPYRIGHT: (C)2009,JPO&INPIT

Description

The present invention relates to a sign language learning apparatus used for sign language learning on the Internet.

Sign language is a visual language expressed through complex combinations of hand and finger movements, the body, and facial expressions. When sign language is expressed in natural conversation, each word is articulated smoothly according to its relationship with the preceding and following sign language words. For this reason, it is said to be very difficult for sign language learners to read all the sign language words expressed in a natural conversation.

Conventional sign language learning devices supported the learning of isolated sign language words rather than continuous signing ("Sign language learning support device and recording medium recording a sign language learning support program", Japanese Patent Application No. 10-255170; "Sign language learning device", Japanese Patent Application No. 4-235627).

For this reason, learners have had no way to learn sign language as used in natural conversation other than belonging to a community where sign language is used daily and picking up living sign language expressions directly, and the many learners who are not blessed with such opportunities find it difficult to acquire natural sign language.

We therefore devised a sign language learning apparatus that presents a segmentation task, in which a natural sign language video is divided into its constituent words, and an arrangement task, in which the individual word videos corresponding to a natural sign language video are put into the correct order, so that learners can study while confirming what form each sign language word takes when expressed continuously in natural conversation.

According to the present invention, the learner's ability to read each word within continuous sign language expression in natural conversation is improved. Moreover, because study is possible anywhere with an Internet connection, learning efficiency is also improved.

Embodiments of the Invention

Embodiments of the present invention will now be described with reference to the drawings. FIG. 1 is a system configuration diagram showing the sign language learning apparatus of the present invention. As devices on the learner 101 side, the sign language learning apparatus shown in FIG. 1 has a keyboard 102, a mouse 103, an input control device 104, a browser 105, an output control device 106, and a form 107.

The learner 101 can further communicate with a database 115 through the Internet 108 and a server 109. The database 115 holds segmentation form information 110 for presenting segmentation tasks, arrangement form information 111 for presenting arrangement tasks, hint information 112 presented when an incorrect answer is entered, correct-answer information 113, and learning history information 114 that stores the learning history of the learner 101.
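
A minimal sketch of how the contents of the database 115 described above might be organized follows; all type and field names are assumptions introduced here for illustration and are not specified in the patent.

```typescript
// Illustrative data model only: names are assumptions, not part of the patent.

/** Segmentation form information (110): a task video and the word boundaries stored as the correct answer. */
interface SegmentationFormInfo {
  taskId: number;
  videoUrl: string;                                       // natural sign language video (140)
  wordBoundaries: { startFrame: number; endFrame: number }[]; // one entry per word
}

/** Arrangement form information (111): a task video and its word videos in the correct order. */
interface ArrangementFormInfo {
  taskId: number;
  videoUrl: string;                                       // natural sign language video (150)
  wordVideoUrls: string[];                                 // individual word videos (151)
}

/** Hint information (112) keyed by task. */
interface HintInfo {
  taskId: number;
  text: string;
}

/** Learning history information (114): one row per submitted answer. */
interface LearningHistoryEntry {
  learnerId: number;                                       // learner (101)
  taskId: number;
  attempt: number;
  correct: boolean;
  submittedAt: Date;
}
```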

FIG. 2 is an explanatory diagram showing the input/output relationships in the sign language learning apparatus according to the embodiment of the present invention. When the server 109 requests a form, form information 117 is sent from the database, form generation 118 is performed on the server, the form output signal is sent to the output control device 106, and the form 107 is output 120 in the browser 105. Based on the information 121 entered by the learner, the server sends a correct-answer information request 122 to the database, the database returns the correct-answer information 123 to the server, and the server performs the correctness judgment 124. As a result, the correct answer or a hint is output 125 in the learner's browser 105, and at the same time the learning history is recorded 126 in the database.
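
The exchange of FIG. 2 can be summarized in a short server-side sketch. The Database interface and the renderForm and matches helpers are assumptions introduced here for illustration, not part of the patent.

```typescript
// Sketch of the FIG. 2 flow under assumed names.

interface Database {
  getFormInfo(taskId: number): Promise<unknown>;
  getCorrectAnswer(taskId: number): Promise<unknown>;
  getHint(taskId: number): Promise<string>;
  recordHistory(entry: { learnerId: number; taskId: number; correct: boolean; submittedAt: Date }): Promise<void>;
}

declare function renderForm(formInfo: unknown): string;               // form generation on the server (118)
declare function matches(answer: unknown, correct: unknown): boolean; // correctness judgment (124)

// Form request: form information (117) is read from the database and the generated form is
// returned, to be shown as the form (107) in the learner's browser (105) via the output
// control device (106).
async function serveForm(db: Database, taskId: number): Promise<string> {
  const formInfo = await db.getFormInfo(taskId);
  return renderForm(formInfo);
}

// Answer submission (121): the correct answer is requested (122) and returned (123), the
// judgment (124) is made, the learning history is recorded (126), and the result or a hint
// is returned for display in the browser (125).
async function judgeAnswer(db: Database, learnerId: number, taskId: number, answer: unknown) {
  const correct = await db.getCorrectAnswer(taskId);
  const isCorrect = matches(answer, correct);
  await db.recordHistory({ learnerId, taskId, correct: isCorrect, submittedAt: new Date() });
  return isCorrect
    ? { correct: true as const }
    : { correct: false as const, hint: await db.getHint(taskId) };
}
```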

The hint generation process will now be described with reference to FIG. 3. If the learner's input 127 is correct 128, the screen switches to the next question 131. If the learner's input 127 is incorrect and the number of attempts is five or fewer 132, a hint is output 134 and the learner is asked to try again. However, once the number of attempts reaches five or more, a screen showing the correct answer is output 130 and the screen switches to the next question 131.
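
A minimal sketch of the FIG. 3 branching follows. The text leaves the exact boundary between "five or fewer" and "five or more" attempts ambiguous; this sketch assumes the correct answer is revealed on the fifth failed attempt. All names are illustrative.

```typescript
// Feedback decision for one submitted answer, per the FIG. 3 flow (assumed boundary at 5 attempts).
type Feedback =
  | { action: "nextQuestion" }                        // correct (128) -> next question (131)
  | { action: "retryWithHint"; hint: string }         // incorrect, fewer than 5 attempts -> hint (134)
  | { action: "showAnswerThenNext"; answer: string }; // 5th failed attempt -> show answer (130), then next (131)

function feedbackFor(isCorrect: boolean, attempts: number, hint: string, answer: string): Feedback {
  if (isCorrect) return { action: "nextQuestion" };
  if (attempts < 5) return { action: "retryWithHint", hint };
  return { action: "showAnswerThenNext", answer };
}
```

For example, feedbackFor(false, 3, "watch the hand shape at the transition", "...") yields a retry with a hint, while feedbackFor(false, 5, ...) reveals the answer and moves on.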

Next, the method of switching between the sign language video segmentation form, which presents the segmentation task, and the sign language video arrangement form, which presents the arrangement task, will be described with reference to FIG. 4. When a form request arrives from the server 135, if the request is for the sign language video segmentation form 136, the database sends the sign language video segmentation form to the server 137; if the request is not for the sign language video segmentation form, the database sends the sign language video arrangement form to the server.
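
The FIG. 4 switch amounts to a single dispatch decision; a small sketch follows, with names chosen here for illustration only.

```typescript
// FIG. 4: return the segmentation form when that is what was requested (136/137),
// and the arrangement form otherwise.
type FormKind = "segmentation" | "arrangement";

function selectForm(requested: FormKind, segmentationForm: string, arrangementForm: string): string {
  return requested === "segmentation" ? segmentationForm : arrangementForm;
}
```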

Next, the layout of the sign language video segmentation form used for the segmentation task will be described with reference to FIG. 5. In the sign language video segmentation form 139, the natural sign language video 140 serving as the task video is presented at the top of the screen, and the learner selects the word boundaries in the natural sign language video while operating the knob 142 on the slider 141. When the natural sign language video consists of two words, moving the knob 142 left and right displays the starting frame number 143 and the ending frame number 144 of word 1. Likewise, the starting frame number 145 and the ending frame number 146 of word 2 are displayed. Once the learner has selected the word boundaries, pressing the "Next" button 147 performs the correctness judgment.
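
A possible sketch of the correctness judgment for the segmentation task: the start and end frames chosen with the slider knobs are compared against the stored boundaries. The per-frame tolerance is an assumption; the patent does not state how exact the match must be.

```typescript
// Compare learner-selected word boundaries with the stored correct boundaries,
// allowing an assumed tolerance of a few frames on each edge.
interface Boundary { startFrame: number; endFrame: number }

function segmentationIsCorrect(input: Boundary[], stored: Boundary[], toleranceFrames = 5): boolean {
  if (input.length !== stored.length) return false;
  return input.every((b, i) =>
    Math.abs(b.startFrame - stored[i].startFrame) <= toleranceFrames &&
    Math.abs(b.endFrame - stored[i].endFrame) <= toleranceFrames);
}

// Example for a two-word task: two boundary pairs are submitted and checked in order.
const ok = segmentationIsCorrect(
  [{ startFrame: 0, endFrame: 42 }, { startFrame: 43, endFrame: 90 }],
  [{ startFrame: 2, endFrame: 40 }, { startFrame: 41, endFrame: 92 }]);
```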

Next, the layout of the sign language video arrangement form used for the arrangement task will be described with reference to FIG. 6. In the sign language video arrangement form 149, the natural sign language video 150 serving as the task video is presented at the top of the screen, and the word videos 151 are presented below it. The learner changes the order of the words using the swap buttons 152; once the learner has rearranged the word videos 151 into the order in which the words are expressed in the natural sign language video 150, pressing the "Next" button 158 performs the correctness judgment.
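
A possible sketch of the arrangement task interaction: a swap button exchanges two adjacent word videos, and the final order is compared with the recorded order. Identifiers are illustrative assumptions, not taken from the patent.

```typescript
// Swap the word video at `index` with its right-hand neighbour (the effect of a swap button, 152).
function swapAdjacent<T>(items: T[], index: number): T[] {
  if (index < 0 || index + 1 >= items.length) return items.slice(); // out of range: no change
  const copy = items.slice();
  [copy[index], copy[index + 1]] = [copy[index + 1], copy[index]];
  return copy;
}

// Judge the arrangement answer by comparing the submitted order with the recorded order.
function arrangementIsCorrect(inputOrder: string[], storedOrder: string[]): boolean {
  return inputOrder.length === storedOrder.length &&
         inputOrder.every((id, i) => id === storedOrder[i]);
}
```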

Through these operations, the learner can consciously study how words connect to one another within natural sign language videos in which words are expressed continuously.

FIG. 1 is a system configuration diagram showing the sign language learning apparatus according to an embodiment of the present invention.
FIG. 2 is an explanatory diagram showing the input/output relationships in the sign language learning apparatus according to the embodiment of the present invention.
FIG. 3 is a flowchart of hint generation in the sign language learning apparatus according to the embodiment of the present invention.
FIG. 4 is a flowchart of form generation in the sign language learning apparatus according to the embodiment of the present invention.
FIG. 5 is an explanatory diagram showing the form of the sign language video segmentation system in the sign language learning apparatus according to the embodiment of the present invention.
FIG. 6 is an explanatory diagram showing the form of the sign language video arrangement system in the sign language learning apparatus according to the embodiment of the present invention.

Explanation of Symbols

101 Learner
102 Keyboard
103 Mouse
104 Input control device
105 Browser
106 Output control device
107 Form
108 Internet network
109 Server
110 Segmentation form information
111 Arrangement form information
112 Hint information
113 Correct-answer information
114 Learning history information
115 Database

Claims (12)

1. A sign language learning apparatus comprising both a sign language video segmentation system that receives input of knob positions on a slider corresponding to word boundaries and judges whether the positions are correct, and a sign language video arrangement system that presents a task video, receives input of the rearranged order of word videos corresponding to the order of the task video, and judges whether the answer is correct.
2. A sign language learning apparatus in which the sign language video segmentation system has a function of operating the position of the knob by mouse operation.
3. A sign language learning apparatus in which the sign language video segmentation system has, in addition to the function of claim 2, a function of displaying the video at the corresponding time in synchronization with the mouse operation moving along the slider.
4. A sign language learning apparatus in which the sign language video segmentation system has, in addition to the functions of claims 2 and 3, a function of comparing the input knob positions with the stored knob positions.
5. A sign language learning apparatus in which the sign language video segmentation system has, in addition to the functions of claims 2, 3, and 4, a function of performing different operations depending on the result of comparing the input knob positions with the recorded knob positions.
6. A sign language learning apparatus in which the sign language video segmentation system has, in addition to the functions of claims 2, 3, 4, and 5, a function of reading out from the server information on the video, the knob positions, and the subsequent operations.
7. A sign language learning apparatus in which the sign language video arrangement system has a function of presenting, simultaneously with the model video, a plurality of individual word videos expressed in the model, and a function of manipulating the order of the individual word videos.
8. A sign language learning apparatus in which the sign language video arrangement system has, in addition to the function of claim 7, a function of comparing the input order of the individual word videos with the recorded order of the individual videos.
9. A sign language learning apparatus in which the sign language video arrangement system has, in addition to the functions of claims 7 and 8, a function of performing different operations depending on the result of comparing the input order of the individual videos with the recorded order of the individual videos.
10. A sign language learning apparatus in which the sign language video arrangement system has, in addition to the functions of claims 7, 8, and 9, a function of reading out from the server information on the order of the individual videos and the subsequent operations.
11. A server that transmits information to the video presentation system of claim 4.
12. A sign language learning apparatus in which the recording system records information on the arrangement order of the individual videos and the subsequent operations on the server of claim 5.
JP2007161884A 2007-05-23 2007-05-23 Sign language learning apparatus Pending JP2008292963A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2007161884A JP2008292963A (en) 2007-05-23 2007-05-23 Sign language learning apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2007161884A JP2008292963A (en) 2007-05-23 2007-05-23 Sign language learning apparatus

Publications (1)

Publication Number Publication Date
JP2008292963A true JP2008292963A (en) 2008-12-04

Family

ID=40167703

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2007161884A Pending JP2008292963A (en) 2007-05-23 2007-05-23 Sign language learning apparatus

Country Status (1)

Country Link
JP (1) JP2008292963A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109191931A (en) * 2018-10-12 2019-01-11 曹军 A kind of English learning machine taught through lively activities
JP2020126144A (en) * 2019-02-05 2020-08-20 ソフトバンク株式会社 System, server device, and program
KR102169335B1 (en) * 2020-03-03 2020-10-26 한국건설기술연구원 Smart safety bus station
