GB2416873A - Virtual signer - Google Patents
- Publication number
- GB2416873A GB2416873A GB0417151A GB0417151A GB2416873A GB 2416873 A GB2416873 A GB 2416873A GB 0417151 A GB0417151 A GB 0417151A GB 0417151 A GB0417151 A GB 0417151A GB 2416873 A GB2416873 A GB 2416873A
- Authority
- GB
- United Kingdom
- Prior art keywords
- virtual
- signer
- deaf
- sign language
- language
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B21/00—Teaching, or communicating with, the blind, deaf or mute
- G09B21/009—Teaching or communicating with deaf persons
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- General Health & Medical Sciences (AREA)
- Business, Economics & Management (AREA)
- Physics & Mathematics (AREA)
- Educational Administration (AREA)
- Educational Technology (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Electrically Operated Instructional Devices (AREA)
Abstract
The 'Virtual Signer' translates any written, spoken or graphical information into sign language suitable for use by individuals or groups of D/deaf people via a 3D representation of a human of any age or gender, or even a non-human animated character. Audio and graphical information is converted electronically into a usable format; text can be read directly. The 'Virtual Signer' uses a database of words and phrases, and the database can be in any language. The database can be added to, and extended, at any time, allowing additional sign language meanings to be added to suit the individual, country or sign language used. The 'Virtual Signer' can be used for any form of communication to and from D/deaf people and removes the barriers D/deaf people face during work, leisure and education. Due to the flexibility of the 'Virtual Signer', voice communication can be converted immediately into a language understood by the D/deaf learner, and signs translated into text or audio for the hearing person.
Description
DESCRIPTION
THE VIRTUAL SIGNER
Background

The translation of speech or the written word into sign language is expensive and time consuming. A qualified signer is normally required; signers must be booked in advance, are expensive and can only work for a limited number of hours. The cost of qualified signers to support individual D/deaf people in workplace, leisure or education/training activities is often not feasible, and large numbers of D/deaf people are disadvantaged throughout all walks of life, in all countries and at all ages. The growth of individual e-learning in particular has put D/deaf learners at a disadvantage, because much of the information is communicated via audio and via text that is complicated relative to ability level.
The development of the 3D 'Virtual Signer' makes effective, instantaneous and relevant communication available to all D/deaf learners irrespective of age, gender or nationality, and it can be used in any situation that requires D/deaf people to have equality of communication.
Problem

The problems associated with communication with D/deaf people have been those of: cost - qualified signers are expensive; location - it is not easy to ensure a qualified signer is available when and where required, and doing so normally requires a great deal of planning; availability - qualified signers are in short supply and therefore expensive; short-term need - normally a signer will only be required to assist the D/deaf person for short periods; different languages - even sign language has its variations, and several versions are used throughout the world.
Solution

The 'Virtual Signer' acts as an animated 3D visual 'text reader', converting words, phrases or even visual representations into sign language. It will also convert audio into sign language by using voice recognition software, either directly or by converting audio into text, scanning the text into sign language and then communicating the meaning to the individual via the 3D animated figure on a computer screen or hand-held device. This means that the 'Virtual Signer' can be used for 'real time' conversion, i.e. interaction between a hearing and a D/deaf person as the conversation takes place. It can also be used to communicate stored information, for example within an e-learning programme, where the course information in text and audio can be communicated to a D/deaf learner as sign language via the 3D virtual signer.
The 'Virtual Signer' uses a database of key words and phrases that are matched against the text/audio to be communicated, and it then produces the correct signs in the right sequence. Body movements and facial expressions can also be communicated via the 3D 'Virtual Signer'. The speed, gender, age and ethnicity of the animated 3D 'Virtual Signer' can be altered to suit individual requirements, and there is also the facility to use animated figures. For example, young people may choose a figure that represents a footballer or pop star, and children may choose a favourite cartoon character.
For the more mature person age, gender and ethnicity can be changed to suit the individual needs of the D/deaf person.
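The matching step described above (key words and phrases looked up against incoming text, with the correct signs produced in sequence) could be sketched roughly as follows. The database contents, sign identifiers and the fingerspelling fallback are illustrative assumptions only, not details taken from the patent.

```python
# Illustrative sketch of phrase-first sign lookup (all names hypothetical).
# Multi-word phrases like "thank you" are matched before single words,
# and unknown words fall back to letter-by-letter fingerspelling.

SIGN_DB = {
    ("thank", "you"): "SIGN_THANK_YOU",
    ("hello",): "SIGN_HELLO",
    ("name",): "SIGN_NAME",
}

def text_to_sign_sequence(text):
    words = text.lower().split()
    signs, i = [], 0
    while i < len(words):
        # Try the longest phrase starting at position i first.
        for length in range(len(words) - i, 0, -1):
            key = tuple(words[i:i + length])
            if key in SIGN_DB:
                signs.append(SIGN_DB[key])
                i += length
                break
        else:
            # Unknown word: fingerspell it, one letter at a time.
            signs.extend(f"FINGERSPELL_{c.upper()}" for c in words[i])
            i += 1
    return signs

print(text_to_sign_sequence("hello thank you"))
```

A real system would hold many thousands of entries and drive the 3D animation from the resulting identifiers; the longest-match-first loop is one simple way to honour phrase signs over word-by-word translation.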
Due to the flexibility of the 'Virtual Signer' database of signs, the information can be changed at any time. The changes made would be reflected immediately in the 3D animated output to the D/deaf person, thus reducing the cost, removing the need for a qualified signer and enabling the system to be used at times and locations to suit the individual D/deaf person.

What the 'Virtual Signer' does

Scenario 1: The 'Virtual Signer' can be used by a D/deaf person or group of D/deaf people within an independent learning environment using e-learning materials on CD or the Web.
Due to the multimedia nature of e-learning packages, a great deal of information is communicated as audio. Although text information is normally available on screen, the language used is not easy to understand, and many e-learning packages, both CD and Web based, are unusable by D/deaf learners unless a signer is available; even then the signer must also be a curriculum expert, and it is becoming increasingly difficult to find qualified people.
The 'Virtual Signer' can be used in any e-learning package being delivered electronically. It will convert all of the information communicated to a hearing learner into 3D sign language and deliver this to the D/deaf learner in their preferred language, i.e. sign language, via the 3D representation.
Scenario 2: Within formal learning environments, the 'Virtual Signer' can be used by a tutor or teacher to communicate directly with the D/deaf learner via a microphone and voice recognition software. The voice recognition software decodes the audio into text; this is then read by the 'Virtual Signer' text reader, matched to the database of signs and phrases, and displayed on a monitor or large screen via a digital display device. One-to-one or one-to-many communication is available via this method.
The tutor/teacher can also use electronic whiteboards and handwriting recognition software to display handwritten notes as sign language via the 'Virtual Signer'. One of the benefits to the D/deaf person is that this information can be stored and used as revision notes in the future.
Both the above examples can also be used in conference and other public presentations, live or recorded, in order to communicate with D/deaf people.
Scenario 3: The 'Virtual Signer' can be used by any private, public or voluntary organisation that needs to communicate with D/deaf people. An example would be a counter transaction in a building society, post office or rail ticket office. The D/deaf person would be able to use a touch screen, placed on the customer side of the counter, that has a full range of signs. Via the touch screen the D/deaf person would input their request, and this would be translated and delivered to the assistant as either audio or text. The assistant could then either type or speak to the D/deaf person, and this would be converted into sign language and displayed on the customer's screen.
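The two-way counter transaction in Scenario 3 amounts to one translation in each direction: selected signs become text for the assistant, and the assistant's reply becomes a sign sequence. The sketch below illustrates that loop under assumed data; the sign identifiers, reverse-lookup table and fingerspelling fallback are hypothetical, not specified in the patent.

```python
# Illustrative two-way flow for a counter transaction (names hypothetical).
# Requests travel D/deaf -> assistant as text; replies travel back as signs.

SIGNS = {"ticket": "SIGN_TICKET", "platform": "SIGN_PLATFORM", "five": "SIGN_FIVE"}
PHRASES = {"SIGN_TICKET": "ticket"}  # reverse lookup for outgoing requests

def deaf_to_assistant(selected_signs):
    """Touch-screen sign selections become text (or speech) for the assistant."""
    return " ".join(PHRASES.get(s, s.lower()) for s in selected_signs)

def assistant_to_deaf(reply_text):
    """The assistant's typed or recognised reply becomes a sign sequence."""
    return [SIGNS.get(w, f"FINGERSPELL:{w}") for w in reply_text.lower().split()]

request = deaf_to_assistant(["SIGN_TICKET"])      # shown to the assistant
reply = assistant_to_deaf("platform five")        # animated on the customer's screen
print(request, reply)
```

The same pair of translations would serve any of the service-counter settings the description lists; only the vocabulary in the two tables would change.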
The above scenarios are limited examples of how and where the 'Virtual Signer' could be used. However, applications for the 'Virtual Signer' exist wherever communication with individuals or groups is required.
Essential features

- Input to the 'Virtual Signer' can be audio, text, graphics or any electronic data from a computer programme
- The input can be real-time or pre-packaged data
- The 'Virtual Signer' is portable and can be used in any environment
- The 'Virtual Signer' can translate any language to sign language
- The 3D 'Virtual Signer' can be made to represent human or animated figures to make the delivery of the information more relevant to the D/deaf person
- The database of virtual signs can be updated as new words and phrases are introduced
- The 'Virtual Signer' allows real-time communication between hearing and D/deaf people
Introduction to the drawing
The drawing shown in Drawing 1 shows the different components that make up the 'Virtual Signer'. Blocks 1 to 5 show how audio, text, pre-programmed software and graphics can be inputted into the system. All input data is then converted to a standard format within block 6. Language variations are accommodated by using language translation programmes and then converted to a common format. Graphic representations are recognised by shape and then converted into key words or phrases.
Within this block, key words, phrases or symbols are recognised, converted to a standard set of data and matched with the signs in the 'Virtual Signer' database, block 7. Block 7 contains a database of key words linked to pre-programmed signs, individually or in sequenced series. This information is transferred to block 8, where the pre-defined 3D images are stored. Within block 8 different images will be stored, and the user will be able to select the age, gender and ethnicity of the 3D image, or even have a favourite animated character represent the final 3D image. The image can be viewed on a normal computer screen, projected image or hand-held device, as represented in block 9.
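The block 6 to block 9 flow can be summarised as three stages: normalise the input, match it against the sign database, and attach the chosen 3D figure for display. The sketch below illustrates that structure; all function names, the database contents and the figure labels are illustrative assumptions, and the audio/graphic recognition of blocks 1-5 is deliberately stubbed out.

```python
# Illustrative sketch of the block 6-9 pipeline (all names hypothetical).

def normalise(raw, kind):
    """Block 6: convert input to a common text format."""
    if kind == "text":
        return raw.strip().lower()
    # Audio and graphic inputs would be decoded by recognition software here.
    raise NotImplementedError("audio/graphic recognition not sketched")

def match_signs(text, database):
    """Block 7: map recognised key words to pre-programmed sign identifiers."""
    return [database[w] for w in text.split() if w in database]

def render(signs, figure="default_avatar"):
    """Blocks 8-9: pair the sign sequence with the selected 3D image."""
    return {"figure": figure, "sequence": signs}

db = {"hello": "SIGN_HELLO", "welcome": "SIGN_WELCOME"}
out = render(match_signs(normalise("Hello and welcome", "text"), db),
             figure="cartoon_character")
print(out)
```

Keeping the three stages separate mirrors the drawing: the database (block 7) and the image store (block 8) can each be updated independently, which is what lets the vocabulary and the on-screen figure be changed without touching the rest of the system.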
Claims (1)
- CLAIM
The 'Virtual Signer' will translate any spoken or written language or written symbols into sign language suitable for D/deaf people. The translation can be 'real time' or via stored information. The 'Virtual Signer' can be used as a 'text reader', enabling any text to be scanned electronically, read and displayed as sign language via the virtual 3D representation. Electronic text can be read directly, converted and displayed as sign language via the virtual 3D representation. The 'Virtual Signer' can be used as a 'real time' translator of audio information into sign language. The vocabulary of the database used can be extended at any time. The 'Virtual Signer' will translate any language into the most relevant sign language available in the country in which it is being used and display it as a virtual 3D representation. The 3D representation can be either human, of any age, gender or ethnicity, or an animated character. The 'Virtual Signer' can be personalised for individual use. The 'Virtual Signer' can be used in any location. The 'Virtual Signer' can be used within e-learning packages to provide 'real time' conversion of learning materials into recognised and relevant sign language, displayed as a virtual 3D representation. The 'Virtual Signer' can be used in any service industry or work-based location to improve communications with D/deaf learners. The 'Virtual Signer' can be used in all leisure activities in order to improve communications, help, advice and guidance to D/deaf people. The 'Virtual Signer' allows real-time two-way communication between hearing and D/deaf people. The 'Virtual Signer' communicates signs, body language and facial expression when signs are communicated to D/deaf learners.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB0417151A GB2416873A (en) | 2004-08-02 | 2004-08-02 | Virtual signer |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB0417151A GB2416873A (en) | 2004-08-02 | 2004-08-02 | Virtual signer |
Publications (2)
Publication Number | Publication Date |
---|---|
GB0417151D0 GB0417151D0 (en) | 2004-09-01 |
GB2416873A true GB2416873A (en) | 2006-02-08 |
Family
ID=32947811
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB0417151A Withdrawn GB2416873A (en) | 2004-08-02 | 2004-08-02 | Virtual signer |
Country Status (1)
Country | Link |
---|---|
GB (1) | GB2416873A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2237243A1 (en) * | 2009-03-30 | 2010-10-06 | France Telecom | Method for contextual translation of a website into sign language and corresponding device |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0658854A2 (en) * | 1993-12-16 | 1995-06-21 | Canon Kabushiki Kaisha | Method and apparatus for displaying sign language images corresponding to text or speech |
US20040034522A1 (en) * | 2002-08-14 | 2004-02-19 | Raanan Liebermann | Method and apparatus for seamless transition of voice and/or text into sign language |
Non-Patent Citations (2)
Title |
---|
"CSLDS: Chinese Sign Language Dialog System", Chen et al * |
"Meet TESSA", Published by Science Museum, dated 18 July 2001 * |
Also Published As
Publication number | Publication date |
---|---|
GB0417151D0 (en) | 2004-09-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Wouters et al. | How to optimize learning from animated models: A review of guidelines based on cognitive load | |
Yamada | The role of social presence in learner-centered communicative language learning using synchronous computer-mediated communication: Experimental study | |
Richterich | A Model for the Definition of Language Needs of Adults Learning a Modern Language. | |
Bugueño | Using TPACK to promote effective language teaching in an ESL/EFL classroom | |
Zaim | The power of multimedia to enhance learners’ language skills in multilingual class | |
Asubiojo et al. | The Role of Information and Communication Technology in Enhancing Instructional Effectiveness in Teachers’ Education in Nigeria. | |
Stinson et al. | Real-time speech-to-text services | |
Wolfram | Dialect awareness, cultural literacy, and the public interest | |
GB2416873A (en) | Virtual signer | |
วัน วิ สา ข์ หมื่น จง | THE GUIDELINES FOR ENGLISH LEARNING MEDIA DESIGN FOR COMMUNITY-BASED TOURISM (CBT) THROUGH STAKEHOLDERS’NEEDS ANALYSIS IN PHETCHABUN PROVINCE | |
Al Matalka et al. | THE EFFECT OF AN INTERACTIVE E-BOOK ON TEACHING ARABIC LANGUAGE SKILLS TO NON-NATIVE SPEAKERS | |
Agorsh | Access to assistive technology for students with visual impairment in Adidome Senior High School | |
Antonova | TRAINING THE AUGMENTED INTERPRETER TODAY | |
Salunke | Information And Communication Technology (ICT In Education) | |
ADENIRAN | Development of a 2D animation package to improve the cognitive ability and retention of junior secondary school students in Geometry (JSS1 mathematics, Gurara, Niger State) | |
CITRA | NEGOTIATING ENGLISH TEACHING STRATEGIES FOR SPECIAL NEEDS STUDENTS DURING THE COVID-19 PANDEMIC (A CASE STUDY IN: SLB PKK BANDAR LAMPUNG) | |
Buddha et al. | Technology-Assisted Language Learning Systems: A Systematic Literature Review | |
Ghalia et al. | Empowering Education for Students with Learning Difficulties and Disabilities: A Faculty and Student Perspective on the Utilization of Assistive Technology at the Academic Arab College of Education in Haifa | |
KR20210107319A (en) | On-line learning system as per learner's level | |
Ceballos et al. | Emergent Bilinguals and Multimedia Instructional Design: Applying the Science of Learning Principles to Dual Language Instruction | |
Harbaugh | Authoritative discourse in the middle school mathematics classroom: A case study | |
Wall | Using the CEF to develop English courses for adults at the University of Gloucestershire | |
Al-Tamimi et al. | About Mada | |
Ostarhild | Careers using languages | |
Касимова | INTERCULTURAL COMMUNICATION AS A GOAL OF TEACHING A FOREIGN LANGUAGE |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) |