WO2016030848A2 - Method and system for augmented reality based learning, monitoring and evaluation - Google Patents

Method and system for augmented reality based learning, monitoring and evaluation

Info

Publication number
WO2016030848A2
Authority
WO
WIPO (PCT)
Prior art keywords
user
execution
assignment
providing
activity
Prior art date
2014-08-28
Application number
PCT/IB2015/056495
Other languages
French (fr)
Other versions
WO2016030848A3 (en)
Inventor
Uday MEHTA
Original Assignee
Mehta Uday
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2014-08-28
Filing date
2015-08-27
Publication date
2016-03-03
Application filed by Mehta Uday filed Critical Mehta Uday
Publication of WO2016030848A2 publication Critical patent/WO2016030848A2/en
Publication of WO2016030848A3 publication Critical patent/WO2016030848A3/en

Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 9/00 - Simulators for teaching or training purposes
    • G09B 5/00 - Electrically-operated educational appliances
    • G09B 19/00 - Teaching not covered by other main groups of this subclass
    • G09B 7/00 - Electrically-operated teaching apparatus or devices working with questions and answers


Abstract

A method and system for augmented reality based learning is disclosed which aims to improve both the quality of training and the distribution of knowledge in a distance learning situation by amalgamating most of the five sensory perceptions of a human user, thereby resulting in a holistic learning experience. Further disclosed are features of said method and system which allow monitoring and evaluation of the user accessing said system.

Description

Method and system for augmented reality based learning, monitoring and evaluation: NON-PROVISIONAL SPECIFICATION
Cross references to related applications: This non-provisional application derives benefit from previously filed provisional patent application No. 723/MUM/2014 dated 28 August 2014, the contents of which are incorporated herein by reference in their entirety.
Copyright notice: A portion of the disclosure of this patent document contains material subject to copyright protection. The copyright owner has no objection to the reproduction by anyone of the patent document or the patent disclosure as it appears in the official Patent Office records, but otherwise reserves all copyright rights whatsoever. The following notice applies: © 2015, Uday M. Mehta. All Rights Reserved
[001] Field of the present invention
[002] The present invention belongs to the field of information technology and relates generally to the constitution and implementation of cognitive audiovisual aids for augmenting as well as assessing cognitive learning among human users.
[003] Particularly, the present invention is an integrated augmented-reality based method and system for interactively relaying semantic indicia of information in response to multiple inputs sensed from a user and his surroundings. The present invention further applies to creation of an immediate virtual / real environment for a user capable of both imparting knowledge and assessing its assimilation by said user.
[004] Background of the present invention
[005] Recent developments in the field of electronic learning aids have made it possible to package more processing capability, more power and more functionality into ever smaller spaces. Combined with access to almost limitless information made available via the internet or stored on suitable media, this has led to the advent of specialized equipment capable of providing context specific to the situation and requirements of the user in either learning or chaperoning applications. The evolution of these devices has created special needs of design, construction, packaging, housing, operation and maintenance, whereby a more complete interactive environment is desired that is able to integrate and respond to multiple inputs from the end-user to enable an overall learning, understanding, implementation and knowledge-sharing environment.
[006] Systematic and able tutoring is extremely important for providing complete experiential learning to students. Unavailability of educational and training opportunities and learning aids, inaccessibility of classrooms and able tutors, time constraints and the like have been long-persisting problems. Today, availability of and access to trained faculty for imparting learning is becoming increasingly difficult. In contrast, access to computers, mobiles, tablets and like devices has become omnipresent, thereby bringing into existence an alternate world - the digital virtual world - wherein the user has an identity and sensory experiences different from those of his real-world existence. In today's world, one does many things either in the virtual world through digital devices such as computers, mobiles and tablets, or in the real world by actual execution of actions. Either the virtual or the real world alone misses out on integration of all five sensory perceptions and hence, a truly holistic learning experience.
[007] While both the virtual and real worlds have their own limitations, with the former lacking a 'live' experience and the latter bound within the rigid framework of real-world material laws, it would thus be desirable to achieve sensory integration between the real and virtual worlds so as to chart a holistic sensory experience for a user while preserving the 'live' appreciation of any event therein. Augmented reality is one of the most promising technologies in this respect, especially for imparting learning to users who can draw on the advantages of a virtually augmented learning process on their own, without the physical presence of any real faculty.
[008] Conventional augmented reality applications available today provide a live view of a real-world environment whose elements may be augmented by computer-generated sensory input such as video, sound, graphics or GPS data. With such applications, a view of reality may be modified by a computing device, and they can enhance a user's perception of reality and provide more information about the user's environment. However, the application of these systems is limited to providing visual overlays in a non-interactive manner. Hence, apart from an enhanced visual experience, these methodologies rarely provide for anything else in particular, such as imparting training or conveying precise situation-oriented context, thus preserving both the need and the opportunity for improvement in the immediate technology niche presented. Also, as comes naturally when envisioning such improved systems, multi-sensor awareness and integration need major consideration before any system can be called a plausible answer to the needs referred to hereinabove.
[009] As made clear from the foregoing narration, it would therefore be advantageous to have some way of bridging augmented reality technologies with interactive learning and assessment processes while remaining generic in applicability to a wide variety of devices, without mandating high costs, technical complexities or any major structural modifications in said devices, and while remaining within the ambit of operability of any layman.
[010] Quick progress in learning is observed when the student is interested in the subject matter and has frequent and timely interactions with an able instructor. However, this is not always possible, and the initial motivation the student brings to the task is soon dissipated. Furthermore, traditional reading tutorials are generally found lacking in creativity in respect of teaching aids and thus fail to generate interest and effective participation of the student in the learning process. Also, grasping power and other cognitive abilities differ from person to person. As a result, it is a pressing requirement that teaching methodologies be adapted to individual abilities and pace rather than following a common, generalist approach. Therefore, there exists a pressing need for the evolution of a truly capable platform for augmented reality based learning, monitoring and evaluation which gives due consideration to said aspects.
[011] It shall be understood that the background description provided hereinbefore is for the purpose of generally presenting the state of the art in the field of the present invention and, generally, the needs unaddressed. Work of the presently named inventor, specifically directed against the technical problems recited hereinabove and currently part of the public domain, is neither expressly nor impliedly admitted as prior art against the present disclosure.
[012] Description of related art
[013] Prior art bears scattered references to attempts at addressing the encumbrances mentioned hereinabove. For example, the closest references list use-cases wherein a user is expected to learn under a physical faculty and be assessed for assimilation of the learning had under supervision of said faculty. Alternatively, in the absence of a physical faculty and/ or supervisor, the user is expected to follow, execute and self-assess according to an electronic audio-visual discourse. Both modalities have their own drawbacks, with the former restricting the learning opportunity to the availability of the faculty and premises, whereas the latter provides repeatability but precludes the interactions critical for learning new concepts and validating their assimilation by the end-user. Either modality is thus unable to reliably translate the knowledge content into implementable learning, which is far more important in the real world than mere consumption of learning media.
[014] Prior art, to the extent presently surveyed and discussed hereinabove, does not list a single effective solution to the encumbrances mentioned above, thereby preserving the need to invent for the present inventor who, as a result of his targeted research, has come up with novel solutions for addressing at least all major needs of the art once and for all. The following brief description sets forth an illustrative yet preferred embodiment of his present invention.
[015] Objectives of the present invention
[016] The present invention is identified in fulfillment of the following objectives:
[017] It is a primary objective of the present invention to provide an integrated augmented-reality based method and system for interactively relaying semantic indicia of information in response to multiple inputs sensed from a user and his surroundings.
[018] Yet another objective of the present invention, in addition to the aforementioned objective(s), is that the method and system so provided is further capable of ably integrating most of the five senses of a human being to thereby result in a more robust learning experience.
[019] Yet another objective of the present invention, in addition to the aforementioned objective(s), is that the method and system so provided is further capable of ably creating an immediate virtual as well as real environment for a user, capable of both imparting knowledge and assessing its assimilation by said user.
[020] Yet another objective of the present invention, in addition to the aforementioned objective(s), is that the method and system so provided is further capable of implementation round the clock without requiring the presence of human faculty.
[021] Yet another objective of the present invention, in addition to the aforementioned objective(s), is that the method and system so provided is further capable of seamless implementation for a user base which may have geographic, time, language, familiarity with the subject, initial skill level, grasping pace, accessibility and physical ability differences.
[022] Yet another objective of the present invention, in addition to the aforementioned objective(s), is that the method and system so provided is further capable of optimizing the content delivery depending on the requirements of a user.
[023] Yet another objective of the present invention, in addition to the aforementioned objective(s), is that implementation of the method and system so provided obviates the necessity of physical aggregation of users.
[024] Yet another objective of the present invention, in addition to the aforementioned objective(s), is that implementation of the method and system so provided obviates the mandate for the pace of information dissemination being dependent on group progress. This way, persons with certain disabilities can learn at their own pace.
[025] Yet another objective of the present invention, in addition to the aforementioned objective(s), is that implementation of the method and system so provided is compatible with all state-of-the-art computing equipment.
[026] Yet another objective of the present invention, in addition to the aforementioned objective(s), is that the implementation so arranged does not involve undue capital expenditure or technical complexity for the end-user.
[027] These and further objectives, features and advantages of the present invention shall be apparent to the reader from the detailed description and accompanying figures to follow, the attainment of which is particularly recited in the appended claims submitted in this document.
[028] Brief description of drawings
[029] The present invention is hereinafter explained in further detail with reference to certain drawings, in which numerical indexing is incorporated for common reference in the narrative description. Among the drawings:
[030] Figure 1 is a schematic to illustrate the implementation environment of the present invention.
[031] Figures 2 to 11 are screenshots illustrating key steps in performance of the present invention.
[032] Summary of the present invention
[033] The disclosure herein is directed towards providing an augmented reality-based method and system that enables a cognitive learning / instructional / assessment environment within the three-dimensional space of a user by additionally integrating subsystems for identification and tracking of peripheral objects, for the purpose of retrieving and presenting situation-specific context to a human user, thereby correlating real-world and virtual-world sensory perceptions and resulting in a better, holistic learning approach.
[034] Detailed description of the present invention
[035] Principally, the general purpose of the present invention is to assess the deficiencies and shortcomings inherent in known systems comprising the state of the art and to develop novel, inventive and industrially applicable systems incorporating all available advantages of the known art and none of its disadvantages.
[036] The present inventor has recognized the benefits of providing an augmented reality-based system that may be made capable of automatically identifying and tracking features within the three-dimensional environment of a user for the purpose of projecting correlated information with the purpose of imparting knowledge or instruction. Extension of the same approach to further assessing the assimilation of the knowledge imparted, or suggesting corrective actions in response to errors on the part of the user, forms the underlying principles of the present invention.
[037] According to principles of the present invention, integration between the real and virtual worlds is achieved by integrating three different technologies, namely application programming with a back-end database, embedded technology and animation. Integration of the available sensory modalities such as light, heat, movement, RFID, sound, image recognition and so on together creates interactive modules for instructional, learning, testing, commenting and/or suggestive / support purposes. Accordingly, the present invention proposes a method and system that helps to train, acclimatize and evaluate a user in response to real-world situations.
[038] A fact underlying the present invention is that the use of multiple sensors not only opens new ways of interaction but may also lead to a paradigm shift in interactions between humans, computers and artificial intelligence (AI) resources. The use of multiple sensors, and logic for fusion among their inputs, plays an important role in deriving semantic relations for precisely relaying accurate information content meaningful to the situational needs of the said end-user. According to principles of the present invention, integration between the real and virtual worlds is achieved by combining the available sensory modalities such as light, heat, movement, RFID, sound, image recognition and so on in a device which operates interactively in modes ranging from instructional, learning, testing and commenting to suggestive support, to thus query for and present related information context.
[039] From a high-level vantage point, the present invention comprises two primary modules - Learning and Evaluation - of which the former is further divided into three sessions: a) group learning, where the user accesses an interactive audio-visual device for sharing basic knowledge of a subject matter;
b) adaptive learning in a real-world environment, where the user utilizes the augmented reality-guided system for understanding the processes to be performed using all five human sensory inputs. Identifier tags and sensors are integrated to enable this session; and
c) self-practice, where the user may iterate the skill imparted with a view to mastering the same. The system assumes a qualitative / quantitative supervisory role and validates the process sequence and the result obtained, and/or suggests corrections / modifications for improvement.
[040] The evaluation module of the present invention forms a qualifying environment for the user after having passed the aforementioned sessions. Here, the system monitors the time, motion and performance for each step of a user in the background and qualifies the same as per pre-defined standards / benchmarks. Errors are logged in the system and reported along with a summary of performance at the end of the evaluation session.
[041] Reference is now made to certain examples which showcase the manner in which principles of the present invention may be employed. These examples are illustrative and not restrictive of the ambit of the present invention.
[042] Example 1: Application environment and system organization
[043] The application environment of the present invention comprises a plurality of users having access to a computing terminal capable of rendering audio-visual content. Computing terminal here would mean and refer to personal computers, smart phones, tablets and any compatible equipment capable of rendering audio-visual content. System implementation of the present invention is intended alternatively as a web-based application or as downloadable software / an app executable on the aforesaid computing terminal. Users may register (online / offline) for use of this platform, upon which each user is given a login ID and password access to the system. As such, hosting of the system of the present invention is chosen either among the local memory of the computing equipment or, alternatively, as a cloud-based service. Learning content comprising a comprehensive library of audio-visual material is hosted in this environment and may be retrieved as per the command of the user. Said content may be catalogued by category, sub-category, subject and language, which define its selective retrieval depending on the requirements of the user.
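By way of illustration only (the patent does not prescribe any particular storage or query mechanism), the selective retrieval of catalogued content described above could be sketched as follows; the class name, field names and sample entries are assumptions made for this example.

```python
# Illustrative sketch only -- an assumed, minimal in-memory catalogue keyed by
# the fields named in the description (category, sub-category, subject, language).
from dataclasses import dataclass

@dataclass
class ContentItem:
    title: str
    category: str
    sub_category: str
    subject: str
    language: str
    file_path: str

def find_content(library, **filters):
    """Return items whose catalogue fields match all the given filters."""
    return [item for item in library
            if all(getattr(item, key) == value for key, value in filters.items())]

library = [
    ContentItem("Brake cylinder assembly", "Automotive", "Assembly",
                "Brakes", "en", "media/brake_en.mp4"),
    ContentItem("Brake cylinder assembly", "Automotive", "Assembly",
                "Brakes", "fr", "media/brake_fr.mp4"),
]

matches = find_content(library, subject="Brakes", language="en")
print([item.file_path for item in matches])   # -> ['media/brake_en.mp4']
```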
[044] Furthermore, the system organization of the present invention involves integration of various peripheral sensors for interfacing the real and virtual environments of the user. Said peripheral sensors include those known for heat signatures, smell, RFID, sound command, their equivalents and their combinations, which may be advantageously integrated for qualitative as well as quantitative linkage of the real and virtual environments of the user.
[045] In an isolated embodiment, the system of the present invention is implemented on a laptop and an RFID reader is connected to the USB port of the computer, allowing the user in charge of the laptop to thus have a correlative experience between the real and virtual worlds while assimilating the selected content.
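A minimal sketch of how such a USB-connected RFID reader might feed tag reads into the learning application. It assumes (as is common for low-cost readers, though not stated in the patent) that the reader enumerates as a serial port and emits one tag ID per line; the port name, baud rate and the use of the pyserial library are all assumptions.

```python
# Illustrative sketch only -- not part of the patent disclosure.
import serial  # pyserial

PORT = "COM3"        # hypothetical port; e.g. "/dev/ttyUSB0" on Linux
BAUD_RATE = 9600

def read_tags(port: str = PORT, baud: int = BAUD_RATE):
    """Yield tag IDs as the user scans real-world components."""
    with serial.Serial(port, baud, timeout=1) as reader:
        while True:
            raw = reader.readline().decode("ascii", errors="ignore").strip()
            if raw:                      # ignore timeouts / empty reads
                yield raw

if __name__ == "__main__":
    for tag_id in read_tags():
        # The learning application would map tag_id to a component
        # and advance the on-screen content accordingly.
        print("Scanned tag:", tag_id)
```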
[046] Example 2: User experience and learning content
[047] The present invention provides augmented reality-based learning, monitoring and evaluation session(s) to a user accessing said system. These capabilities are enabled by amalgamating experiences of the real world and the virtual world, wherein all sensory perceptions - sight, hearing, taste, touch and smell - are channelled via the peripheral sensors into the overall learning experience had by the user in relation to the learning content selected.
[048] A novel feature of the present invention is that the audio, video and text of the learning content are synchronized and programmed to render in a predetermined manner - step-wise, for example - in response to a stimulus from the user. Stimuli may include completion of a step, selection of a component and so on by the user.
[049] Another novel feature of the present invention is that the user is connected to the virtual world through peripheral sensors and does not necessarily have to touch the computer to proceed with the learning experience. Stimuli are accordingly read by the system to progress the learning session. This experience is much more robust: since the user is continually relating the virtual and real worlds, he/ she benefits from the real feel of the article or step as viewed in parallel on the screen of the computing device. This builds a stronger memory of the experience, and hence better holistic learning.
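The stimulus-driven, step-wise rendering described in this example can be sketched as a simple state machine that advances only when the expected stimulus (a scanned tag or a key-press token) is observed. This is an assumed design, not the patent's actual implementation, and all names are illustrative.

```python
# Illustrative sketch of stimulus-driven content progression (assumed design).
# Each learning step declares the stimulus that completes it; the session
# advances only when that stimulus is observed, so the user need not touch
# the computer.
from dataclasses import dataclass
from typing import Iterable

@dataclass
class Step:
    description: str          # text shown / narrated to the user
    expected_stimulus: str    # e.g. an RFID tag ID or a key-press token

def run_session(steps: list[Step], stimuli: Iterable[str]) -> bool:
    """Render steps one by one, advancing on the expected stimulus."""
    index = 0
    print("STEP 1:", steps[0].description)
    for stimulus in stimuli:
        if stimulus == steps[index].expected_stimulus:
            index += 1
            if index == len(steps):
                print("Session complete.")
                return True
            print(f"STEP {index + 1}:", steps[index].description)
        else:
            print("Unexpected input, current step unchanged:", stimulus)
    return False

# Example: two steps completed by scanning two hypothetical RFID tags.
demo_steps = [Step("Pick up the base plate", "TAG-001"),
              Step("Fit the cover", "TAG-002")]
run_session(demo_steps, iter(["TAG-001", "TAG-002"]))
```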
[050] Example 3: Use-case - Assembly protocol for a device
[051] The present invention finds applicability in all situations demanding a learning / imbibing / imitation scenario. For example, referring to Figure 1, it can be seen that when a user opts to learn a protocol for the assembly of a device, the system would, in response, selectively retrieve a corresponding audio-visual file from amongst the library for rendition on the user's computing terminal. Said audio-visual file contains application-specific information such as the bill of materials and tooling, as well as chronometric parameters for completion of the assignment. Once logged in, the user is presented with an interface screen that lists out all the components as well as the tools required to be present on the table prior to starting the assembly. Here, the learning content relayed may comprise step-wise instructions / animation for assembly of the device and additional information on each component / step as may be required by the user. Each constituent component / step is qualified by tagging (such as RFID) or user input (such as pressing a key) so that the system may effectively track and proceed with rendition of the learning content as per the progress of the user in the real world. Thus, the user benefits from the live, interactive nature of the simulated live-workshop learning experience so enabled by the present invention.
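Purely as an illustrative sketch of the assignment content just described (bill of materials, tooling, per-step tags and chronometric parameters), the data could be organized along the following lines; the patent does not prescribe this structure, and every class, field and file name here is an assumption.

```python
# Illustrative data model for one assignment (assumed structure).
from dataclasses import dataclass, field

@dataclass
class Activity:
    serial_no: int
    description: str
    tag_id: str                 # RFID tag (or key-press token) qualifying the step
    estimated_seconds: int      # chronometric parameter for the step
    media_file: str             # audio-visual aid rendered for this step

@dataclass
class Assignment:
    title: str
    bill_of_materials: list[str] = field(default_factory=list)
    tooling: list[str] = field(default_factory=list)
    activities: list[Activity] = field(default_factory=list)

# Hypothetical example assignment:
brake_assembly = Assignment(
    title="Assemble brake cylinder",
    bill_of_materials=["Cylinder body", "Piston", "Seal kit"],
    tooling=["Torque wrench", "Circlip pliers"],
    activities=[
        Activity(1, "Insert piston into cylinder body", "TAG-101", 60, "step1.mp4"),
        Activity(2, "Fit seal kit",                     "TAG-102", 90, "step2.mp4"),
    ],
)
```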
[052] Example 4: Monitoring and assessment of learning
[053] Monitoring and assessment of learning are crucial features of the present invention. In the use-case of Example 3 above, whenever the user makes a qualitative / quantitative / chronological mistake, the system shall infer accordingly from the reference qualitative / quantitative / chronological metric in the audio-visual content being rendered, and shall automatically prompt the user for the necessary correction. Such a monitoring feature is, in one embodiment, intended to be a selective feature when the system is being run in practice mode. In testing mode, however, no alert / guidance is given to the user; instead, the system is programmed to halt should the user commit any error. The system of the present invention is programmed for assessment of performance by a user, by generation of reports upon requisition or upon completion of the assignment. Thus, if a user follows the correct sequence, the system shall generate a report which will highlight the variance between the estimated time to perform the instructed step(s) and the actual time taken by the participant to perform said step(s). The system shall also monitor how many errors the user has made and how long it took to perform each individual step.

Figures 2 to 11 are screenshots illustrating key steps in performance of the present invention, from which the logic of implementation of the present invention may be readily appreciated. Accordingly, the user is presented with a subscription / login screen / page as shown in Figure 2, whereby the user may gain access by signing into the system or register as a first-instance, single-time activity. Next, the user is taken to a second screen / page as shown in Figure 3, whereby the user is presented with a menu to select the assignment / task / exam to be undertaken. Upon selection from this menu, the user is taken to another screen / page as shown in Figure 4, whereby the user is presented with a field list for selection of seat number and language for undertaking said assignment. Other data fields on this screen are populated with assignment reference details, including assignment date and status of the assignment, to create the virtual workshop learning environment. On completing the actions relating to this screen / page, the user is prompted with another screen / page as shown in Figure 4, whereby the user is presented with the bill of materials and tooling necessary as preparation for the assignment to be undertaken. The type, name, specification and number of each item is listed for the information of the user, who may accordingly verify availability and select to continue further. On completing the actions relating to this screen / page, the user is prompted with another screen / page as shown in Figure 5, whereby the user is provided with a list of sequential actions to be undertaken for completion of the task. A table populated with the activity serial number, RFID ID / read count, description of the activity to be performed, and estimated time of completion is provided, each entry along with a link to a stored audio-visual aid file for rendition should the user wish to follow it for a clear understanding of the activity concerned. Referring to Figures 6 and 7, it can be appreciated that the system monitors correct performance of the user, in that it returns a tick mark / correct symbol if the desired action and sequence are duly followed.
Otherwise, the user is prompted with a cross mark / incorrect symbol to indicate that the desired action and sequence are not duly followed. Corrective measures are presented upon the latter occurrence, to allow the user to learn how the desired action and sequence are to be duly followed. After correctly performing the last activity of the selected assignment, the user is prompted whether he would like to view a report of the assignment undertaken, as shown in Figure 9; if the user so confirms, the system directs the user to a report page as shown in Figure 10, whereby the system provides analytics support to generate reports alternatively user-wise, activity-wise and as a summary, to thereby present the success of the practice session / adaptive learning or exam undertaken. The exam environment follows generally the same execution logic, with the difference that no corrective measures are shown to the user upon any mistake / error in performing the assignment. The reports generated also include a chronometric assessment wherein a comparative account of the actual time taken vis-a-vis the prescribed estimated time for each activity is presented for qualitative assessment of the learning had by the user.
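A hedged sketch of the monitoring and reporting logic described in this example, building on the Assignment/Activity structure sketched under Example 3 and the read_tags() generator sketched under Example 1: in practice mode an error triggers a corrective prompt, in exam mode the session halts, and the report records the variance between actual and estimated time per activity. This is an assumed rendering of the described behaviour, not the patent's own code.

```python
# Illustrative sketch only (assumed logic).
import time

def run_monitored(assignment, stimuli, mode="practice"):
    """Monitor execution of an assignment and build a per-activity report."""
    report = []
    for activity in assignment.activities:
        started = time.monotonic()
        observed = next(stimuli, None)          # with a live sensor stream this waits for the user
        elapsed = time.monotonic() - started    # actual time taken for the step
        correct = observed == activity.tag_id
        report.append({
            "activity": activity.serial_no,
            "correct": correct,
            "actual_s": round(elapsed, 1),
            "estimated_s": activity.estimated_seconds,
            "variance_s": round(elapsed - activity.estimated_seconds, 1),
        })
        if correct:
            print(f"Activity {activity.serial_no}: tick (correct)")
        elif mode == "practice":
            print(f"Activity {activity.serial_no}: cross (incorrect)")
            print("Corrective aid:", activity.media_file)
            # A fuller implementation would wait for correct execution before advancing.
        else:
            print(f"Activity {activity.serial_no}: cross - halting (exam mode, no guidance)")
            break
    return report

# Hypothetical usage, combining the earlier sketches:
# report = run_monitored(brake_assembly, read_tags(), mode="practice")
```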
[055] From the disclosures of this document, the utility and applicability of the present invention can be readily appreciated for all strata of users irrespective of their levels of literacy, language, technical competency and/ or familiarity. It shall be understood here that the present invention is not limited in application to teaching aids and that its principles may be applied in essence to other application environments, irrespective of adaptations to the specific process and hardware specifications involved. The following examples showcase a few of these intended embodiments of the present invention.
[056] Example 5: Use-case - Marketing
[057] The present invention finds applicability in non-learning implementation environments, for example, marketing. Say in a perfume shop, where all items for sale are RFID tagged, the system shall note a user's selection from its corresponding RFID tag and proceed to execute a programmed function, such as displaying information on that item or its use on a screen, sending an SMS to a concerned party and/ or proceeding with billing for the item(s) involved. This way, the user can correlate three inputs from the virtual world, that is, audio, video and text, and also two inputs from the real world, that is, touch and smell, thereby making it a robust user experience.
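The tag-triggered "programmed function" behaviour in this marketing use-case can be sketched as a simple dispatch from tag ID to a list of actions; the catalogue contents are invented for the example, and the SMS and billing functions are stubs rather than calls to any real service.

```python
# Illustrative tag-to-action dispatch for the marketing use-case (assumed design).
def show_item_info(item):
    print(f"Displaying details for {item['name']} on screen")

def notify_staff(item):
    print(f"(stub) SMS to sales staff: customer is sampling {item['name']}")

def add_to_bill(item):
    print(f"(stub) Added {item['name']} at {item['price']} to the bill")

CATALOGUE = {   # hypothetical RFID tag -> item mapping
    "TAG-P01": {"name": "Perfume A", "price": 49.0},
    "TAG-P02": {"name": "Perfume B", "price": 65.0},
}

ACTIONS = [show_item_info, notify_staff, add_to_bill]

def on_tag_scanned(tag_id: str) -> None:
    item = CATALOGUE.get(tag_id)
    if item is None:
        print("Unknown tag:", tag_id)
        return
    for action in ACTIONS:      # execute the programmed functions in order
        action(item)

on_tag_scanned("TAG-P01")
```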
[058] Example 6: Use-case - Vocal training / training of specially-abled persons
[059] The present invention finds applicability in vocal training without a human instructor / faculty. Accordingly, the learning content may include audio-visual and/or text content, which the user may access for comprehension. The pace of information delivery may be altered by the user to suit his/ her grasping capacity or availability of time while maintaining ease of operation and resulting in a real-time learning experience. This feature is especially beneficial for specially-abled persons, wherein each individual can imbibe the same content as per his/ her convenience and learning speed. Localization of content, including automatic translation of languages as well as inclusion of further detailed files which may be accessed only upon requirement, is intended to be covered by the ambit of the present invention. Depending on the familiarity of a user, certain sections may be skipped or, conversely, explained in more detail for building a thorough learning experience. Thus, the present invention is capable of seamless implementation for a user base which may have geographic, time, language, familiarity with the subject, initial skill level, grasping pace, accessibility and physical ability differences.
[060] Example 7: System requirements
[061] As mentioned before, the present invention is intended for implementation across a variety of devices capable of rendering multimedia content. However, for the sake of enablement, the following system requirements are presently prescribed for performance of the preferred embodiment of the present invention:
Hardware:
1) Server for deployment of the system of the present invention should meet the following specifications:
• RAM: minimum 4 GB; 8 GB or more recommended
• Processor: Intel Dual Core, i3, i5 or i7
• Hard disk: 500 GB
2) Client computer:
• RAM: minimum 1 GB; 2 GB or more recommended
• Processor: Intel Dual Core or higher
3) 100 Mbps LAN
4) Internet facility on the server machine, as and when required
5) RFID reader and tags, as and when required
Software:
1) Server:
• Server OS: Windows 7 / Windows 8 / Windows Server 2008 (preferred)
• Internet Information Services (integrated component of the Windows OS)
• SQL Server 2008 R2 (Express Edition, freeware)
• .NET Framework 4.5 (freeware)
• Java runtime support (freeware)
2) Client:
• OS: Windows XP / Windows 7 / Windows 8
• Browser: Firefox / IE / Google Chrome
[062] As will be seen from the above, the present invention is capable of implementation as a web-based application, a stand-alone software application, a mobile app or other modes of implementation, without deviating from the essence of performance described in the foregoing narration.
[063] Further applicability of the present invention is limitless, with a few use-cases being how to assemble a mobile phone, how to assemble a brake cylinder or a clutch plate, or how to conduct a practical in a chemistry or physics laboratory. The world stands to benefit from the present invention as it makes it very easy for users to learn / practice a task on their own by doing it themselves rather than watching others do it, thereby leading to practically trained users who are better equipped to handle a hands-on situation anywhere.
[064] As will therefore be realized, the present invention is capable of various other embodiments, and its several components and related details are capable of various alterations, all without departing from the basic concept of the present invention. Accordingly, the foregoing description is to be regarded as illustrative in nature and not as restrictive in any form whatsoever. Modifications and variations of the system and apparatus described herein will be obvious to those skilled in the art. Such modifications and variations are intended to come within the ambit of the present invention, which is limited only by the appended claims.

Claims

An augmented reality based method for providing an experiential learning environment for at least one human user, comprising:
Providing an electronically accessible platform having a plurality of multimedia files, of which each corresponds to an assignment available for learning by the at least one human user;
Providing the at least one human user an interface coupled with said electronic platform for allowing the user to execute a multimedia file corresponding to the assignment to be undertaken;
Providing the at least one human user, initially as part of execution of the multimedia file chosen, a bill of materials and tooling as preparations necessary for execution of the selected assignment;
Upon verification of the preparations necessary for execution of the selected assignment and as further part of execution of the multimedia file chosen, providing the at least one human user the details of each activity to be undertaken for execution of the selected assignment; and
Upon correct execution of all activities indicated, providing the at least one human user the confirmation of successful completion of the selected assignment being experientially learnt by said human user.
The augmented reality based method for providing an experiential learning environment for human users as claimed in claim 1, wherein each of the said plurality of multimedia files contains synchronized textual and audio-visual descriptions of activities necessary to be undertaken for execution of the corresponding assignment and furthermore a list of components each uniquely identified by corresponding tags.
The augmented reality based method for providing an experiential learning environment for human users as claimed in claim 1, wherein the step of providing the at least one human user the details of each activity to be undertaken for execution of the selected assignment is progressive, with details of sequential activities being presented only upon successful completion of the preceding activity, thereby amalgamating real and virtual experiences of the user who can therefore learn with best inclusion of all his sensory perceptions.
The augmented reality based method for providing an experiential learning environment for human users as claimed in claim 2, wherein access to said textual and audio-visual descriptions of activities to be undertaken is optional and left to the discretion of the user depending on his familiarity and speed of grasping of the subject matter so presented.
The augmented reality based method for providing an experiential learning environment for human users as claimed in claim 1, wherein each activity is identified uniquely by the component involved by means chosen among visual and non-visual tagging.
The augmented reality based method for providing an experiential learning environment for human users as claimed in claim 1, wherein said method is further capable of monitoring progress of the user while executing an assignment by:
returning a correct indication, should the sequence and quantum of any activity, and/ or component selected match analogous values stored in the corresponding multimedia file; and
returning an incorrect indication, should the sequence and quantum of any activity, and/ or component selected not match analogous values stored in the corresponding multimedia file and halting instructions on further activities till correct execution of the activity and/ or selection of component involved.
The augmented reality based method for providing an experiential learning environment for human users as claimed in claim 6, wherein the step of returning an incorrect indication is accompanied:
By prompting the user with textual and audio-visual descriptions of corrective measures to be undertaken for due execution of the activity as stored in the corresponding multimedia file, should the user select a practice mode for experiential learning of the concerned assignment; and
By not prompting the user with textual and audio-visual descriptions of corrective measures to be undertaken for due execution of the activity as stored in the corresponding multimedia file and only halting instructions on further activities, should the user opt for testing his learning on execution of the experientially learnt assignment.
The augmented reality based method for providing an experiential learning environment for human users as claimed in claim 1, wherein said method is further capable of evaluating execution of an assignment by recording instances and time of execution for each activity and generating reports for:
User-specific execution of assignments; and
Assignment-specific execution by users.
An augmented reality based system for implementing the method of claim 1, comprising:
A computer readable code which, when executed on a computing device chosen among a personal computer, tablet and mobile equipment, manifests the learning, monitoring and evaluation environment for at least one human user;
a tag-reader chosen among off-the-shelf visual and non-visual sensors, their equivalents and their combinations, said reader being communicative with the computing device for identifying each component and thereby the activity associated with said component; and
at least a plurality of tags chosen among RFID, QR code, bar code, their equivalents and their combinations, which, when affixed to the body of components constituting the bill of materials and tooling, help the system to identify the correct articles utilized by the user while executing any activity of the selected assignment.
The augmented reality based system as claimed in claim 9, being capable of implementation alternatively as a web-based application, an offline software application and a mobile app.
PCT/IB2015/056495 2014-08-28 2015-08-27 Method and system for augmented reality based learning, monitoring and evaluation WO2016030848A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN723/MUM/2014 2014-08-28
IN723MU2014 2014-08-28

Publications (2)

Publication Number Publication Date
WO2016030848A2 true WO2016030848A2 (en) 2016-03-03
WO2016030848A3 WO2016030848A3 (en) 2016-04-28

Family

ID=55400771

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2015/056495 WO2016030848A2 (en) 2014-08-28 2015-08-27 Method and system for augmented reality based learning, monitoring and evaluation

Country Status (1)

Country Link
WO (1) WO2016030848A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020198824A1 (en) * 2019-04-04 2020-10-08 De Araujo Ana Sara Domingos Laboratory teaching equipment
US11132850B1 (en) 2020-03-24 2021-09-28 International Business Machines Corporation Augmented reality diagnostic interface

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103366610B (en) * 2013-07-03 2015-07-22 央数文化(上海)股份有限公司 Augmented-reality-based three-dimensional interactive learning system and method


Also Published As

Publication number Publication date
WO2016030848A3 (en) 2016-04-28

Similar Documents

Publication Publication Date Title
Kurilovas Evaluation of quality and personalisation of VR/AR/MR learning systems
Fletcher-Watson A targeted review of computer-assisted learning for people with autism spectrum disorder: Towards a consistent methodology
De Koning et al. Gestures in instructional animations: A helping hand to understanding non‐human movements?
Weldy et al. Training staff to implement brief stimulus preference assessments
De Back et al. Learning in immersed collaborative virtual environments: design and implementation
Munnerley et al. Confronting an augmented reality
Chen et al. Facilitating EFL learners’ active behaviors in speaking: a progressive question prompt-based peer-tutoring approach with VR contexts
JP2011504612A (en) Education method and system including at least one user interface
Srinivasa et al. Virtual reality and its role in improving student knowledge, self-efficacy, and attitude in the materials testing laboratory
Tanner et al. Augmenting a child’s reality: using educational tablet technology
Lower et al. Effects of a tier 3 self-management intervention implemented with and without treatment integrity
Daling et al. Effects of augmented reality-, virtual reality-, and mixed reality–based training on objective performance measures and subjective evaluations in manual assembly tasks: a scoping review
Twyman Emerging technologies and behavioural cusps: A new era for behaviour analysis?
Clements et al. Evaluation of laparoscopic curricula in American urology residency training: a 5-year update
August et al. Artificial intelligence and machine learning: an instructor’s exoskeleton in the future of education
Kamarudin et al. Students’ behavioural intention towards e-learning practices through augmented reality app during COVID-19 pandemic in Saudi Arabia
Awotunde et al. The influence of industry 4.0 and 5.0 for distance learning education in times of pandemic for a modern society
Abbas et al. Ready, trainer… one*! discovering the entanglement of adaptive learning with virtual reality in industrial training: a case study
WO2016030848A2 (en) Method and system for augmented reality based learning, monitoring and evaluation
Rapchak Is your tutorial pretty or pretty useless? Creating effective tutorials with the principles of multimedia learning
Yen et al. Systematic design an intelligent simulation training system: from learn-memorize perspective
Mubin et al. Extended reality: How they incorporated for asd intervention
Zhang et al. Integrating student-made screencasts into computer-aided design education
Bernstein et al. Analysis of instructional support elements for an online, educational simulation on active listening for women graduate students in science and engineering
Almuaqel Virtual Reality and Inclusive Learning of Individuals With Intellectual and Developmental Disabilities: A Review of Findings and the Path Ahead

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15836267

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15836267

Country of ref document: EP

Kind code of ref document: A2